1 Introduction

In this article we prove that if \(C_{N}\) is the cover time of the discrete torus of side-length \(N\) and dimension \(d\ge 3\) then \(C_{N}/(g(0)N^{d})-\log N^{d}\) (where \(g(\cdot )\) is the \(\mathbb{Z }^{d}\) Green function) tends in law to the Gumbel distribution as \(N\rightarrow \infty \), or in other words, that the fluctuations of the cover time are governed by the Gumbel distribution. This result was conjectured e.g. in [3, Chapter 7, p. 23]. We also construct a new stronger coupling of random walk in the torus and the model of random interlacements, thus improving a result from [25]. The coupling is of independent interest and is also used as a tool to prove the cover time result.

Cover times of finite graphs by simple random walk have been studied extensively, see e.g. [13, 8, 9, 15]. One important case is the cover time of the discrete torus \(\mathbb{T }_{N}=(\mathbb{Z }/N\mathbb{Z })^{d}\). Let \(P\) be the canonical law of continuous time simple random walk in this graph, starting from the uniform distribution, and let the canonical process be denoted by \((Y_{t})_{t\ge 0}\). The cover time of the discrete torus is the first time \(Y_{\cdot }\) has visited every vertex of the graph, or in other words

$$\begin{aligned} C_{N}=\max _{x\in \mathbb{T }_{N}}H_{x}, \end{aligned}$$
(1.1)

where \(H_{x}\) denotes the entrance time of the vertex \(x\in \mathbb{T }_{N}\). For \(d\ge 3\) it is known that \(E[C_{N}]\sim g(0)N^{d}\log N^{d}\), as \(N\rightarrow \infty \), and that \(C_N\) concentrates in the sense that \(\frac{C_{N}}{g(0)N^{d}\log N^{d}}\rightarrow 1\) in probability, as \(N\rightarrow \infty \). However, it was previously only conjectured that the finer scaling result

$$\begin{aligned} \frac{C_{N}}{g(0)N^{d}} -\log N^{d}\overset{{\mathrm{law}}}{\longrightarrow }G\quad \text{as } N\rightarrow \infty ,\quad \text{for } d\ge 3, \end{aligned}$$
(1.2)

holds, where \(G\) denotes the standard Gumbel distribution, with cumulative distribution function \(F(z)\!=\!e^{-e^{-z}}\) (see e.g. Chapter 7, pp. 22–23, [3]). In this article we prove (1.2). In fact our result, Theorem 3.1, proves more, namely that the (appropriately defined) cover time of any “large” subset of \(\mathbb{T }_{N}\) satisfies a similar relation. (For \(d\!=\!1,2\), the asymptotic behaviour of \(E[C_{N}]\) is different; see [9] for \(d\!=\!2\); the case \(d\!=\!1\) is an exercise in classical probability theory. The concentration result \(C_{N}/E[C_{N}]\!\rightarrow \!1\) still holds for \(d\!=\!2\), but the nature of the fluctuations is unknown. For \(d\!=\!1\) one can show that \(C_{N}/N^{2}\) converges in law to the time needed by Brownian motion to cover \(\mathbb{R }/\mathbb{Z }\).)
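Although no simulations enter our proofs, the convergence (1.2) is easy to observe numerically. The following is a minimal simulation sketch (ours, for illustration only); the numerical value \(g(0)\approx 1.516\) used for the recentring is the approximate value of the \(\mathbb{Z }^{3}\) Green function at the origin, which we assume here.

```python
# Minimal simulation sketch of (1.2) for d = 3 (illustration only).
import numpy as np

rng = np.random.default_rng(0)
G0 = 1.5164  # assumed approximate value of g(0) for Z^3

def cover_time(N, d=3):
    """Sample the cover time of (Z/NZ)^d for continuous-time simple random
    walk (jump rate 1) started from the uniform distribution."""
    x = rng.integers(0, N, size=d)               # uniform starting vertex
    visited = np.zeros((N,) * d, dtype=bool)
    visited[tuple(x)] = True
    remaining, t = N**d - 1, 0.0
    while remaining:
        t += rng.exponential(1.0)                # Exp(1) holding time
        i = rng.integers(d)                      # coordinate of the jump
        x[i] = (x[i] + rng.choice((-1, 1))) % N  # nearest-neighbour step
        if not visited[tuple(x)]:
            visited[tuple(x)] = True
            remaining -= 1
    return t

N = 8
fluct = np.array([cover_time(N) for _ in range(200)])
fluct = fluct / (G0 * N**3) - np.log(N**3)
# The standard Gumbel distribution has mean equal to Euler's constant.
print("empirical mean %.3f, Gumbel mean 0.577" % fluct.mean())
```

The recentred sample mean should be compared with Euler's constant \(\approx 0.5772\), the mean of the standard Gumbel distribution (finite-size corrections at such small \(N\) may of course be noticeable).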

Our second main result is a coupling of random walk in the discrete torus and random interlacements, which we now introduce. To do so we very briefly describe the model of random interlacements (see Sect. 2 for more details). It was introduced in [23] and helps to understand “the local picture” left by the trace of a simple random walk in (among other graphs) the discrete torus when \(d\ge 3\). The random interlacement, roughly speaking, arises from a Poisson cloud of doubly infinite nearest-neighbour paths modulo time-shift in \(\mathbb{Z }^{d},d\ge 3\), with a parameter \(u\ge 0\) that multiplies the intensity. The trace (vertices visited) of all the paths at some intensity \(u\) is a random subset of \(\mathbb{Z }^{d}\) denoted by \(\mathcal{I }^{u}\). Together the \(\mathcal{I }^{u}\), for \(u\ge 0\), form an increasing family \((\mathcal{I }^{u})_{u\ge 0}\) of random subsets of \(\mathbb{Z }^{d}\). We call the family \((\mathcal{I }^{u})_{u\ge 0}\) a random interlacement and for fixed \(u\) we call the random set \(\mathcal{I }^{u}\) a random interlacement at level \(u\). Random interlacements are intimately related to random walk in the torus; intuitively speaking, the trace of \(Y_{\cdot }\) in a “local box” of the torus, when run up to time \(uN^{d}\), in some sense “looks like” \(\mathcal{I }^{u}\) intersected with a box (see [23, 26]).

Our coupling result is one way to make the previous sentence precise and can be formulated roughly as follows. Let \(Y(0,t)\) denote the trace of \(Y_{\cdot }\) up to time \(t\) (i.e. the set of vertices visited up to time \(t\)). For \(n\ge 1\) pick vertices \(x_{1},\ldots ,x_{n}\in \mathbb{T }_{N}\) and consider boxes \(A_{1},\ldots ,A_{n}\), defined by \(A_{i}=x_{i}+A,i=1,\ldots ,n,\) where \(A\subset \mathbb{Z }^{d}\) is a box centred at \(0\), of side-length chosen such that the \(A_{1},\ldots ,A_{n}\) are “well separated” and at most “mesoscopic”. Then for a level \(u>0\) and a \(\delta >0\) (which must not be too small)

$$\begin{aligned}&\text{one can construct a coupling of the random walk } Y_{\cdot } \text{ with law } P \text{ and independent}\nonumber \\&\text{random interlacements } (\mathcal{I }_{1}^{v})_{v\ge 0},\ldots ,(\mathcal{I }_{n}^{v})_{v\ge 0}, \text{ such that ``with high probability''}\\&\mathcal{I }_{i}^{u(1-\delta )}\cap A\subset (Y(0,uN^{d})-x_{i})\cap A\subset \mathcal{I }_{i}^{u(1+\delta )}\cap A\quad \text{for } i=1,\ldots ,n. \end{aligned}$$
(1.3)

The result is stated rigorously in Theorem 3.2. The case \(n=1\) (i.e. coupling for one box) and \(u,\delta \) fixed (as \(N\rightarrow \infty \)) was obtained in [25] (and earlier a coupling of random walk in the so-called discrete cylinder and random interlacements was constructed in [21, 22]). In this paper we strengthen the result from [25] by coupling random interlacements with random walk in many separated boxes (the most important improvement), and by allowing \(\delta \) to go to zero and \(u\) to go to zero or infinity. A similar improvement of the discrete cylinder coupling from [21, 22] can be found in [6].

A coupling of random interlacements and random walk is a very powerful tool to study the trace of random walk. In this article we use the above coupling to study certain properties of the trace relevant to the cover time result (1.2). Similar couplings have also been used to study the percolative properties of the complement of the trace of random walk in terms of random interlacements; in this case the relevant properties of the trace studied with the coupling are very different (see [25] for the torus, and [21, 22] for the discrete cylinder). We expect our coupling to find uses beyond the current cover time application. For more on this see Remark 4.6 (1).

We also prove two interesting corollaries of (1.2) (actually using the stronger subset version mentioned above) and (1.3). To formulate the first corollary we define the “point process of vertices covered last”, a random measure on the Euclidean torus \((\mathbb{R }/\mathbb{Z })^{d}\), by

$$\begin{aligned} \mathcal{N }_{N}^{z}=\sum \limits _{x\in \mathbb{T }_{N}}\delta _{x/N}1_{\{H_{x}>g(0)N^{d}\{\log N^{d}+z\}\}},\quad N\ge 1,z\in \mathbb{R }. \end{aligned}$$
(1.4)

Note that \(\mathcal{N }_{N}^{z}\) counts the vertices of \(\mathbb{T }_{N}\) which have not yet been hit at time \(g(0)N^{d}\{\log N^{d}+z\}\) (this is the “correct time-scale”; from (1.2) one sees that the probability that \(\mathcal{N }_{N}^{z}\) is the zero measure is bounded away from zero and one). The result is then that (for \(d\ge 3\))

$$\begin{aligned} \begin{aligned}&\mathcal{N }_{N}^{z}\mathop {\longrightarrow }\limits ^{\mathrm{law}}\mathcal{N }^{z} \text{ as } N\rightarrow \infty , \text{ where } \mathcal{N }^z \text{ is a Poisson point process}\\&\text{on } (\mathbb{R }/\mathbb{Z })^{d} \text{ of intensity } e^{-z}\lambda , \text{ and } \lambda \text{ denotes Lebesgue measure.} \end{aligned} \end{aligned}$$
(1.5)

This is proven in Corollary 3.4. Intuitively speaking it means that the positions of the last vertices of the torus to be covered are approximately independent and uniform.

As a consequence of (1.5) we obtain Corollary 3.5 which says, intuitively speaking, that

$$\begin{aligned} \text{the last few vertices of } \mathbb{T }_{N} \text{ to be covered are far apart, at distance of order } N. \end{aligned}$$
(1.6)

Note that a priori it is not clear if the correct qualitative picture is that the random walk completes covering of the torus by “taking out several neighbouring vertices at once” or if it “takes out the last few vertices one by one”. Roughly speaking (1.6) implies that the latter is the case.

We now discuss the proofs of the above results. The result (1.5) is proven using Kallenberg’s theorem, which allows one to verify convergence in law of certain point processes by checking only convergence of the intensity measure and convergence of the probability of “charging a set”. The latter two quantities will be shown to converge using (1.2) (or rather the subset version of this statement) and the coupling (1.3). The result (1.6) follows from (1.5), using a calculation involving Palm measures and the fact that the points of the limit Poisson point process \(\mathcal{N }^{z}\) are “macroscopically separated”.

We now turn to the proof of (1.2). The method builds on the ideas of the works [5, 6], which contain the corresponding results for the so-called cover levels of random interlacements ([5]) and the cover times of finite sets in the discrete cylinder ([6]). It is a well known “general principle” that entrance times of small sets in graphs often behave like exponential random variables. In the case of the torus the rescaled entrance time \(\frac{1}{N^d}H_x\) of a vertex \(x\in \mathbb{T }_{N}\) is approximately exponential with parameter \(\frac{1}{g(0)}\) (this can, for instance, be proven with a good quantitative bound using the coupling (1.3), see Lemma 3.3). Thus, in view of (1.1), we see that the rescaled cover time \(\frac{1}{N^d}C_N\) is the maximum of \(N^{d}\) identically distributed, approximately exponential random variables with parameter \(\frac{1}{g(0)}\). If the \(H_{x},x\in \mathbb{T }_{N}\), were also independent then standard extreme value theory (or a simple direct calculation of the distribution function of the maximum of i.i.d. exponentials) would give (1.2). But clearly the \(H_{x},x\in \mathbb{T }_{N}\), are not even approximately independent (for example, if \(x,y\in \mathbb{T }_{N}\) are neighbouring vertices then \(H_{x}\) and \(H_{y}\) are highly dependent). There are theorems that give distributional results for the maxima of random fields with sufficiently weak dependence (see [13, 16]), but these do not apply to the random field \((H_{x})_{x\in \mathbb{T }_{N}}\) because the dependence is too strong (using (1.3) one can show that the correlation between \(1_{\{H_{x}>uN^{d}\}}\) and \(1_{\{H_{y}>uN^{d}\}}\) decays as \(\frac{c(u)}{(d(x,y))^{d-2}}\), see (1.68) of [23]).
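For orientation, the elementary calculation in the truly independent case (a heuristic only, since the \(H_{x}\) are not independent) reads as follows: if \(E_{x}\), \(x\in \mathbb{T }_{N}\), are i.i.d. exponential random variables with parameter \(\frac{1}{g(0)}\), then

$$\begin{aligned} P\Big [\max _{x\in \mathbb{T }_{N}}E_{x}\le g(0)\{\log N^{d}+z\}\Big ]=\Big (1-\frac{e^{-z}}{N^{d}}\Big )^{N^{d}}\longrightarrow \exp (-e^{-z})\quad \text{as } N\rightarrow \infty , \end{aligned}$$

which is exactly the Gumbel limit appearing in (1.2).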

However for sets \(F\subset \mathbb{T }_{N}\) that consist of isolated vertices \(x_{1},\ldots ,x_{n}\) that are “well-separated”, it turns out that using (1.3) one can show that \(H_{x_{1}},\ldots ,H_{x_{n}}\) are approximately independent (see Lemma 4.3). By comparing to the independent case, we will therefore be able to show that for such \(F\)

$$\begin{aligned} \max _{x\in F}H_{x}\,\,\text{ has} \text{ law} \text{ close} \text{ to}\,\, g(0)N^{d}\{\log |F|+G\}, \end{aligned}$$
(1.7)

where \(G\) is a standard Gumbel random variable. In particular it will turn out that for such \(F\), roughly speaking, the distribution of \(\max _{x\in F}H_{x}\) essentially depends only on the cardinality of \(F\).

To enable the proof of (1.2) we will introduce the set of “\((1-\rho )\)-late points” \(F_{\rho }\), defined as the vertices of \(\mathbb{T }_{N}\) that have not been hit at time \(t(\rho )=(1-\rho )g(0)N^{d}\log N^{d}\), for a fixed but small \(0<\rho <1\). Note that this is a \((1-\rho )\) fraction of the “typical” cover time \(g(0)N^{d}\log N^{d}\). By the Markov property \(Y_{t(\rho )+\cdot }\) is a random walk, and using a mixing argument one can show that for our purposes it is basically independent of the random walk \((Y_{t})_{0\le t\le t(\rho )}\), so that the law of \(C_{N}\) is approximately the law of \(t(\rho )+\max _{x\in F^{\prime }}H_{x}\), where \(F^{\prime }\) is a random set which is independent of the random walk \(Y_{\cdot }\), but has the law of \(F_{\rho }\).

Furthermore we will be able to show, using (1.3), that “with high probability” \(F^{\prime }\) (and \(F_{\rho }\)) consists of “well-separated” vertices, and that the cardinality of \(F^{\prime }\) concentrates around its expected value, which is close to \(|\mathbb{T }_{N}|^{\rho }=N^{d\rho }\). Thus, as long as \(F^{\prime }\) is “typical”, in the sense that it is well-separated and has cardinality close to \(N^{d\rho }\), we will “know” that \(\max _{x\in F^{\prime }}H_{x}\) has law close to \(g(0)N^{d}\{\rho \log N^{d}+G\}\) (see (1.7)). Adding the deterministic time \(t(\rho )\), the \(\rho \) will cancel and we will get that \(C_{N}\) has law close to \(g(0)N^{d}\{\log N^{d}+G\}\), which is essentially speaking the claim in (1.2).
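Explicitly, with \(F=\mathbb{T }_{N}\) and \(t(\rho )=(1-\rho )g(0)N^{d}\log N^{d}\), the cancellation reads

$$\begin{aligned} t(\rho )+g(0)N^{d}\{\rho \log N^{d}+G\}=g(0)N^{d}\{(1-\rho )\log N^{d}+\rho \log N^{d}+G\}=g(0)N^{d}\{\log N^{d}+G\}. \end{aligned}$$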

We turn finally to the proof of the coupling (1.3). It roughly speaking adapts the “poissonization” method used for the case \(n=1\) in [25] to the case \(n>1\), and combines it with a decoupling technique from [24].

The first step is to consider the excursions of \(Y_{\cdot }\), that is the pieces of path \(Y_{(R_{k}+\cdot )\wedge U_{k}}\), \(k=1,2,\ldots \), where \(R_{k}\) and \(U_{k}\) are defined recursively: \(U_{0}=0\), \(R_{k}\) is the first time \(Y_{\cdot }\) enters \(A_{1}\cup \cdots \cup A_{n}\) after time \(U_{k-1}\), and \(U_{k}\) is the first time after \(R_{k}\) that the random walk has spent “a long time far away from \(A_{1}\cup \cdots \cup A_{n}\)” (see (5.4)). By proving, using a mixing argument, that the distribution of \(Y_{U_{k}}\) is close to a certain probability distribution on \(\mathbb{T }_{N}\) known as the quasistationary distribution, regardless of the value of \(Y_{R_{k}}\), we will be able to couple the excursions \(Y_{(R_{k}+\cdot )\wedge U_{k}}\), \(k=1,2,\ldots \), with independent excursions \(\tilde{Y}^{1},\tilde{Y}^{2},\ldots \) that have the law of \(Y_{\cdot \wedge U_{1}}\) when \(Y_{\cdot }\) starts from the quasistationary distribution, such that “with high probability” the traces of \(Y_{(R_{k}+\cdot )\wedge U_{k}}\) and \(\tilde{Y}^{k}\) in \(A_{1}\cup \cdots \cup A_{n}\) coincide.

We will then collect a Poisson number of such independent excursions in a point process \(\mu \) (in fact two different point processes, but for the purpose of this discussion let us ignore this) which will be such that the trace of \(\mu \) in \(A_{1}\cup \cdots \cup A_{n}\) with high probability coincides with the trace of the random walk \(Y_{\cdot }\) run up to time \(uN^{d}\) in that set. Because of the way we construct the point process \(\mu \), it will be a Poisson point process on the space of paths in \(\mathbb{T }_{N}\) of a certain intensity related to the law of \(\tilde{Y}^{1}\). This will complete the step that we refer to as “poissonization”.

We will see that an “excursion” in the Poisson point process \(\mu \) may visit several of the boxes \(A_1,\ldots ,A_n\), and, roughly speaking, “feels the geometry of the torus”, since it may wind its way all around it before the time \(U\). To deal with this we in essence split the excursions of \(\mu \) into the pieces of excursion between successive returns to the set \(A_{1}\cup \cdots \cup A_{n}\) and successive departures from \(B_{1}\cup \cdots \cup B_{n}\), where the \(B_{i}\supset A_{i}\) are larger boxes (still disjoint and at most mesoscopic), and use a decoupling technique to remove the dependence between pieces from the same excursion.

We then collect these, now independent, pieces of excursion into a point process which we will be able to couple with a Poisson point process \(\nu \) (in fact two independent Poisson point processes) on the space of paths in the torus, such that with high probability the trace of \(\nu \) in \(A_{1}\cup \cdots \cup A_{n}\) coincides with the trace of \(\mu \) (and therefore also with the trace of the random walk \(Y_{\cdot }\) run up to time \(uN^{d}\)) in that set. The “excursions” of \(\nu \) start in a box \(A_{i}\) and end upon leaving \(B_{i}\supset A_{i}\) (which are disjoint), so they visit only one box and do not “feel the geometry of the torus”, since \(B_{i}\) can be identified with a subset of \(\mathbb{Z }^{d}\).

Also, since \(\nu \) is a Poisson point process, we will see that we can split it into \(n\) independent Poisson point processes \(\nu _{1},\ldots ,\nu _{n}\), one for each box \(A_{i}\), such that, roughly speaking, the trace of \(\nu _{i}\) in \(A_{i}\) coincides with high probability with that of \(Y_{\cdot }\) (run up to time \(uN^{d}\)) in \(A_{i}\). Thus we will have “decoupled” the boxes.

Now, as mentioned above, random interlacements are constructed from a “Poisson cloud” on a certain space of paths, and we will see that when restricted to a box \(A_{i}\), a random interlacement has the law of the trace of a Poisson number of random walks. This will also basically be the law of the trace of the \(\nu _{i}\), with the difference that the paths in \(\nu _{i}\) do not return to \(A_{i}\) after leaving \(B_{i}\), while for random interlacements a small but positive proportion of the paths do return. By taking a small number of the paths from the \(\nu _{i}\), and “gluing them together” to form paths that do return to \(A_{i}\) after leaving \(B_{i}\), we will be able to construct from the \(\nu _{i}\) the random interlacements \((\mathcal{I }_{i}^{v})_{v\ge 0}\) in (1.3).

We now describe how this article is organized. In Sect. 2 we introduce some notation and preliminary lemmas. In Sect. 3 we give the formal statements of the main theorems (1.2) and (1.3), and of corollaries (1.5) and (1.6). We also derive the corollaries from the main theorems. The proof of the cover time result (1.2), from (1.3), is contained in Sect. 4. The subsequent sections deal with the proof of (1.3). In Sect. 5 we introduce three further couplings and use them to construct the coupling (1.3). The first of the three, a coupling of the excursions \(Y_{(R_{k}+\cdot )\wedge U_{k}}\) with the Poisson point process \(\mu \), is then constructed in Sects. 6 and 7. In Sect. 6 we define the quasistationary distribution and prove that the law of \(Y_{U_{k}}\) is close to it. In Sect. 7 we use this fact to construct the coupling of the excursions \(Y_{(R_{k}+\cdot )\wedge U_{k}}\) with the Poisson point process \(\mu \). The second of the three couplings, a coupling of \(\mu \) and the i.i.d. Poisson point processes \(\nu _{1},\ldots ,\nu _{n}\), is constructed in Sect. 8. The third coupling, a coupling of a Poisson point process \(\nu \) with the law of the \(\nu _{i}\) and random interlacements, is constructed in Sect. 9. The appendix contains the proof of a certain lemma (Lemma 6.2) which is stated in Sect. 6.

We finish this section with a remark on constants. Unnamed constants are represented by \(c\), \(c^{\prime }\), etc. Note that these may represent different constants in different formulas or even within the same formula. Named constants are denoted by \(c_{4},c_{5},\ldots \) and have fixed values. All constants are understood to be positive and, unless stated otherwise, depend only on the dimension \(d\). Dependence on e.g. a parameter \(\alpha \) is denoted by \(c(\alpha )\) or \(c_{4}(\alpha )\).

2 Notation and preliminary lemmas

In this section we introduce basic notation and a few preliminary lemmas.

We write \([x]\) for the integer part of \(x\in [0,\infty )\). The cardinality of a set \(U\) is denoted by \(|U|\).

We denote the \(d-\)dimensional discrete torus of side length \(N\ge 3\) by \(\mathbb{T }_{N}=(\mathbb{Z }/N\mathbb{Z })^{d}\) for \(d\ge 1\). If \(x\in \mathbb{Z }^{d}\) we write \(|x|\) for the Euclidean norm of \(x\) and \(|x|_{\infty }\) for the \(l_{\infty }\) norm of \(x\). We take \(d(\cdot ,\cdot )\) to mean the distance on \(\mathbb{T }_{N}\) induced by \(|\cdot |\) and \(d_{\infty }(\cdot ,\cdot )\) to mean the distance induced by \(|\cdot |_{\infty }\). The closed \(l_{\infty }-\)ball of radius \(r\ge 0\) with centre \(x\) in \(\mathbb{Z }^{d}\) or \(\mathbb{T }_{N}\) is denoted by \(B(x,r)\).

We define the inner and outer boundaries of a set \(U\subset \mathbb{Z }^{d}\) or \(U\subset \mathbb{T }_{N}\) by

$$\begin{aligned} \partial _{i}U=\{x\in U:d(x,U^{c})=1\},\quad \partial _{e}U=\{x\in U^{c}:d(x,U)=1\}. \end{aligned}$$
(2.1)

For a set \(U\) we write \(\Gamma (U)\) for the space of all cadlag piecewise constant functions from \([0,\infty )\) to \(U\), with at most a finite number of jumps in any compact interval. When only finitely many jumps occur for a function \(w\in \Gamma (U)\) we set \(w(\infty )=\lim _{t\rightarrow \infty }w(t)\). Usually we will work with \(\Gamma (\mathbb{Z }^{d})\) or \(\Gamma (\mathbb{T }_{N})\). We write \(Y_{t}\) for the canonical process on \(\Gamma (\mathbb{Z }^{d})\) or \(\Gamma (\mathbb{T }_{N})\). When \(w\in \Gamma (\mathbb{Z }^{d})\) or \(w\in \Gamma (\mathbb{T }_{N})\) we take \(w(a,b)\) to mean the range \(\{w(t):t\in [a,b]\cap [0,\infty )\}\subset \mathbb{Z }^{d}\) or \(\mathbb{T }_{N}\) (with this definition the range is empty if \(b<0\) or \(a>b\)). We let \(\theta _{t}\) denote the canonical shift on \(\Gamma (\mathbb{Z }^{d})\) and \(\Gamma (\mathbb{T }_{N})\). The jump times of \(Y_{t}\) are defined by

$$\begin{aligned} \tau _{0}=0,\tau _{1}=\inf \{t\ge 0:Y_{t}\ne Y_{0}\}\quad \text{ and}\quad \tau _{n}=\tau _{1}\circ \theta _{\tau _{n-1}}+\tau _{n-1},n\ge 2. \end{aligned}$$
(2.2)

Due to the usual interpretation of the infimum of the empty set, \(\tau _{m}=\infty \) for \(m>n\) when only \(n\) jumps occur. For a set \(U\subset \mathbb{Z }^{d}\) or \(\mathbb{T }_{N}\) we define the entrance time, return time and exit time by

$$\begin{aligned} H_{U}=\inf \{t\ge 0:Y_{t}\in U\},\tilde{H}_{U}=\inf \{t\ge \tau _{1}:Y_{t}\in U\},T_{U}=\inf \{t\ge 0:Y_{t}\notin U\}.\nonumber \\ \end{aligned}$$
(2.3)

We let \(P_{x}^{\mathbb{Z }^{d}}\) denote the law on \(\Gamma (\mathbb{Z }^{d})\) of continuous time simple random walk in \(\mathbb{Z }^{d}\) started at \(x\), and let \(P_{x}\) denote the law on \(\Gamma (\mathbb{T }_{N})\) of continuous time simple random walk on \(\mathbb{T }_{N}\) (so that \(\tau _{1}\) is an exponential random variable with parameter \(1\)). If \(\nu \) is a measure on \(\mathbb{Z }^{d}\) we let \(P_{\nu }^{\mathbb{Z }^{d}}=\sum _{x\in \mathbb{Z }^{d}} \nu (x)P_{x}^{\mathbb{Z }^{d}}\). We define \(P_{\nu }\) analogously. Furthermore \(\pi \) denotes the uniform distribution on \(\mathbb{T }_{N}\), and \(P\) denotes \(P_{\pi }\), i.e. the law of simple random walk starting from the uniform distribution.

Essentially because the mixing time of the torus is of order \(N^{2}\) (see Proposition 4.7, p. 50, and Theorem 5.5, p. 66 in [14]) we have that

$$\begin{aligned} \begin{array}{l} \text{for } N\ge 3,\ \lambda \ge 1,\ x\in \mathbb{T }_{N}, \text{ a coupling } q(w,v),\ w,v\in \mathbb{T }_{N}, \text{ exists for which the first}\\ \text{marginal is } P_{x}[Y_{\lambda N^{2}}\in dw], \text{ the second is uniform, and } \sum _{w\ne v}q(w,v)\le ce^{-c\lambda }. \end{array} \end{aligned}$$
(2.4)

The Green function is defined by

$$\begin{aligned} g(x,y)=\int _{0}^{\infty }P_{x}^{\mathbb{Z }^{d}}[Y_{t}=y]dt,\quad x,y\in \mathbb{Z }^{d},\quad \text{and}\quad g(\cdot )=g(0,\cdot ). \end{aligned}$$

We have the following classical bounds on \(g(x,y)\) (see Theorem 1.5.4, p. 31 in [12])

$$\begin{aligned} c|x-y|^{2-d}\le g(x,y)\le c^{\prime }|x-y|^{2-d}\quad \text{ for}\,\,x,y\in \mathbb{Z }^{d},x\ne y,d\ge 3.\nonumber \\ \end{aligned}$$
(2.5)

For \(K\subset \mathbb{Z }^{d}\) we define the equilibrium measure \(e_{K}\) and the capacity \(\text{ cap}(K)\) by

$$\begin{aligned} e_{K}(x)=P_{x}^{\mathbb{Z }^{d}}[\tilde{H}_{K}=\infty ]1_{K}(x) \quad \text{and}\quad \mathrm{cap}(K)=\sum \limits _{x\in \partial _{i}K}e_{K}(x). \end{aligned}$$
(2.6)

It is well-known (see (2.16), Proposition 2.2.1 (a), p. 52-53 in [12]) that

$$\begin{aligned} cr^{d-2}\le \text{ cap}\,\,(B(0,r))\le c^{\prime }r^{d-2}\quad \text{ for}\,\,r\ge 1,d\ge 3. \end{aligned}$$
(2.7)

The normalised equilibrium distribution \(\frac{e_{K}(\cdot )}{\mathrm{cap}(K)}\) can be thought of as the hitting distribution on \(K\) when “starting from infinity”, and in this paper we will use that for all \(K\subset B(0,r)\subset \mathbb{Z }^{d}\), \(r\ge 1\), we have (see Theorem 2.1.3, Exercise 2.1.4 and (2.13) in [12])

$$\begin{aligned} c_{1} \frac{e_{K}(y)}{\mathrm{cap (K)}} \le P_{x} [Y_{H_{K}}=y|H_{K}< \infty ] \le c_2 \frac{e_{K}(y)}{\mathrm{cap(K)}}\quad \text{ for} \text{ all} \, y\in K, x\notin B(0,c_3 r).\nonumber \\ \end{aligned}$$
(2.8)
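To make (2.6)–(2.8) concrete, here is a small Monte Carlo sketch (ours, for illustration only) that estimates \(e_{K}\) and \(\mathrm{cap}(K)\) for \(K=B(0,r)\subset \mathbb{Z }^{3}\). Since the event \(\{\tilde{H}_{K}=\infty \}\) cannot be simulated directly, escape “to infinity” is approximated by reaching \(l^{\infty }\)-distance \(R\) before returning to \(K\); by (2.10) this truncation introduces a relative error of order \((r/R)^{d-2}\). All numerical parameters are our own choices.

```python
# Monte Carlo sketch for (2.6)-(2.7): estimate e_K(x) = P_x[H~_K = infinity]
# and cap(K) for K = B(0,r) in Z^3, truncating "infinity" at radius R.
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
d, r, R, trials = 3, 2, 50, 100

def escapes(x0):
    """One walk from x0 in K: True if it reaches l^inf-distance R before
    being in K again after its first jump (proxy for {H~_K = infinity})."""
    x = np.array(x0)
    while True:
        x[rng.integers(d)] += rng.choice((-1, 1))  # one nearest-neighbour jump
        m = np.max(np.abs(x))
        if m > R:
            return True                            # treated as escape to infinity
        if m <= r:
            return False                           # returned to K

# e_K is supported on the inner boundary of K: from an interior vertex the
# first jump stays in K, so H~_K < infinity almost surely.
boundary = [p for p in product(range(-r, r + 1), repeat=d)
            if max(abs(c) for c in p) == r]
e_K = {p: np.mean([escapes(p) for _ in range(trials)]) for p in boundary}
print("cap(B(0,%d)) approx %.2f" % (r, sum(e_K.values())))  # order r^{d-2}, cf. (2.7)
```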

If \(K\subset U\subset \mathbb{Z }^{d}\), with \(K\) finite, we define the equilibrium measure and capacity of \(K\) relative to \(U\) by

$$\begin{aligned} e_{K,U}(x)=P_{x}^{\mathbb{Z }^{d}}[\tilde{H}_{K}>T_{U}]1_{K}(x)\quad \text{ and} \,\,\text{ cap}_{U}(K)=\sum \limits _{x\in \partial _{i}K}e_{K,U}(x). \end{aligned}$$
(2.9)

We will need the following bounds on the probability of hitting sets in \(\mathbb{Z }^{d}\) and \(\mathbb{T }_{N}\).

Lemma 2.1

\((d\ge 3,N\ge 3)\)

$$\begin{aligned}&P_{x}^{\mathbb{Z }^{d}}[H_{B(0,r_{1})}<\infty ]\le c(r_{1}/r_{2})^{d-2}\quad \text{ for} \,1\le r_{1}\le r_{2},x\notin B(0,r_{2}).\qquad \end{aligned}$$
(2.10)
$$\begin{aligned}&\underset{x\notin B(0,r_{2})}{\sup }\!P_{x}[H_{B(0,r_{1})}\!<\! N^{2+\lambda }]\le c(\lambda )(r_{1}/r_{2})^{d-2}\,\quad \text{ for} \,1\le r_{1}\le r_{2}\le N^{1-3\lambda },\lambda >0.\nonumber \\ \end{aligned}$$
(2.11)

Proof

(2.10) follows from Proposition 2.2.2, p. 53 in [12] and (2.7). To prove (2.11) we let \(K=\cup _{y\in \mathbb{Z }^{d},|y|_{\infty }\le N^{\lambda }}B(yN,r_{1})\subset \mathbb{Z }^{d}\) and note that by “unfolding the torus” we have

$$\begin{aligned} \sup _{z\notin B(0,r_{2})}P_{z}[H_{B(0,r_{1})}<N^{2+\lambda }]&\le \sup _{z\in B(0,\frac{N}{2})\backslash B(0,r_{2})}P_{z}^{\mathbb{Z }^{d}}[H_{K}<\infty ]\nonumber \\&+\, P_{0}^{\mathbb{Z }^{d}}[T_{B(0,\frac{N^{1+\lambda }}{2})}\le N^{2+\lambda }], \end{aligned}$$
(2.12)

provided \(N\ge c(\lambda )\). For any \(z\in B(0,\frac{N}{2})\backslash B(0,r_{2})\)

$$\begin{aligned} P_{z}^{\mathbb{Z }^{d}}[H_{K}<\infty ]&\le \sum \limits _{|y|_{\infty }\le 1}P_{z}^{\mathbb{Z }^{d}}[H_{B(yN,r_{1})}<\infty ] + \sum \limits _{1<|y|_{\infty }\le N^{\lambda }}P_{z}^{\mathbb{Z }^{d}}[H_{B(yN,r_{1})}<\infty ]\\&\mathop {\le }\limits ^{(2.10)} c\left(r_{1}/r_{2}\right)^{d-2}+cN^{\lambda d}(r_{1}/N)^{d-2}\le c(r_{1}/r_{2})^{d-2}, \end{aligned}$$

since \(N^{\lambda d}/N^{d-2}\overset{d\ge 3}{\le }1/N^{(1-3\lambda )(d-2)}\overset{r_{2}\le N^{1-3\lambda }}{\le }1/r_{2}^{d-2}\). Furthermore

$$\begin{aligned} P_{0}^{\mathbb{Z }^{d}}\Big [T_{B(0,\frac{N^{1+\lambda }}{2})}\le N^{2+\lambda }\Big ]&\le P_{0}^{\mathbb{Z }^{d}}\Big [|Y_{\tau _{n}}|>\frac{N^{1+\lambda }}{2}\quad \text{ for} \text{ an} \,\,n\le 2N^{2+\lambda }\Big ]\nonumber \\&+P_{0}^{\mathbb{Z }^{d}}\Big [\tau _{[2N^{2+\lambda }]}\le N^{2+\lambda }\Big ]. \end{aligned}$$

But by applying Azuma’s inequality (Theorem 7.2.1, p. 99 in [4]) to the martingale \(Y_{\tau _{n}}\) we get that the first probability on the right-hand side is bounded above by \(cN^{c}e^{-cN^{\lambda }}\), and by a standard large deviations bound the second probability is bounded above by \(e^{-cN^{2+\lambda }}\), so since \(cN^{c}e^{-cN^{\lambda }}\le (r_{1}/r_{2})^{d-2}\) for \(N\ge c(\lambda )\) we get (2.11). \(\square \)
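In more detail, the Azuma step can be unpacked as follows (our elaboration; constants are not optimised). Each coordinate of \(Y_{\tau _{n}}\) is a martingale with increments bounded by \(1\), so if \(|Y_{\tau _{n}}|>\frac{N^{1+\lambda }}{2}\) then some coordinate exceeds \(\frac{N^{1+\lambda }}{2\sqrt{d}}\) in absolute value, and for fixed \(n\le 2N^{2+\lambda }\)

$$\begin{aligned} P_{0}^{\mathbb{Z }^{d}}\Big [|Y_{\tau _{n}}|>\frac{N^{1+\lambda }}{2}\Big ]\le 2d\exp \Big (-\frac{(N^{1+\lambda }/(2\sqrt{d}))^{2}}{2n}\Big )\le 2d\exp \Big (-\frac{N^{\lambda }}{16d}\Big ); \end{aligned}$$

a union bound over the at most \(2N^{2+\lambda }+1\) relevant values of \(n\) then gives the stated bound \(cN^{c}e^{-cN^{\lambda }}\).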

Using Lemma 2.1 we get the following bounds for equilibrium measures.

Lemma 2.2

\((d\ge 3,N\ge 3)\)

$$\begin{aligned}&e_{K}(x)\ge cr^{-1},\quad \text{ for}\,\,x\in \partial _{i}K,\,\,\text{ where}\,\,K=B(0,r),r\ge 1.\end{aligned}$$
(2.13)
$$\begin{aligned}&e_{K}(x)\le e_{K,U}(x),\quad \text{ for} \text{ all}\,\,x\in K\subset U\subset B\left(0,\frac{N}{4}\right)\subset \mathbb{T }_{N}. \end{aligned}$$
(2.14)

Furthermore if \(K\subset B(0,r)\subset U=B(0,r^{1+\lambda })\subset B(0,\frac{N}{4})\subset \mathbb{T }_{N},r\ge 1,\lambda >0,\) we have

$$\begin{aligned} e_{K,U}(x)\le (1+c(\lambda )r^{-\lambda })e_{K}(x),\quad \text{ for} \text{ all}\,\,x\in K. \end{aligned}$$
(2.15)

Proof

For a large enough constant \(c^{\prime }\) we have \(\inf _{x\in \partial _{i}B(0,c^{\prime }r)}P_{x}^{\mathbb{Z }^{d}}[H_{B(0,r)}=\infty ]\ge \frac{1}{2}\) (by (2.10)), so (2.13) follows from \(\inf _{x\in \partial _{i}B(0,r)}P_{x}^{\mathbb{Z }^{d}}[\tilde{H}_{K}>T_{B(0,c^{\prime }r)}]\ge cr^{-1}\) (which is a result of a one-dimensional simple random walk estimate) and the strong Markov property. The second inequality (2.14) is obvious from (2.6) and (2.9). Finally for (2.15) note that \(e_{K,U}(x)=e_{K}(x)+P_{x}^{\mathbb{Z }^{d}}[T_{U}<\tilde{H}_{K}<\infty ]\) (by (2.6) and (2.9)), and \(P_{x}^{\mathbb{Z }^{d}}[T_{U}<\tilde{H}_{K}<\infty ]\le P_{x}^{\mathbb{Z }^{d}}[T_{U}<\tilde{H}_{K}]\sup _{y\in \partial _{e}U}P_{y}^{\mathbb{Z }^{d}}[H_{K}<\infty ]\le cr^{-\lambda }P_{x}^{\mathbb{Z }^{d}}[T_{U}<\tilde{H}_{K}]\) (by (2.10)). Now \(\inf _{y\in \partial _{e}U}P_{y}^{\mathbb{Z }^{d}}[H_{K}=\infty ]\) is always positive and at least \(\frac{1}{2}\) when \(r\ge c(\lambda )\) (by (2.10)), so in fact \(P_{x}^{\mathbb{Z }^{d}}[T_{U}<\tilde{H}_{K}<\infty ]\le c(\lambda )r^{-\lambda }P_{x}^{\mathbb{Z }^{d}}[T_{U}<\tilde{H}_{K}]\inf _{y\in \partial _{e}U}P_{y}^{\mathbb{Z }^{d}}[H_{K}=\infty ]\le c(\lambda )r^{-\lambda }e_{K}(x)\), so (2.15) follows. \(\square \)

We now introduce some notation related to Poisson point processes. Let \(\Gamma =\Gamma (\mathbb{T }_{N})\) or \(\Gamma =\Gamma (\mathbb{Z }^{d})\). When \(\mu \) is a Poisson point process on \(\Gamma ^{i},i\ge 1,\) we denote the trace of \(\mu \) by

$$\begin{aligned} \mathcal{I }(\mu )=\bigcup \limits _{(w_{1},\ldots ,w_{i})\in \mathrm{Supp}(\mu )}\; \bigcup \limits _{j=1}^{i}\;w_{j}(0,\infty )\subset \mathbb{Z }^{d}\quad \text{or} \quad \mathbb{T }_{N}. \end{aligned}$$
(2.16)

We will mostly consider Poisson point processes \(\mu \) on \(\Gamma \) where this simplifies to \(\mathcal{I }(\mu )=\) \(\bigcup _{w\in \mathrm{Supp}(\mu )}w(0,\infty )\), but in Sects. 8 and 9 we will also consider Poisson point processes \(\mu \) on \(\Gamma ^{i}\!,i\ge 2.\) If \(\mu \) is a Poisson point process of labelled paths (that is a Poisson point process on \(\Gamma \times [0,\infty )\), where \(\Gamma \) is as above) we denote the trace up to label \(u\) by

$$\begin{aligned} \mathcal{I }^{u}(\mu )=\mathcal{I }(\mu _{u})\subset \mathbb{Z }^{d}\,\, \text{ or}\,\,\mathbb{T }_{N},\,\,\text{ where}\,\,\mu _{u}(dw)=\mu (dw\times [0,u]). \end{aligned}$$
(2.17)

Next let us recall some facts about random interlacements. They are, roughly speaking, defined as a Poisson point process on a certain space of labelled doubly-infinite trajectories modulo time-shift. The random interlacement at level \(u\), or \(\mathcal{I }^{u}\subset \mathbb{Z }^{d}\), is the trace of trajectories with labels up to \(u\ge 0\). The family of random subsets \((\mathcal{I }^{u})_{u\ge 0}\) is called a random interlacement. The rigorous definitions and construction that make this informal description precise can be found in Section 1 of [23] or Section 1 of [20]. In this article we will only use the facts (2.18)–(2.23) which now follow.

$$\begin{aligned} \begin{aligned}&\text{There exists a space } (\Omega _{0},\mathcal{A }_{0},Q_{0}) \text{ and a family } (\mathcal{I }^{u})_{u\ge 0} \text{ of random subsets}\\&\text{of } \mathbb{Z }^{d} \text{ such that } (\mathcal{I }^{u}\cap K)_{u\ge 0}\overset{\mathrm{law}}{=}(\mathcal{I }^{u}(\mu _{K})\cap K)_{u\ge 0} \text{ for all finite } K\subset \mathbb{Z }^{d}, \end{aligned} \end{aligned}$$
(2.18)
$$\begin{aligned}&\text{where } \mu _{K} \text{ is a Poisson point process on } \Gamma (\mathbb{Z }^{d})\times [0,\infty ) \text{ of intensity } P_{e_{K}}^{\mathbb{Z }^{d}}\otimes \lambda , \end{aligned}$$
(2.19)

and \(\lambda \) denotes Lebesgue measure (see (0.5), (0.6), (0.7) in [23], also cf. (1.67) in [23]).

(From (2.18) and (2.19) we see that to “simulate” \(\mathcal{I }^{u}\cap K\) for a finite \(K\subset \mathbb{Z }^{d}\) one simply samples an integer \(n\ge 0\) according to the Poisson distribution with parameter \(u\text{ cap}(K)\), picks \(n\) random starting points \(Z_{1},\ldots ,Z_{n}\) according to the normalized equilibrium distribution \(\frac{e_{K}(\cdot )}{\text{ cap}(K)}\) and runs from each starting point \(Z_{i}\) an independent random walk, recording the trace the walks leave in the set \(K\). The “full” random interlacement \(\mathcal{I }^u\) can be seen, intuitively speaking, as a “globally consistent” version of the traces of the Poisson point processes \(\mu _K\).)
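The recipe just described is easy to implement. The following Python sketch (ours, for illustration only, with illustrative parameters) samples \(\mathcal{I }^{u}\cap K\) for \(K=B(0,r)\subset \mathbb{Z }^{3}\), estimating \(e_{K}\) by the same truncated Monte Carlo as in the sketch after (2.8); “infinity” is again truncated at \(l^{\infty }\)-distance \(R\), an error of order \((r/R)^{d-2}\) by (2.10).

```python
# Sampling I^u cap K following (2.18)-(2.19) for K = B(0,r) in Z^3 (sketch):
# draw n ~ Poisson(u cap(K)), draw starting points from e_K / cap(K), run
# independent walks and record their trace in K.
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
d, r, R, u = 3, 2, 50, 1.0

def escapes(x0):
    """Proxy for the event {H~_K = infinity} starting from x0 in K."""
    x = np.array(x0)
    while True:
        x[rng.integers(d)] += rng.choice((-1, 1))
        m = np.max(np.abs(x))
        if m > R:
            return True
        if m <= r:
            return False

def trace_in_K(x0):
    """Set of vertices of K visited by a walk from x0, stopped at radius R."""
    x, tr = np.array(x0), set()
    while np.max(np.abs(x)) <= R:
        if np.max(np.abs(x)) <= r:
            tr.add(tuple(x))
        x[rng.integers(d)] += rng.choice((-1, 1))
    return tr

boundary = [p for p in product(range(-r, r + 1), repeat=d)
            if max(abs(c) for c in p) == r]
e_K = np.array([np.mean([escapes(p) for _ in range(100)]) for p in boundary])
cap = e_K.sum()

n = rng.poisson(u * cap)                  # Poisson number of trajectories
I_u = set()
for j in rng.choice(len(boundary), size=n, p=e_K / cap):
    I_u |= trace_in_K(boundary[j])        # accumulate the trace left in K
print("sampled |I^u cap K| =", len(I_u))
```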

Random interlacements also satisfy (see above (0.5), (0.7), below (0.8) and (1.48) in [23])

$$\begin{aligned}&\text{the law of } \mathcal{I }^{u} \text{ under } Q_{0} \text{ is translation invariant for all } u\ge 0,\end{aligned}$$
(2.20)
$$\begin{aligned}&\mathcal{I }^{u} \text{ is increasing in the sense that } Q_{0}\text{-almost surely } \mathcal{I }^{v}\subset \mathcal{I }^{u} \text{ for } v\le u,\end{aligned}$$
(2.21)
$$\begin{aligned}&\text{if } \mathcal{I }_{1}^{u} \text{ and } \mathcal{I }_{2}^{v} \text{ are independent with the laws under } Q_{0} \text{ of } \mathcal{I }^{u} \text{ and } \mathcal{I }^{v}\\&\text{respectively, then } (\mathcal{I }_{1}^{u},\mathcal{I }_{1}^{u}\cup \mathcal{I }_{2}^{v}) \text{ has the law of } (\mathcal{I }^{u},\mathcal{I }^{u+v}) \text{ under } Q_{0}. \end{aligned}$$
(2.22)

Finally (see (1.58) and (1.59) of [23])

$$\begin{aligned} Q_{0}[x\,{\notin }\,\mathcal{I }^{u}]=\exp \left(-\frac{u}{g(0)}\right)\;\;\,\text{ and}\,\;\; Q_{0}[x,y\,{\notin }\,\mathcal{I }^{u}]=\exp \left(\!-\!\frac{2u}{g(0)+g(x-y)}\right),x,y\in \mathbb{Z }^{d}.\nonumber \\ \end{aligned}$$
(2.23)

(The properties (2.21)–(2.23) in fact also follow from (2.18) and (2.19)).

The following lemma, which follows from Lemma 1.5 from [6] (by letting \(a\) in (1.39) in [6] go to infinity, and using (2.23)), will be crucial in our proof of (1.2).

Lemma 2.3

\((d\ge 3)\) There is a constant \(c_4 >1\) such that for any \(K\subset \mathbb{Z }^{d}\) with \(0\notin K\)

$$\begin{aligned} \sum \limits _{v\in K}Q_{0}[0,v\notin \mathcal{I }^{u}] \le |K|\left(Q_{0}[0\notin \mathcal{I }^{u}]\right)^{2}\{1\!+\!cu\} + ce^{-c_{4}\frac{u}{g(0)}},\quad \text{ for} \text{ all} \,u\!\ge \!0.\nonumber \\ \end{aligned}$$
(2.24)

Finally we define the cover time \(C_{F}\) of a set \(F\subset \mathbb{T }_{N}\) by

$$\begin{aligned} C_{F}\mathop {=}\limits ^{\mathrm{def}} \max _{x\in F}H_{x}=\inf \{t\ge 0:F\subset Y(0,t)\}. \end{aligned}$$
(2.25)

Note that \(C_{N}=C_{\mathbb{T }_{N}}\), cf. (1.1). For convenience we introduce the notation

$$\begin{aligned} u_{F}(z)=g(0)\{\log |F|+z\}, \end{aligned}$$
(2.26)

so that \(\{\frac{C_{N}}{g(0)N^{d}}-\log N^{d}\le z\}=\{C_{N}\le u_{\mathbb{T }_{N}}(z)N^{d}\}\), cf. (1.2).

3 Gumbel fluctuations, coupling with random interlacements and corollaries

In this section we state our two main theorems and derive two corollaries. The first main result is Theorem 3.1, which, roughly speaking, says that the cover times of large subsets of the torus (for \(d\ge 3\)) have Gumbel fluctuations (and implies (1.2) from the introduction). The second main result is Theorem 3.2, which states the coupling of random walk in the torus and random interlacements (see (1.3) in the introduction). The proofs of the theorems are contained in the subsequent sections.

The first corollary is Corollary 3.4 which in essence says that the “point process of vertices covered last” (see (1.4)) converges in law to a Poisson point process as the side length of the torus goes to infinity (see (1.5)). The second corollary is Corollary 3.5 and roughly says that for any fixed \(k\ge 1\) the last \(k\) vertices to be hit by the random walk are far apart, at distance of order \(N\).

We now state our result about fluctuations of the cover time. Recall the notation from (2.25) and (2.26).

Theorem 3.1

(\(d\ge 3\),\(N\ge 3\)) For all \(F\subset \mathbb{T }_{N}\) we have

$$\begin{aligned} \sup _{z\in \mathbb{R }}\left|P[C_{F}\le N^{d}u_{F}(z)]-\exp (-e^{-z})\right|\le c|F|^{-c}. \end{aligned}$$
(3.1)

We will prove Theorem 3.1 in Sect. 4.

Next we will state the coupling result. For \(n\ge 1\) and \(x_{1},\ldots ,x_{n}\in \mathbb{T }_{N}\) we define the

$$\begin{aligned} \text{separation } s \text{ of the vertices } x_{1},\ldots ,x_{n}\in \mathbb{T }_{N}, \text{ by } s= \left\{ \begin{array}{ll} N&\quad \text{if } n=1,\\ \min _{i\ne j}d_{\infty }(x_{i},x_{j})&\quad \text{if } n>1. \end{array}\right. \end{aligned}$$
(3.2)

For an arbitrarily small \(\varepsilon >0\) which does not depend on \(N\) we define the box

$$\begin{aligned} A=B(0,s^{1-\varepsilon }). \end{aligned}$$
(3.3)

The result will couple the trace of random walk in the boxes \(A+x_{1},\ldots ,A+x_{n}\) with independent random interlacements. Note that thanks to (3.2) and (3.3) the boxes are “far apart” (in the sense that the distance between them, of order \(s\), is “much larger” than their radius, which is \(s^{1-\varepsilon }\)) and “at most mesoscopic” (in that their radius, at most \(N^{1-\varepsilon }\), is “much smaller” than the side-length \(N\) of the torus). Given a level \(u>0\) and a \(\delta >0\), with these parameters satisfying appropriate conditions, we will construct independent random interlacements \((\mathcal{I }_{1}^{v})_{v\ge 0},\ldots , (\mathcal{I }_{n}^{v})_{v\ge 0}\) such that (recall the notation \(Y(a,b)\) from above (2.2))

$$\begin{aligned} Q_{1}[\mathcal{I }_{i}^{u(1-\delta )}\cap A\subset (Y(0,uN^{d})-x_{i})\cap A\subset \mathcal{I }_{i}^{u(1+\delta )}\cap A]\ge 1-c_{5}ue^{-cs^{c_{5}}}\quad \text{for all } i. \end{aligned}$$
(3.4)

Formally we have

Theorem 3.2

(\(d\ge 3\),\(N\ge 3\)) For \(n\ge 1\) let \(x_{1},\ldots ,x_{n}\in \mathbb{T }_{N}\) be distinct and have separation \(s\) (see (3.2)), and let \(\varepsilon \in (0,1)\). Further assume \(u\ge s^{-c_{5}},1\ge \delta \ge \frac{1}{c_{5}}s^{-c_{5}}\), \(n\le s^{c_{5}}\), where \(c_{5}=c_{5}(\varepsilon )\). We can then construct a space \((\Omega _{1},\mathcal{A }_{1},Q_{1})\) with a random walk \(Y_{\cdot }\) with law \(P\) and independent random interlacements \((\mathcal{I }_{i}^{v})_{v\ge 0},i=1,\ldots ,n\), each with the law of \((\mathcal{I }^{v})_{v\ge 0}\) under \(Q_{0}\), such that (3.4) holds.

Theorem 3.2 will be proved in Sect. 5.

In the proof of Corollary 3.4 we will need the following estimate on the probability of hitting a point, which is a straightforward consequence of Theorem 3.2 (with \(n=1\)).

Lemma 3.3

(\(d\ge 3\),\(N\ge 3\)) There exists a constant \(c_{6}\) such that if \(N^{-c_{6}}\le u\le N^{c_{6}}\) then for all \(x\in \mathbb{T }_{N}\)

$$\begin{aligned} Q_{0}[0\notin \mathcal{I }^{u}](1-cN^{-c})\le P[x\notin Y(0,uN^{d})]\le Q_{0}[0\notin \mathcal{I }^{u}](1+cN^{-c}). \end{aligned}$$
(3.5)

Proof

We apply Theorem 3.2 with \(n=1\) (so that the separation \(s\) is \(N\)), \(x_{1}=0\), \(\varepsilon =\frac{1}{4}\) (say) and \(\delta =c_{5}^{-1}N^{-c_{5}}\) (where \(c_{5}=c_{5}(\frac{1}{4})\) is the constant from Theorem 3.2). By choosing \(c_{6}\le c_{5}\) we have \(u\ge s^{-c_{5}}\), so that all the conditions of Theorem 3.2 are satisfied and the coupling \(Q_{1}\) of \(Y_{\cdot }\) and random interlacements can be constructed. Therefore it follows from (3.4) that

$$\begin{aligned} Q_{0}[0\notin \mathcal{I }^{u(1+\delta )}]-cue^{-cN^{c_{5}}}\!\le \! P[0\notin Y(0,uN^{d})]\!\le \! Q_{0}[0\notin \mathcal{I }^{u(1-\delta )}]+cue^{-cN^{c_{5}}}.\nonumber \\ \end{aligned}$$
(3.6)

But we also have, if we pick \(c_{6}<c_{5}\), that

$$\begin{aligned} Q_{0}[0\notin \mathcal{I }^{u(1-\delta )}]+cue^{-cN^{c_{5}}}&\mathop {=}\limits ^{(2.23)} Q_{0}[0\notin \mathcal{I }^{u}]\left(e^{\frac{\delta u}{g(0)}}+cue^{\frac{u}{g(0)}-cN^{c_{5}}}\right)\\&\le Q_{0}[0\notin \mathcal{I }^{u}](1+cN^{-c}), \end{aligned}$$

since \(cue^{\frac{u}{g(0)}-cN^{c_{5}}}\le cN^{c}e^{-cN^{c}}\) and \(\delta u\le cN^{c_{6}-c_{5}}\le cN^{-c}\) (recall \(u\le N^{c_{6}},\delta =cN^{-c_{5}}\) and \(c_{6}<c_{5}\)). Similarly if \(c_{6}<c_{5}\) we have \(Q_{0}[0\notin \mathcal{I }^{u(1+\delta )}]-cue^{-cN^{c_{5}}}\ge Q_{0}[0\notin \mathcal{I }^{u}](1-cN^{-c})\), so (3.5) follows. \(\square \)

We now state and prove the first corollary. The proof uses Theorem 3.1 and Theorem 3.2 (via Lemma 3.3). Recall the definition of \(\mathcal{N }_{N}^{z}\) from (1.4).

Corollary 3.4

(\(d\ge 3\)) (1.5) holds.

Proof

By Kallenberg’s Theorem (Proposition 3.22, p. 156 of [17]) the result follows from

$$\begin{aligned} \lim _{N\rightarrow \infty }P[\mathcal{N }_{N}^{z}(I)=0]&= \exp (-\lambda (I)e^{-z})\quad \text{ and} \end{aligned}$$
(3.7)
$$\begin{aligned} \lim _{N\rightarrow \infty }E[\mathcal{N }_{N}^{z}(I)]&= \lambda (I)e^{-z}, \end{aligned}$$
(3.8)

for all \(I\) in the collection \(\mathcal{J }=\{I:\ I \text{ a finite union of products of open intervals in } (\mathbb{R }/\mathbb{Z })^{d},\ \lambda (I)>0\}\). Note that

$$\begin{aligned} \lim _{N\rightarrow \infty } \frac{|NI\cap \mathbb{T }_{N}|}{|\mathbb{T }_{N}|} =\lambda (I)\quad \text{ for} \text{ all} \,\,I\in \mathcal{J }\!, \end{aligned}$$
(3.9)

since \(|NI\cap \mathbb{T }_{N}|/|\mathbb{T }_{N}|\) is the mass assigned to the set \(I\) by the measure \(\sum _{x\in \mathbb{T }_{N}}N^{-d}\delta _{x/N}\) on \((\mathbb{R }/\mathbb{Z })^{d}\), which converges weakly to \(\lambda \) as \(N\rightarrow \infty \) (note that \(I\) is a continuity set of \(\lambda \), and see the Portmanteau theorem, Proposition 5.1, p. 9, [18]).

Fix an \(I\in \mathcal{J }\). To check (3.7) note that \(\{\mathcal{N }_{N}^{z}(I)=0\}=\{C_{NI\cap \mathbb{T }_{N}}\le N^{d}u_{\mathbb{T }_{N}}(z)\}\) (see (1.4), (2.25) and (2.26)). Since \(u_{\mathbb{T }_{N}}(z)=u_{NI\cap \mathbb{T }_{N}}(z-\log \frac{|NI\cap \mathbb{T }_{N}|}{N^{d}})\) (see (2.26)) and \(|NI\cap \mathbb{T }_{N}|\rightarrow \infty \) as \(N\rightarrow \infty \) (recall \(\lambda (I)>0\)) we get from (3.1) that

$$\begin{aligned} \Big | P[\mathcal{N }_{N}^{z}(I)=0]-\exp \Big (-e^{-(z-\log \frac{|NI\cap \mathbb{T }_{N}|}{N^{d}})}\Big )\Big |\rightarrow 0 \text{ as} \,\,N\rightarrow \infty . \end{aligned}$$

But by (3.9) we have \(\exp (-e^{-(z-\log \frac{|NI\cap \mathbb{T }_{N}|}{N^{d}})})\rightarrow \exp (-\lambda (I)e^{-z})\) as \(N\rightarrow \infty \) so (3.7) follows.

To check (3.8) note that by (1.4) and (2.26)

$$\begin{aligned} E[\mathcal{N }_{N}^{z}(I)]=|NI\cap \mathbb{T }_{N}|P[0\notin Y(0,N^{d}u_{\mathbb{T }_{N}}(z))]. \end{aligned}$$
(3.10)

Using (3.5) with \(u=u_{\mathbb{T }_{N}}(z)\) (\(\in [N^{-c_{6}},N^{c_{6}}]\) for \(N\ge c(z)\)) and \(Q_{0}[0\notin \mathcal{I }^{u_{\mathbb{T }_{N}}(z)}]=\frac{e^{-z}}{N^{d}}\) (see (2.23)) we get from (3.10) that

$$\begin{aligned} e^{-z}\lim _{N\rightarrow \infty }\frac{|NI\cap \mathbb{T }_{N}|}{N^{d}}(1-cN^{-c})&\le \lim _{N\rightarrow \infty }E[\mathcal{N }_{N}^{z}(I)]\\&\le e^{-z}\lim _{N\rightarrow \infty }\frac{|NI\cap \mathbb{T }_{N}|}{N^{d}}(1+cN^{-c}). \end{aligned}$$

Now using (3.9) we find that (3.8) holds. Thus the proof of (1.5) is complete. \({\square }\)

See Remark 9.4 for a potential generalisation of Corollary 3.4. We now state the second corollary, which is a consequence of the first. The proof follows those of Corollary 2.3 of [6] and Proposition 2.8 of [5], so we omit some details.

Corollary 3.5

(\(d\ge 3\)) Let \(Z_{1},\ldots ,Z_{N^{d}}\) be the vertices of \(\mathbb{T }_{N}\) ordered by entrance time,

$$\begin{aligned} \text{so that}\quad C_{\mathbb{T }_{N}}=H_{Z_{1}}>H_{Z_{2}}>\cdots >H_{Z_{N^{d}}}. \end{aligned}$$

Then for any \(k\ge 2\)

$$\begin{aligned} \lim _{\delta \rightarrow 0}\limsup _{N\rightarrow \infty }P[\exists \,1\le i<j\le k \text{ with } d(Z_{i},Z_{j})\le \delta N]=0, \end{aligned}$$
(3.11)

or in other words “the last \(k\) vertices to be hit are far apart, at distance of order \(N\)”.

Proof

Note that for any \(N,\delta ,k\) and \(z\in \mathbb{R }\) the probability in (3.11) is bounded above by

$$\begin{aligned} P[\exists \,x\ne y\in \mathrm{Supp}(\mathcal{N }_{N}^{z}) \text{ with } d(x,y)\le \delta N]+P[\mathcal{N }_{N}^{z}((\mathbb{R }/\mathbb{Z })^{d})<k]. \end{aligned}$$
(3.12)

Furthermore the first probability in (3.12) is bounded above by \(E[\mathcal{N }_{N}^{z}\otimes \mathcal{N }_{N}^{z}(g)-\mathcal{N }_{N}^{z}((\mathbb{R }/\mathbb{Z })^{d})]\), where \(g\) is the function \((x,y)\mapsto f(x-y)\), for \(f:(\mathbb{R }/\mathbb{Z })^{d}\rightarrow [0,1]\) continuous with \(f(x)=1\) when \(d(0,x)\le \delta \) and \(f(x)=0\) if \(d(0,x)\ge 2\delta \). Thus using (1.5), one sees that the limsup in (3.11) is bounded above by

$$\begin{aligned} \mathbb E [\mathcal{N }^{z}\otimes \mathcal{N }^{z}(g) -\mathcal{N }^{z}((\mathbb{R }/\mathbb{Z })^{d})]+\mathbb P [\mathcal{N }^{z}((\mathbb{R }/\mathbb{Z })^{d})<k]. \end{aligned}$$
(3.13)

Using a calculation involving Palm measures (we omit most details, cf. (2.19) of [6], and see the computation below) the first term in (3.13) can be shown to be equal to \(e^{-2z}\int _{(\mathbb{R }/\mathbb{Z })^{d}}f(x)dx\). Since \(\int _{(\mathbb{R }/\mathbb{Z })^{d}}f(x)dx\le c\delta ^{d}\), this term vanishes as \(\delta \rightarrow 0\), and one thus finds that for all \(z\in \mathbb{R }\) the left-hand side of (3.11) is bounded above by \(\mathbb P [\mathcal{N }^{z}((\mathbb{R }/\mathbb{Z })^{d})<k]\). But \(\mathbb P [\mathcal{N }^{z}((\mathbb{R }/\mathbb{Z })^{d})<k] \rightarrow 0\) as \(z\rightarrow -\infty \), so (3.11) follows. \(\square \)
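For completeness, the omitted computation is short (our sketch, via the standard Mecke formula for Poisson point processes): since \(\mathcal{N }^{z}\) has intensity measure \(e^{-z}\lambda \) and \(\lambda ((\mathbb{R }/\mathbb{Z })^{d})=1\),

$$\begin{aligned} \mathbb E \big [\mathcal{N }^{z}\otimes \mathcal{N }^{z}(g)-\mathcal{N }^{z}((\mathbb{R }/\mathbb{Z })^{d})\big ]=\int \!\!\int f(x-y)\,e^{-z}\lambda (dx)\,e^{-z}\lambda (dy)=e^{-2z}\int _{(\mathbb{R }/\mathbb{Z })^{d}}f(x)dx, \end{aligned}$$

where the subtracted term exactly removes the diagonal contribution (\(g(x,x)=f(0)=1\) for every point \(x\) of \(\mathcal{N }^{z}\)) and the last equality uses the translation invariance of \(\lambda \).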

4 Coupling gives Gumbel fluctuations

In this section we use the coupling result Theorem 3.2 to prove Theorem 3.1, i.e. to prove that cover times in \(\mathbb{T }_{N}\) have Gumbel fluctuations. Essentially speaking we combine the method of the proofs of Theorem 0.1 in [5] and Theorem 0.1 in [6] with the coupling Theorem 3.2.

The first step is to prove that if \(F\subset \mathbb{T }_{N}\) is smaller than \(N^{c}\) for some small exponent \(c\) (but still “large”) and consists of isolated vertices separated by a distance at least \(|F|^{c}\), then the cover time \(C_{F}\) is approximately distributed as \(N^{d}\{g(0)\log |F|+G\}\), where \(G\) is a standard Gumbel random variable. We prove this (uniformly in the starting vertex of the random walk) in Lemma 4.1. To prove Lemma 4.1 we use a mixing argument to reduce to the case when random walk starts from the uniform distribution. This case is then proven in Lemma 4.4 using the fact that for vertices that are far apart, entrance times are approximately independent (which is proven in Lemma 4.3, using the coupling result Theorem 3.2), and from this a simple calculation will show that the cover time (which is then the maximum of almost i.i.d. random variables) has distribution close to the Gumbel distribution.

To get the Gumbel limit result for arbitrary \(F\subset \mathbb{T }_{N}\) (e.g. for \(F=\mathbb{T }_{N}\), where the entrance times of close vertices are far from being independent) we consider the set of \((1-\rho )\)-late points (using the terminology from [10]), which is the set of vertices that are not yet hit at a \((1-\rho )\) fraction of the typical time it takes to cover \(F\), or more formally

$$\begin{aligned} F_{\rho }=F\backslash Y(0,t(\rho )), \text{ where} \,t(\rho )=N^{d}(1-\rho )g(0)\log |F| \text{ and} \,0<\rho <1.\quad \end{aligned}$$
(4.1)

It turns out, roughly speaking, that if we use a fixed but small \(\rho \) then with high probability this set consists of isolated vertices that are at distance at least \(|F|^{c}\) from one another, and that \(|F_{\rho }|\) concentrates around \(E[|F_{\rho }|]\), which we will see, is close to \(|F|^{\rho }\). This is done in Lemma 4.2 by relating the probability of two vertices not being hit by random walk to the probability of two vertices not being in a random interlacement using the coupling (in Lemma 4.5), and using Lemma 2.3.

This will imply that with high probability the set \(F_{\rho }\) fulfils the conditions of Lemma 4.1 (i.e. is “smaller than \(N^{c}\)” and separated), so that, using the Markov property, we will be able to show that conditioned on \(F_{\rho }\) the time \(C_{F_{\rho }}\circ \,\theta _{t(\rho )}\) has distribution close to \(N^{d}g(0)\{\log |F_{\rho }|+G\}\approx N^{d}g(0)\{\rho \log |F|+G\}\), where \(G\) is a standard Gumbel random variable. Since, on the event that \(F_{\rho }\) is non-empty (which has probability close to one), \(C_{F}=t(\rho )+C_{F_{\rho }}\circ \theta _{t(\rho )}\) we will be able to conclude that \(C_{F}\) has distribution close to \(N^{d}g(0)\{\log |F|+G\}\), which is the claim of Theorem 3.1.

We start by stating Lemma 4.1, which says that the cover time has law close to the Gumbel distribution for “well-separated” sets that are not too large.

Lemma 4.1

(\(d\ge 3\), \(N\ge 3\)) There are constants \(c_7\) and \(c_8\) such that if \(F\subset \mathbb{T }_{N}\) satisfies \(2\le |F|\le N^{c_7}\) and \(d_{\infty }(x,y)\ge |F|^{c_8}\) for all \(x,y\in F,x\ne y\), then

$$\begin{aligned} \sup _{z\in \mathbb{R },x\in \mathbb{T }_{N}}\big |P_{x}[F\subset Y(0,u_{F}(z)N^{d})]-\exp (-e^{-z})\big |\le c|F|^{-c}. \end{aligned}$$
(4.2)

(Recall that we have defined the range \(Y(0,u_{F}(z)N^{d})\) such that it is empty if \(u_{F}(z)N^{d}\!<\!0\).) We will prove Lemma 4.1 after proving Theorem 3.1. To prove Theorem 3.1 we must prove something like (4.2) for arbitrary \(F\subset \mathbb{T }_{N}\). We do so by studying the set of late points \(F_{\rho }\) (recall (4.1)). We will show that “with high probability” it belongs to the collection \(\mathcal{G }\) of “good subsets of \(F\)”, where

$$\begin{aligned} \mathcal{G }=\{F^{\prime }\subset F:\left||F^{\prime }|-|F|^{\rho }\right|\le |F|^{\frac{2}{3}\rho }\quad \text{ and} \,\,\inf _{x,y\in F^{\prime },x\ne y}d_{\infty }(x,y)\ge |F|^{\frac{1}{2d}}\},\qquad \end{aligned}$$
(4.3)

or in other words that it has cardinality close to \(|F|^{\rho }\) and is “well-separated”. Formally:

Lemma 4.2

(\(d\ge 3\),\(N\ge 3\)) There exists a constant \(c_9\) such that for \(0 < \rho \le c_9\) and \(F\subset \mathbb{T }_{N}\) with \(|F|\ge c(\rho )\)

$$\begin{aligned} P[F_{\rho }\notin \mathcal{G }]\le c|F|^{-c(\rho )}. \end{aligned}$$
(4.4)

Before proving Lemma 4.2 we use it to prove Theorem 3.1. Recall that \(\{C_{F}\le t\}\mathop {=}\limits ^{(2.25)}\{F\subset Y(0,t)\}\).

Proof of Theorem 3.1

By (4.4) we have for \(0<\rho \le c_9\) and \(|F|\ge c(\rho )\)

$$\begin{aligned} \left|P[C_{F}\le u_{F}(z)N^{d}]-P[F\subset Y(0,u_{F}(z)N^{d}),F_{\rho }\in \mathcal{G }]\right|\le c|F|^{-c(\rho )}. \end{aligned}$$
(4.5)

Also by the Markov property, if \(|F|\ge c(\rho )\) (so that \(\emptyset \notin \mathcal{G }\)),

$$\begin{aligned}&P[F\subset Y(0,u_{F}(z)N^{d}),F_{\rho }\in \mathcal{G }]\nonumber \\&\quad =\underset{x\in \mathbb{T }_{N},F^{\prime }\in \mathcal{G }}{\sum } P[F_{\rho }=F^{\prime },Y_{t(\rho )}=x] P_{x}[F^{\prime }\subset Y(0,u_{F}(z)N^{d}-t(\rho ))].\quad \end{aligned}$$
(4.6)

Set \(h=\log \frac{|F^{\prime }|}{|F|^{\rho }}\) so that \(P_{x}[F^{\prime }\subset Y(0,u_{F}(z)N^{d}-t(\rho ))]=P_{x}[F^{\prime }\subset Y(0,u_{F^{\prime }}(z-h)N^{d})]\) (see (2.26)).

Also fix \(\rho =c \le c_9\) small enough so that \(2d\rho \le c_7\) and \(\frac{1}{4d\rho }\ge c_8\). Then (4.5) holds when \(|F|\ge c\). Furthermore Lemma 4.1 applies to all \(F^{\prime }\in \mathcal{G }\) when \(|F|\ge c\), since by (4.3) every \(F^{\prime }\in \mathcal{G }\) satisfies \(|F^{\prime }|\le 2|F|^{\rho }\le |F|^{2\rho }\), if \(|F|\ge c=c(\rho )\), so that \(2d\rho \le c_7\) implies \(|F^{\prime }|\le |F|^{2\rho }\le N^{2d\rho }\le N^{c_7}\) and \(\frac{1}{4d\rho }\ge c_8\) implies \(\inf _{x,y\in F^{\prime },x\ne y}d_{\infty }(x,y)\ge |F|^{\frac{1}{2d}}\ge |F^{\prime }|^ {\frac{1}{4d\rho }}\ge |F^{\prime }|^{c_8}\). So applying Lemma 4.1 with \(F^{\prime }\) in place of \(F\), we get that for all \(|F|\ge c\), \(x\in \mathbb{T }_{N}\) and \(F^{\prime }\in \mathcal{G }\) we have

$$\begin{aligned} \big |P_{x}[F^{\prime }\subset Y(0,u_{F}(z)N^{d}-t(\rho ))]-\exp (-e^{-(z-h)})\big |\le c|F|^{-c}. \end{aligned}$$
(4.7)

But, by the mean value theorem (the derivative of \(z\mapsto \exp (-e^{-z})\) is \(e^{-z}\exp (-e^{-z})\le e^{-1}\)), it is elementary that

$$\begin{aligned} \big |\exp (-e^{-(z-h)})-\exp (-e^{-z})\big |\le c|h|\quad \text{ for} \text{ all} \,\,z,h\in \mathbb{R }, \end{aligned}$$
(4.8)

and for the present \(h\) we have \(|h|\le \max (\log (1+|F|^{-\frac{1}{3}\rho }),-\log (1-|F|^{-\frac{1}{3}\rho }))\le c|F|^{-\frac{1}{3}\rho }\) (see (4.3)) provided \(|F|\!\ge \! c\), so in fact \(|P_{x}[F^{\prime }\subset Y(0,u_{F}(z)N^{d}-t(\rho ))]-\exp (-e^{-z})|\le c|F|^{-c}\) for all \(F^{\prime }\in \mathcal{G }\). Thus (4.6) implies that

$$\begin{aligned} |P[F\subset Y(0,u_{F}(z)N^{d}),F_{\rho }\in \mathcal{G }]-\exp (-e^{-z})P[F_{\rho }\in \mathcal{G }]|\le c|F|^{-c}. \end{aligned}$$

Combining this with (4.5) and one more application of (4.4), the claim (3.1) follows for \(|F|\ge c\) (recall that \(\rho \) is itself a constant). But by adjusting constants (3.1) holds for all \(F\), so the proof of Theorem 3.1 is complete. \(\square \)

We now turn to the proof of Lemma 4.1. We will need the following lemma, which says that “distant vertices have almost independent entrance times”.

Lemma 4.3

(\(d\ge 3,\,N\ge 3\)) Let \(x_{1},\ldots ,x_{n}\in \mathbb{T }_{N}\) and let \(s\) be the separation defined as in (3.2). There is a constant \(c_{10}\) such that if \(n\le s^{c_{10}}\) and \(u\in [1,s^{c_{10}}]\) we have

$$\begin{aligned} \left(Q_{0}[0\in \mathcal{I }^{u}]\right)^{n}-cs^{-c}\le P[x_{1},\ldots ,x_{n}\in Y(0,uN^{d})]\le \left(Q_{0}[0\in \mathcal{I }^{u}]\right)^{n}+cs^{-c}.\nonumber \\ \end{aligned}$$
(4.9)

Proof

We apply Theorem 3.2 with \(\varepsilon =\frac{1}{2}\) (say). We pick \(c_{10}\le \frac{c_{5}}{3}\), where \(c_{5}=c_{5}(\frac{1}{2})\) is the constant from Theorem 3.2, so that \(n\le s^{c_{5}}\). Thus if \(\delta =c_{5}^{-1}s^{-c_{5}}\) and \(s\ge c\) (so that \(u\ge 1\ge s^{-c_{5}}\)) all the conditions of Theorem 3.2 are satisfied. By (3.4) we have for \(s\ge c\)

$$\begin{aligned} Q_{1}[0\in \mathcal{I }_{i}^{u(1-\delta )}\forall i]-cs^{-c}&\le P[x_{1},\ldots ,x_{n}\in Y(0,uN^{d})]\nonumber \\&\le Q_{1}[0\in \mathcal{I }_{i}^{u(1+\delta )}\forall i]+cs^{-c}, \end{aligned}$$
(4.10)

where we also use that \(cnue^{-cs^{c_{5}}}\le cs^{-c}\), since \(u,n\le s^{c}\). By the independence of \(\mathcal{I }_{1},\ldots ,\mathcal{I }_{n}\),

$$\begin{aligned} Q_{1}[0\in \mathcal{I }_{i}^{u(1+\delta )}\,\forall i]&= \big (Q_{0}[0\in \mathcal{I }^{u(1+\delta )}]\big )^{n} \mathop {=}\limits ^{(2.23)} \left(Q_{0}[0\in \mathcal{I }^{u}]\right)^{n}\bigg (\frac{1-e^{-\frac{u(1+\delta )}{g(0)}}}{1-e^{-\frac{u}{g(0)}}}\bigg )^{n}\\&\le (Q_{0}[0\in \mathcal{I }^{u}])^{n}(1+cs^{-c}), \end{aligned}$$

where we use that

$$\begin{aligned} \frac{1-e^{-\frac{u(1+\delta )}{g(0)}}}{1-e^{-\frac{u}{g(0)}}} =1+e^{-\frac{u}{g(0)}}\;\frac{1-e^{-\frac{u\delta }{g(0)}}}{1-e^{-\frac{u}{g(0)}}}\le 1+c(1-e^{-\frac{u\delta }{g(0)}})\le 1+cs^{-2c_{10}} \end{aligned}$$

(note \(u\ge 1\) and \(u\delta \le cs^{c_{10}-c_{5}}\le cs^{-2c_{10}}\)) and \((1+cs^{-2c_{10}})^{n}\le 1+cs^{-c}\) (note \(n\le s^{c_{10}}\)). Similarly

$$\begin{aligned} Q_{1}[0\in \mathcal{I }_{i}^{u(1-\delta )}\forall i]\ge \left(Q_{0}[0\in \mathcal{I }^{u}]\right)^{n}(1-cs^{-c})\quad \text{ if} \,\,s\ge c. \end{aligned}$$

Applying these inequalities to the right- and left-hand sides of (4.10) yields (4.9) for \(s\ge c\). But by adjusting constants in (4.9) the same holds for all \(s\ge 1\).\(\square \)

We will now use Lemma 4.3 to show (4.11), which is almost like our goal (4.2), but has random walk starting from the uniform distribution.

Lemma 4.4

(\(d\ge 3\),\(N\ge 3\)) There are constants \(c_{11}\) and \(c_{12}\), such that for \(F\subset \mathbb{T }_{N}\) satisfying \(|F|\le N^{c_{11}}\) and \(d_{\infty }(x,y)\ge |F|^{c_{12}}\) for all \(x,y\in F,x\ne y\), we have

$$\begin{aligned} \sup _{z\in \mathbb{R }}\big |P[F\subset Y(0,u_{F}(z)N^{d})]-\exp (-e^{-z})\big |\le c|F|^{-c}. \end{aligned}$$
(4.11)

Proof

The claim (4.11) follows from

$$\begin{aligned} \sup _{z\in [-\frac{\log |F|}{4},\frac{\log |F|}{4}]}\big |P[F\subset Y(0,u_{F}(z)N^{d})]-\exp (-e^{-z})\big |\le c|F|^{-c}, \end{aligned}$$
(4.12)

since \(P[F\subset Y(0,u_{F}(z)N^{d})]\) is monotone in \(z\), \(\exp (-e^{-z})\le c|F|^{-c}\) when \(z\le -\frac{\log |F|}{4}\), and \(\exp (-e^{-z})\ge 1-c|F|^{-c}\) when \(z\ge \frac{\log |F|}{4}\). For the rest of the proof we therefore assume that \(z\in [-\frac{\log |F|}{4},\frac{\log |F|}{4}]\).
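To spell out one of the two boundary cases: if \(z\ge \frac{\log |F|}{4}\), then monotonicity and (4.12) applied at \(z=\frac{\log |F|}{4}\) give

$$\begin{aligned} 1\ge P[F\subset Y(0,u_{F}(z)N^{d})]\ge P\Big [F\subset Y\Big (0,u_{F}\Big (\tfrac{\log |F|}{4}\Big )N^{d}\Big )\Big ]\ge \exp \big (-|F|^{-\frac{1}{4}}\big )-c|F|^{-c}\ge 1-c|F|^{-c}, \end{aligned}$$

and since also \(1\ge \exp (-e^{-z})\ge \exp (-|F|^{-\frac{1}{4}})\ge 1-c|F|^{-c}\) for such \(z\), the difference in (4.11) is at most \(c|F|^{-c}\); the case \(z\le -\frac{\log |F|}{4}\) is handled symmetrically.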

First, assume also that \(|F| \ge c\). Let \(F=\{x_{1},\ldots ,x_{n}\}\) so that \(n=|F|\) and the separation \(s\) satisfies \(s\ge |F|^{c_{12}}\). Also let \(u=u_{F}(z)\). To be able to apply Lemma 4.3 we pick \(c_{12}\) large enough so that \(c_{12}c_{10}\ge 1\), and thus \(|F|\le |F|^{c_{12} c_{10}}\le s^{c_{10}}\), which implies \(n\le s^{c_{10}}\) and \(1\le g(0)\frac{3}{4}\log |F|\le u\le g(0)\frac{5}{4}\log |F|\le |F|\le s^{c_{10}}\) since \(z\in [-\frac{1}{4}\log |F|,\frac{1}{4}\log |F|]\) (recall that we assumed \(|F|\ge c\)). Thus by (4.9)

$$\begin{aligned} \big |P[F\subset Y(0,u_{F}(z)N^{d})]-\big (Q_{0}[0\in \mathcal{I }^{u_{F}(z)}]\big )^{|F|}\big |\le cs^{-c}\le c|F|^{-c}\quad \text{ for} \,\,|F|\ge c.\nonumber \\ \end{aligned}$$
(4.13)

We have \((Q_{0}[0\in \mathcal{I }^{u_{F}(z)}])^{|F|}=(1-\frac{e^{-z}}{|F|})^{|F|}\) by (2.23). But it is elementary that

$$\begin{aligned} \exp (-e^{-z})-c|F|^{-c}\!\le \! \Big (1- \frac{e^{-z}}{|F|}\Big )^{|F|}\!\le \exp (-e^{-z}) \text{ for} \,\,z\!\ge - \frac{1}{4} \log |F| \text{ and} \,\,|F|\!\ge \! c. \end{aligned}$$
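Indeed, for \(z\ge -\frac{1}{4}\log |F|\) and \(|F|\ge c\) we have \(0\le \frac{e^{-z}}{|F|}\le |F|^{-\frac{3}{4}}\le \frac{1}{2}\), so the Taylor expansion bound \(-x-x^{2}\le \log (1-x)\le -x\), valid for \(0\le x\le \frac{1}{2}\), gives

$$\begin{aligned} -\frac{e^{-z}}{|F|}-c|F|^{-\frac{3}{2}}\le \log \Big (1-\frac{e^{-z}}{|F|}\Big )\le -\frac{e^{-z}}{|F|}, \end{aligned}$$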

so that, multiplying by \(|F|\) and exponentiating, \(\exp (-e^{-z}-c|F|^{-\frac{1}{2}})\le (1-\frac{e^{-z}}{|F|})^{|F|}\le \exp (-e^{-z})\), and the lower bound is at least \(\exp (-e^{-z})-c|F|^{-\frac{1}{2}}\) since \(|e^{-a}-e^{-b}|\le |a-b|\) for \(a,b\ge 0\). Thus (4.12) follows for \(|F|\ge c\). Finally by adjusting constants (4.12) holds for any \(F\), so the proof of the lemma is complete. \(\square \)

Now we use (4.11) to get the desired result (4.2) (where the random walk can start from any vertex). To do this we show, roughly speaking, that in time \(|F|N^{2}\) the random walk will, with high probability, hit no vertex of \(F\) except possibly the one closest to its starting point. But it will also turn out that in time \(|F|N^{2}\) the random walk mixes, so that what happens after this time is governed by (4.11), and from this (4.2) will follow.

Proof of Lemma 4.1

Fix \(x\in \mathbb{T }_{N}\) and \(z\in \mathbb{R }\). By (2.4) we can (on an extended probability space \((\Omega ,\mathcal{A },Q)\)) construct a coupling of \((Y_{t})_{t\ge 0}\) with law \(P_{x}\) and a process \((Z_{t})_{t\ge 0}\) with law \(P\) such that \((Y_{N^{2}|F|+t})_{t\ge 0}\) coincides with \((Z_{t})_{t\ge 0}\) with probability at least \(1-ce^{-c|F|}\). Then if \(z_{-}=z-\frac{1}{|F|}\) (so that \(u_{F}(z_{-})N^{d}\mathop {\le }\limits ^{(2.26),g(0)\ge 1} u_F(z)N^{d}-\frac{N^{d}}{|F|}\overset{|F|\le N^{c_7}}{\le }u_{F}(z)N^{d}\,-\,|F|N^{2}\), provided \(c_7\) is chosen small enough) we have

$$\begin{aligned}&Q[F\subset Z(0,u_{F}(z_{-})N^{d})] - \;ce^{-c|F|}\le Q[F\subset Y(0,u_{F}(z)N^{d})]\nonumber \\&\quad \le Q[F\subset Y(0,|F|N^{2})\cup Z(0,u_{F}(z)N^{d})]+ce^{-c|F|}. \end{aligned}$$
(4.14)

Now (possibly making \(c_7\) smaller and \(c_8\) larger) we see that Lemma 4.4 applies to the left-hand side of (4.14) (recall \(Z_{\cdot }\) has law \(P\)) and we get \(\exp (-e^{-z_{-}})-c|F|^{-c}\le Q[F\subset Y(0,u_{F}(z)N^{d})]\), which together with (4.8) and \(h=\frac{1}{|F|}\) implies

$$\begin{aligned} \exp (-e^{-z})-c|F|^{-c}\le Q[F\subset Y(0,u_{F}(z)N^{d})] = P_{x}[F\subset Y(0,u_{F}(z)N^{d})].\nonumber \\ \end{aligned}$$
(4.15)

It remains to bound the right-hand side of (4.14) from above. Let \(y\) denote a vertex of \(F\) of minimal distance from \(x\) and let \(F^{\prime }=F\backslash \{y\}\). We then have

$$\begin{aligned}&Q[F\subset Y(0,|F|N^{2})\cup Z(0,u_{F}(z)N^{d})]\\&\quad \le Q[F^{\prime }\subset Y(0,|F|N^{2})\cup Z(0,u_{F}(z)N^{d})]\\&\quad \le P_{x}[H_{F^{\prime }}<|F|N^{2}]+Q[F^{\prime }\subset Z(0,u_{F}(z)N^{d})]. \end{aligned}$$

Now \(P_{x}[H_{F^{\prime }}<|F|N^{2}]\le \sum _{v\in F^{\prime }}P_{x}[H_{v}<N^{2+\frac{1}{10}}]\) (possibly decreasing \(c_7\) so that \(|F|\le N^{\frac{1}{10}}\)). By our assumption on \(F\) and the choice of \(y\) we have \(d_{\infty }(x,v)\ge \frac{1}{2}|F|^{c_8}\) for all \(v\in F^{\prime }\). Therefore using (2.11) with \(\lambda =\frac{1}{10}\), \(r_{1}=1\) and \(r_{2}=\frac{1}{2}|F|^{c_8}\) (possibly decreasing \(c_7\) even more so that \(r_{2}\le |F|^{c_8}\le N^{c_7c_8}\le N^{1-3\lambda }\)) we get \(P_{x}[H_{v}<N^{2+\frac{1}{10}}]\le c|F|^{-c_8}\) for all \(v\in F^{\prime }\), and thus \(P_{x}[H_{F^{\prime }}<|F|N^{2}]\le c|F|^{1-c_8}\le c|F|^{-c}\), since we may increase \(c_8\) so that \(c_8>1\). Letting \(z_{+}=z+\frac{2}{|F|}\) we have

$$\begin{aligned} u_{F}(z)\mathop {=}\limits ^{(2.26)}u_{F^{\prime }}\Big (z+\log \frac{|F|}{|F^{\prime }|}\Big ) \overset{|F^{\prime }|=|F|-1,|F|\ge 2}{\le }u_{F^{\prime }}(z_{+}), \end{aligned}$$

so that

$$\begin{aligned} Q[F\subset Y(0,|F|N^{2})\cup Z(0,u_{F}(z)N^{d})]&\le \;Q[F^{\prime }\subset Z(0,u_{F^{\prime }}(z_{+})N^{d})]+c|F|^{-c}\nonumber \\&= \; P[F^{\prime }\subset Y(0,u_{F^{\prime }}(z_{+})N^{d})]+c|F|^{-c}\nonumber \\&\le \; \exp (-e^{-z})+c|F|^{-c}, \end{aligned}$$
(4.16)

where the last inequality follows from Lemma 4.4 and (4.8) (similarly to the argument above (4.15)). Together with (4.15) and (4.14) this implies (4.2). \(\square \)

It still remains to prove Lemma 4.2, the other ingredient in the proof of (3.1). For the proof we will use the following bounds on the probability of not hitting two points, which are a consequence of the coupling Theorem 3.2.

Lemma 4.5

(\(d\ge 3,\,N\ge 3\)) There exists a constant \(c_{13}\) such that for all \(x,y\in \mathbb{T }_{N}\) we have, letting \(v=d_{\infty }(x,y)\),

$$\begin{aligned}&\!\!\!P[x,y\notin Y(0,uN^{d})]\!\le \!(1+cv^{-c_{13}})(Q_{0}[0\notin \mathcal{I }^{u}])^{2} \text{ if} \,\,u\in [1,v^{c_{13}}],\end{aligned}$$
(4.17)
$$\begin{aligned}&\!\!\!P[x,y\!\notin \! Y(0,uN^{d})]\!\le \!(1\!+\!cN^{-c_{13}})Q_{0}[0,x\!-\!y\!\notin \!\mathcal{I }^{u}] \text{ if} \,\,v \!\le \! N^{\frac{1}{2}},u\!\in \![1,N^{c_{13}}]. \end{aligned}$$
(4.18)

Proof

We start with (4.17). Let \(n=2,x_{1}=x,x_{2}=y\) (so that the separation \(s\) is \(v\)) and \(\varepsilon =\frac{1}{2}\) (say). We have \(u\ge s^{-c_{5}}\) and thus letting \(\delta =c_{5}^{-1}s^{-c_{5}}\) it follows from Theorem 3.2 (see (3.4)) that, choosing \(c_{13}<\frac{1}{2}c_{5}\) so that \(u\delta \le cs^{c_{13}}s^{-2c_{13}}=cs^{-c_{13}}\),

$$\begin{aligned} P[x,y\notin Y(0,uN^{d})]&\le \left(Q_{0}[0\notin \mathcal{I }^{u(1-\delta )}]\right)^{2}+cue^{-cs^{c_{5}}}\nonumber \\&\mathop {=}\limits ^{(2.23)} \left(Q_{0}[0\notin \mathcal{I }^{u}]\right)^{2}e^{2\frac{u\delta }{g(0)}}+cue^{-cs^{c_{5}}}\nonumber \\&\le \left(Q_{0}[0\notin \mathcal{I }^{u}]\right)^{2}(1+cs^{-c_{13}})+cue^{-cs^{c_{5}}}.\nonumber \end{aligned}$$
(4.19)

But if \(c_{13}\) is chosen small enough \(cue^{-cs^{c_{5}}}\!\le \! ce^{-cs^{c_{5}}}\!\le \! cs^{-c_{13}}e^{-cs^{c}}\!\le \! cs^{-c_{13}}e^{-\frac{2u}{g(0)}} = cs^{-c_{13}}(Q_{0}[0\notin \mathcal{I }^{u}])^{2}\), so (4.17) follows.

To prove (4.18) we let \(x_{1}=x\), \(n=1\) (so that the separation \(s\) is \(N\)). We further let \(\varepsilon =\frac{1}{2}\) so that the box \(A+x_{1}=B(x_{1},s^{1-\varepsilon })\) contains \(y\), and note that \(u\ge s^{-c_{5}}=N^{-c_{5}}\). Thus letting \(\delta =c_{5}^{-1}s^{-c_{5}}=c_{5}^{-1}N^{-c_{5}}\) it follows from Theorem 3.2 that

$$\begin{aligned} P[x,y\notin Y(0,uN^{d})]\le Q_{0}[0,y-x\notin \mathcal{I }^{u(1-\delta )}]+cue^{-cN^{c_{5}}}. \end{aligned}$$

Now similarly to above we find that the right-hand side is bounded above by \((1+cN^{-c_{13}})Q_{0}[0,y-x\notin \mathcal{I }^{u}]\) (provided \(c_{13}\) is chosen small enough), so (4.18) follows. \({\square }\)

We are now in a position to prove Lemma 4.2. We will show that \(E[|F_{\rho }|]\) is close to \(|F|^{\rho }\), so that proving that the probability of \(F_{\rho }\notin \mathcal{G }\) is small reduces (via Chebyshev's inequality) to bounding \(\text{ Var}[|F_{\rho }|]\) from above, and to bounding from above the probability that \(\inf _{x,y\in F_{\rho },x\ne y}d_{\infty }(x,y)\) is small (recall (4.3) and (4.4)). But both \(\text{ Var}[|F_{\rho }|]\) and the probability that \(\inf _{x,y\in F_{\rho },x\ne y}d_{\infty }(x,y)\) is small can be bounded above in terms of sums, over pairs \(x,y\) of vertices, of the probability \(P[x,y\notin Y(0,t(\rho ))]\), and these sums can be controlled by Lemma 2.3, via (4.17) and (4.18).
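Behind this reduction is the elementary observation that \(|F_{\rho }|\) is a sum of indicator functions, so that its variance is exactly a sum over pairs:

$$\begin{aligned} |F_{\rho }|=\sum \limits _{x\in F}1_{\{x\notin Y(0,t(\rho ))\}},\qquad \text{ Var}[|F_{\rho }|]=\sum \limits _{x,y\in F}\text{ Cov}\big (1_{\{x\notin Y(0,t(\rho ))\}},1_{\{y\notin Y(0,t(\rho ))\}}\big ), \end{aligned}$$

and these covariances are precisely the quantities \(q_{x,y}\) appearing in the proof below.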

Proof of Lemma 4.2

Let

$$\begin{aligned} u=g(0)(1-\rho )\log |F| \text{ so} \text{ that} \,\, t(\rho )=uN^{d}\!, \end{aligned}$$
(4.20)

and record for the sequel that

$$\begin{aligned} Q_{0}[x\notin \mathcal{I }^{u}] \mathop {=}\limits ^{(2.23),(4.20)} |F|^{\rho -1}\quad \text{ for} \text{ all} \,\,x\in \mathbb{Z }^{d}. \end{aligned}$$
(4.21)
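To spell this out: by (2.23), in the form \(Q_{0}[0\notin \mathcal{I }^{u}]=e^{-\frac{u}{g(0)}}\) already used in (4.19), and by the translation invariance of random interlacements under \(Q_{0}\),

$$\begin{aligned} Q_{0}[x\notin \mathcal{I }^{u}]=Q_{0}[0\notin \mathcal{I }^{u}]=e^{-\frac{u}{g(0)}}\overset{(4.20)}{=}e^{-(1-\rho )\log |F|}=|F|^{\rho -1}. \end{aligned}$$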

By summing over \(x\in F\) in (3.5) (note that \(|F|\le N^{d}\) so that we have

$$\begin{aligned} 1\overset{|F|\ge c(\rho )}{\le }u\le c\log N\overset{N\ge |F|^{1/d}\ge c}{\le }N^{c_{6}} \end{aligned}$$
(4.22)

by (4.20)) and using \(|F|Q_{0}[0\notin \mathcal{I }^{u}]\overset{(4.21)}{=}|F|^{\rho }\) we get

$$\begin{aligned} (1-cN^{-c})|F|^{\rho }\le E[|F_{\rho }|]\le (1+cN^{-c})|F|^{\rho }. \end{aligned}$$
(4.23)

Therefore \(\left||F_{\rho }|-|F|^{\rho }\right|>|F|^{\frac{2}{3}\rho }\) implies

$$\begin{aligned} ||F_{\rho }|-E[|F_{\rho }|]|>|F|^{\frac{2}{3}\rho }-cN^{-c}|F|^{\rho }\overset{|F|\le N^{d}}{\ge }|F|^{\frac{2}{3}\rho }-c|F|^{\rho -c}\overset{\rho \le c,|F|\ge c}{\ge } \frac{|F|^{\frac{2}{3}\rho }}{2}. \end{aligned}$$

Thus the Chebyshev inequality gives

$$\begin{aligned} P\Big [\Big ||F_{\rho }|-|F|^{\rho }\Big |>|F|^{\frac{2}{3}\rho }\Big ]\le P\Big [\Big ||F_{\rho }|-E[|F_{\rho }|]\Big |> \frac{1}{2} |F|^{\frac{2}{3}\rho }\Big ]\le 4 \frac{\text{ Var}[|F_{\rho }|]}{|F|^{\frac{4}{3}\rho }}. \end{aligned}$$

Note that \(\text{ Var}\,[|F_{\rho }|]=\sum _{x,y\in F}q_{x,y}\le E[|F_{\rho }|]+\sum _{x\ne y\in F}q_{x,y}\), where \(q_{x,y}=P[x,y\notin Y(0,uN^{d})]-P[x\notin Y(0,uN^{d})]P[y\notin Y(0,uN^{d})]\). Therefore using (4.23), and splitting the sum between “far and close pairs of vertices”, we get

$$\begin{aligned} P\Big [\Big ||F_{\rho }|-|F|^{\rho }\Big |>|F|^{\frac{2}{3}\rho }\Big ]&\le c|F|^{-\frac{1}{3}\rho }+c\sum \limits _{\{x,y\}\in V}P[x,y\notin Y(0,uN^{d})]\nonumber \\&\quad +\ c\sum \limits _{\{x,y\}\in W}q_{x,y}, \end{aligned}$$
(4.24)

where \(V=\{\{x,y\}\subset F:0<d_{\infty }(x,y)\le |F|^{\frac{1}{2d}}\}\) and \(W=\{\{x,y\}\subset F:d_{\infty }(x,y)>|F|^{\frac{1}{2d}}\}\). Furthermore note that

$$\begin{aligned} P\Big [\inf _{x,y\in F_{\rho },x\ne y}d_{\infty }(x,y)<|F|^{\frac{1}{2d}}\Big ]\le \sum \limits _{\{x,y\}\in V}P[x,y\notin Y(0,uN^{d})], \end{aligned}$$
(4.25)

and thus by (4.3)

$$\begin{aligned} P[F_{\rho }\notin \mathcal{G }]\le c|F|^{-c\rho }+c\sum \limits _{\{x,y\}\in V}P[x,y\notin Y(0,uN^{d})]+c\sum \limits _{\{x,y\}\in W}q_{x,y}.\quad \end{aligned}$$
(4.26)

We seek to bound the sums \(\sum _{\{x,y\}\in V}P[x,y\notin Y(0,uN^{d})]\) and \(\sum _{\{x,y\}\in W}q_{x,y}\). To this end note that if \(\{x,y\} \in V\) then by (4.18) we have \(P[x,y\notin Y(0,uN^{d})] \le \; cQ_{0}[0,y-x\notin \mathcal{I }^{u}]\), where (4.18) applies because \(d_{\infty }(x,y)\le |F|^{1/2d}\le N^{1/2}\) (note \(|F| \le N^d\)), and \(1 \le u \le N^{c_{13}}\) (cf. (4.22)). Thus

$$\begin{aligned} \sum \limits _{\{x,y\}\in V}P[x,y\notin Y(0,uN^{d})]&\le \; c\sum \limits _{\{x,y\}\in V}Q_{0}[0,y-x\notin \mathcal{I }^{u}]\nonumber \\&\le \; c\sum \limits _{x\in F}\sum _{y\in K_{x}}Q_{0}[0,y-x\notin \mathcal{I }^{u}], \end{aligned}$$
(4.27)

where \(K_{x}=F\cap B(x,|F|^{\frac{1}{2d}})\backslash \{x\}\). Now using (2.24) on the inner sum of the right-hand side with \(K_{x}-x\) in place of \(K\), we get

$$\begin{aligned} \underset{\{x,y\}\in V}{\sum }P[x,y\notin Y(0,uN^{d})]&\mathop {\le }\limits ^{{u\ge 1}} c\underset{x\in F}{\sum }\big \{ |K_{x}|Q_{0}[0\notin \mathcal{I }^{u}]{}^{2}u+e^{-c_4\frac{u}{g(0)}}\big \} \nonumber \\&\mathop {\le }\limits ^{(4.20)} c\underset{x\in F}{\sum }\big \{ |K_{x}|Q_{0}[0\notin \mathcal{I }^{u}]{}^{2}\log |F|+|F|^{-c_4(1-\rho )}\big \} \nonumber \\&\le c|F|^{\frac{3}{2}}Q_{0}[0\notin \mathcal{I }^{u}]{}^{2}\log |F|+c|F|^{-c}, \end{aligned}$$
(4.28)

where we have used that \(\sum _{x\in F}|K_{x}|\le \sum _{x\in F}c(|F|^{\frac{1}{2d}})^{d}=c|F|^{\frac{3}{2}}\), and we choose \(c_9\) small enough so that \(c_4(1-c_9)>1\) (recall that \(c_4>1\)). Thus from (4.21) we have

$$\begin{aligned} \underset{\{x,y\}\in V}{\sum \limits }P[x,y\notin Y(0,uN^{d})]\le c|F|^{2\rho -\frac{1}{2}}\log |F|+c|F|^{-c}\overset{\rho \le c}{\le }c|F|^{-c}. \end{aligned}$$
(4.29)

We now turn to the sum \(\sum _{\{x,y\}\in W}q_{x,y}\). Using (3.5) again we obtain

$$\begin{aligned} P[x\notin Y(0,uN^{d})]P[y\notin Y(0,uN^{d})]\ge (1-cN^{-c})Q_{0}[0\notin \mathcal{I }^{u}]^{2}. \end{aligned}$$
(4.30)

Also by (4.17) we have that if \(x\) and \(y\) are such that \(v=d_{\infty }(x,y)\ge |F|^{3c_{13}^{-1}\rho }\) then (similarly to (4.22) we have \(1\le u \le g(0)\log |F| \le |F|^{3\rho } \le v^{c_{13}}\), see (4.20) and note \(|F|\ge c(\rho )\))

$$\begin{aligned} P[x,y\notin Y(0,uN^{d})]\le (1+c|F|^{-3\rho })Q_{0}[0\notin \mathcal{I }^{u}]^{2}. \end{aligned}$$
(4.31)

Combining (4.30) and (4.31) we have

$$\begin{aligned} \underset{x,y\in F,d_{\infty }(x,y)\ge |F|^{3c_{13}^{-1}\rho }}{\sum }q_{x,y}\le c(|F|^{-3\rho }+cN^{-c})\underset{x,y\in F}{\sum \limits }Q_{0}[0\notin \mathcal{I }^{u}]^{2}\le c|F|^{-\rho },\nonumber \\ \end{aligned}$$
(4.32)

since (possibly decreasing \(c_9\))

$$\begin{aligned} N^{-c}\overset{|F|\le N^{d}}{\le }|F|^{-c}\le |F|^{-3c_9}\le |F|^{-3\rho } \quad \text{ and} \quad \sum \limits _{x,y\in F}Q_{0}[0\notin \mathcal{I }^{u}]^{2}\overset{(4.21)}{=}|F|^{2\rho }. \end{aligned}$$

Possibly decreasing \(c_9\) once again, we have that all \( 0 < \rho \le c_9\) satisfy \(3c_{13}^{-1}\rho \le \frac{1}{2d}\). Then \(|F|^{3c_{13}^{-1}\rho }\le |F|^{\frac{1}{2d}}\) so that from the definition of \(W\) and (4.32)

$$\begin{aligned} \underset{\{x,y\}\in W}{\sum \limits }q_{x,y}\le c|F|^{-\rho }. \end{aligned}$$
(4.33)

Now using (4.29) and (4.33) on the right-hand side of (4.26) gives (4.4). \(\square \)

We have now completely reduced the proof of Theorem 3.1 to the coupling result Theorem 3.2. We end this section with a remark on the use of the coupling Theorem 3.2 as a general tool, and on the possibility of extending Theorem 3.1 to other families of graphs.

Remark 4.6

(1) As mentioned in the introduction, a coupling like Theorem 3.2 is a very powerful tool to study the trace of random walk. To prove the cover time result Theorem 3.1 we used the coupling to study certain properties of the trace of the random walk; namely the probabilities that points, pairs of points, and sets of “distant” points are contained in the trace (see Lemma 3.3, Lemma 4.5 and Lemma 4.3 respectively). When studying the percolative properties of the so-called vacant set (the complement of the trace of random walk), similar couplings have been used, and there the properties of the trace studied are certain connectivity properties of its complement (see e.g. (2.4), the display after (2.14) or (2.20)–(2.22) in [25], or (7.18) in [21]). The generality of the coupling Theorem 3.2 ensures that it can be used in the future to study further, unrelated, properties of the trace of random walk in the torus.

(2) The method used in this section to prove Gumbel fluctuations essentially consists of considering the set of “late points” (recall (4.1)) and proving that it concentrates and is separated (i.e. (4.2)). It has already been used to prove Gumbel fluctuations in related models in [5] and [6], and could potentially be applied to prove Gumbel fluctuations for many families of graphs, as long as one can obtain good enough control of entrance times to replace (3.5), (4.5) and (4.9) (in a more general context the latter estimate may be difficult to obtain, but it could be replaced with an estimate on how close \(H_{\{x_{1},x_{2},\ldots ,x_{m}\}}\) is to being exponential when \(x_{1},\ldots ,x_{m}\) are “separated”, since the cover time of a set \(\{x_{1},\ldots ,x_{m}\}\) consisting of separated points is essentially the sum of \(m\) entrance times for sets consisting of \(m,m-1,m-2,\ldots \) and finally \(1\) points; from this one can derive something similar to (4.2)). In a forthcoming work Roberto Oliveira and Alan Paula obtain such a generalization of Theorem 3.1. \(\square \)

5 Coupling

We now turn to the proof of the coupling result Theorem 3.2. The proof has three main steps: first the trace of random walk in the union of the boxes \(A+x_{i},i=1,\ldots ,n,\) (recall (3.2) and (3.3)) is coupled with a certain Poisson point process on the space of trajectories \(\Gamma (\mathbb{T }_{N})\) (see below (2.1)). From this Poisson point process we then construct \(n\) independent Poisson point processes, one for each box \(A+x_{i}\), which are coupled with the trace of random walk in the corresponding box. Lastly we construct from each of these Poisson processes a random interlacement which is coupled with the trace of the random walk in the corresponding box \(A+x_{i}\). Essentially speaking the three steps are contained in the three Propositions 5.1, 5.2 and 5.4, which we state in this section and use to prove Theorem 3.2. The proofs of the propositions are postponed until the subsequent sections.

For the rest of the paper we assume that we are given centres of boxes

$$\begin{aligned} x_{1},\ldots ,x_{n}\in \mathbb{T }_{N}\; \text{ whose} \text{ separation} \text{ is} \; s \,\text{(see} \text{(3.2)),} \text{ and}\ \varepsilon \in (0,1). \end{aligned}$$
(5.1)

We also define the concentric boxes \(B\subset C\) around \(A\) by

$$\begin{aligned} A\mathop {=}\limits ^{(3.3)}B(0,s^{1-\varepsilon })\subset B=B(0,s^{1-\frac{\varepsilon }{2}})\subset C=B(0,s^{1-\frac{\varepsilon }{4}}). \end{aligned}$$
(5.2)

For convenience we introduce the notation

$$\begin{aligned} \bar{F}= \bigcup \limits _{i=1}^{n} F_{i} \text{ where} \,F_{i}=F+x_{i}\quad \text{ for} \text{ any} \,\,F\subset \mathbb{T }_{N}. \end{aligned}$$
(5.3)

Note that if \(s\ge c(\varepsilon )\) then the \(C_{i}\) are disjoint (the boxes \(C_{i}\) have radius \(s^{1-\frac{\varepsilon }{4}}\) while \(d_{\infty }(x_{i},x_{j})\ge s\) for \(i\ne j\), so it suffices that \(2s^{1-\frac{\varepsilon }{4}}<s\)). To state Proposition 5.1 we introduce \(U\), the first time the random walk has spent a “long” time outside of \(\bar{C}\) (roughly speaking, long enough to mix, see Proposition 6.1), defined by

$$\begin{aligned} U=\inf \{t\ge t^{\star }:Y(t-t^{\star },t)\cap \bar{C}=\emptyset \}\quad \text{ where}\,t^{\star }=N^{2+\frac{\varepsilon }{100}}. \end{aligned}$$
(5.4)

We also define the intensity measure \(\kappa _{1}\) on \(\Gamma (\mathbb{T }_{N})\) by (recall (2.6))

$$\begin{aligned} \kappa _{1}(dw)=P_{e}[Y_{\cdot \wedge U}\in dw]\quad \text{ where} \,e(x)=\sum \limits _{i=1}^{n}e_{A}(x-x_{i}). \end{aligned}$$
(5.5)

For parameters \(u\ge 0\) and \(\delta >0\) (satisfying suitable conditions), Proposition 5.1 constructs a coupling of \(Y_{\cdot }\) with two independent Poisson point processes \(\mu _{1}\) and \(\mu _{2}\) on \(\Gamma (\mathbb{T }_{N})\) of intensities \(u(1-\delta )\kappa _{1}\) and \(2u\delta \kappa _{1}\) respectively such that (recall the notation from (2.16))

$$\begin{aligned} \left\{ \mathcal{I }(\mu _{1})\cap \bar{A}\subset Y(0,uN^{d})\cap \bar{A}\subset \mathcal{I }(\mu _{1}+\mu _{2}) \cap \bar{A}\right\} \text{ with} \text{ high} \text{ probability.}\quad \end{aligned}$$
(5.6)

Proposition 5.2, the second ingredient in the proof of Theorem 3.2, couples Poisson processes like \(\mu _{1}\) and \(\mu _{2}\) with Poisson processes with intensity a multiple of

$$\begin{aligned} \kappa _{2}(dw)=P_{e}[Y_{\cdot \wedge T_{\bar{B}}}\in dw]. \end{aligned}$$
(5.7)

More precisely if \(\nu \) is a Poisson process of intensity \(u\kappa _{1},u>0,\) and \(\delta >0\) then (under appropriate conditions) Proposition 5.2 will construct Poisson point processes \(\nu _{1}\) and \(\nu _{2}\) of intensities \(u(1-\delta )\kappa _{2}\) and \(2u\delta \kappa _{2}\) respectively such that

$$\begin{aligned}&\left\{ \mathcal{I }(\nu _{1})\cap \bar{A}\subset \mathcal{I }(\nu )\cap \bar{A}\right\} \text{ almost} \text{ surely} \text{ and}\end{aligned}$$
(5.8)
$$\begin{aligned}&\left\{ \mathcal{I }(\nu )\cap \bar{A}\subset \mathcal{I }(\nu _{1}+\nu _{2})\cap \bar{A}\right\} \text{ with} \text{ high} \text{ probability.} \end{aligned}$$
(5.9)

Note that, in contrast to the situation for \(\mu _{1}\) and \(\mu _{2}\) from (5.6), each “excursion” in the support of \(\nu _{1}\) and \(\nu _{2}\) never returns to \(\bar{A}\) after it has left \(\bar{B}\). Under the law induced on it from the intensity measure \(\kappa _{2}\), an excursion therefore, conditionally on its starting point, has the law of a random walk in \(\mathbb{Z }^{d}\) stopped upon leaving \(B\) (up to translation). Furthermore it leaves a trace in only one of the boxes \(A_{1},\ldots ,A_{n}\). This will allow us (in Corollary 5.3) to “split” the Poisson point processes \(\nu _{1}\) and \(\nu _{2}\) into independent Poisson point processes \(\nu _{1}^{i},\nu _{2}^{i},i=1,\ldots ,n,\) (on \(\Gamma (\mathbb{Z }^{d})\)) such that the \(\nu _{1}^{i}\) have intensity \(u(1-\delta )\kappa _{3}\) and the \(\nu _{2}^{i}\) have intensity \(2u\delta \kappa _{3}\), where

$$\begin{aligned}&\kappa _{3}(dw)=P_{e_{A}}^{\mathbb{Z }^{d}}[Y_{\cdot \wedge T_{B}}\in dw], \text{ and} \text{ such} \text{ that}\quad \end{aligned}$$
(5.10)
$$\begin{aligned}&\left\{ \mathcal{I }(\nu _{1}^{i})\cap A\subset (\mathcal{I }(\nu )-x_{i})\cap A \text{ for} \text{ all} \,\,i\right\} \text{ almost} \text{ surely} \text{ and}\quad \end{aligned}$$
(5.11)
$$\begin{aligned}&\left\{ (\mathcal{I }(\nu )-x_{i})\cap A\subset \mathcal{I }(\nu _{1}^{i}+\nu _{2}^{i})\cap A \text{ for} \text{ all} \,\,i\right\} \text{ with} \text{ high} \text{ probability.}\quad \end{aligned}$$
(5.12)

Proposition 5.4, the third ingredient in the proof of Theorem 3.2, constructs independent random subsets of \(\mathbb{Z }^{d}\) that have the law of random interlacements intersected with \(A\), from Poisson processes like \(\nu _{j}^{i}\). More precisely if \(\eta \) is a Poisson point process of intensity \(u\kappa _{3},u\ge 0,\) and \(\delta >0\) then under appropriate conditions it constructs independent random sets \(\mathcal{I }_{1},\mathcal{I }_{2}\subset \mathbb{Z }^{d}\) such that \(\mathcal{I }_{1}\) has the law of \(\mathcal{I }^{u(1-\delta )}\cap A\) under \(Q_{0}\), \(\mathcal{I }_{2}\) has the law of \(\mathcal{I }^{2\delta u}\cap A\) under \(Q_{0}\), and

$$\begin{aligned} \left\{ \mathcal{I }_{1}\cap A\subset \mathcal{I }(\eta )\cap A\subset (\mathcal{I }_{1}\cup \mathcal{I }_{2})\cap A\right\} \text{ with} \text{ high} \text{ probability.} \end{aligned}$$
(5.13)

But, essentially speaking, because of (2.22) we will easily be able to construct a random interlacement \((\mathcal{I }^{u})_{u\ge 0}\) from such a pair \(\mathcal{I }_{1},\mathcal{I }_{2}\).

We now state the propositions. Recall the standing assumption (5.1).

Proposition 5.1

(\(d\ge 3\),\(N\ge 3\),\(x_{1},\ldots ,x_{n}\in \mathbb{T }_{N}\)) If \(s\ge c(\varepsilon )\), \(u\ge s^{-c(\varepsilon )}\), \(\frac{1}{2}\ge \delta \ge cs^{-c(\varepsilon )}\) and \(n\le s^{c(\varepsilon )}\) we can construct a coupling \((\Omega _{2},\mathcal{A }_{2},Q_{2})\) of the random walk \(Y_{\cdot }\) with law \(P\) and independent Poisson point processes \(\mu _{1}\) and \(\mu _{2}\) on \(\Gamma (\mathbb{T }_{N})\), such that \(\mu _{1}\) has intensity \(u(1-\delta )\kappa _{1}\), \(\mu _{2}\) has intensity \(2u\delta \kappa _{1}\), and \(Q_{2}[I_{1}]\ge 1-cue^{-cs^{c(\varepsilon )}}\), where \(I_{1}\) is the event in (5.6).

Proposition 5.1 will be proved in Sect. 7.

Proposition 5.2

(\(d\ge 3\),\(N\ge 3\),\(x_{1},\ldots ,x_{n}\in \mathbb{T }_{N}\)) Assume \(s\ge c(\varepsilon )\) and that \(\nu \) is a Poisson point process on \(\Gamma (\mathbb{T }_{N})\) with intensity measure \(u\kappa _{1}\), \(u\ge s^{-c(\varepsilon )}\), constructed on some probability space \((\Omega ,\mathcal{A },Q)\). Then if \(1\ge \delta \ge cs^{-c(\varepsilon )}\) and \(n\le s^{c(\varepsilon )}\) we can extend the space to get independent Poisson point processes \(\nu _{1},\nu _{2},\) on \(\Gamma (\mathbb{T }_{N})\) such that \(\nu _{1}\) has intensity \(u(1-\delta )\kappa _{2}\), \(\nu _{2}\) has intensity \(2u\delta \kappa _{2}\), (5.8) holds and \(Q[I_{2}]\ge 1-ce^{-cs^{c(\varepsilon )}}\!\), where \(I_{2}\) is the event in (5.9).

The proof of Proposition 5.2 is contained in Sect. 8. In the proof of Theorem 3.2 we will actually use the following corollary.

Corollary 5.3

Under the conditions of Proposition 5.2 we can construct independent Poisson point processes \(\nu _{1}^{i},\nu _{2}^{i},i=1,2,\ldots ,n,\) such that \(\nu _{1}^{i}\) has intensity \(u(1-\delta )\kappa _{3}\) and \(\nu _{2}^{i}\) has intensity \(2u\delta \kappa _{3}\) for \(i=1,\ldots ,n,\) (5.11) holds and \(Q[I_{3}]\ge 1-ce^{-cs^{c(\varepsilon )}}\), where \(I_{3}\) is the event in (5.12).

Proof

For \(i=1,\ldots ,n,\) and \(j=1,2,\) let \(\nu _{j}^{i}\) be the image of \(1_{\{Y_{0}\in A_{i}\}}\nu _{j}\) under the map which sends \(w(\cdot )\in \Gamma (B_{i})\subset \Gamma (\mathbb{T }_{N})\) to \(w(\cdot )-x_{i}\in \Gamma (\mathbb{Z }^{d})\) (recall that \(\Gamma (B_{i})\) for \(B_{i}\subset \mathbb{T }_{N}\) denotes the set of paths in \(\mathbb{T }_{N}\) that never leave \(B_{i}\), and note that \(B=B_{i}-x_{i}\subset \mathbb{T }_{N}\) may be identified with a subset of \(\mathbb{Z }^{d}\), so that \(w(\cdot )-x_{i}\) can be identified with an element of \(\Gamma (\mathbb{Z }^{d})\)). Since the sets \(\{Y_{0}\in A_{i}\},i=1,\ldots ,n,\) are disjoint we have that \(\nu _{1}^{i},\nu _{2}^{i},i=1,\ldots ,n,\) are independent Poisson point processes of the required intensities (see (5.5), (5.7) and (5.10)). Now (5.11) and the required bound on \(Q[I_{3}]\) follow from Proposition 5.2 (see (5.8) and (5.9)), since \((\mathcal{I }(\nu _{j})-x_{i})\cap A=\mathcal{I }(\nu _{j}^{i})\cap A\) for all \(i\) and \(j\). \(\square \)
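We record, for clarity, the two standard facts about Poisson point processes used in this proof: the image of a Poisson point process under a measurable map is again a Poisson point process, with the image intensity; and restrictions to disjoint sets are independent, i.e.

$$\begin{aligned} \nu \,\text{ a} \text{ Poisson} \text{ point} \text{ process} \text{ with} \text{ intensity} \,\lambda ,\; D_{1},\ldots ,D_{n}\,\text{ disjoint}\;\Longrightarrow \;1_{D_{1}}\nu ,\ldots ,1_{D_{n}}\nu \,\text{ independent,} \text{ with} \text{ intensities} \,1_{D_{1}}\lambda ,\ldots ,1_{D_{n}}\lambda . \end{aligned}$$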

We now state the proposition which couples processes like \(\nu _{j}^{i}\) with random interlacements. Note that we will apply this proposition after “decoupling” the boxes \(A_{1},\ldots ,A_{n},\) using Corollary 5.3, and that the statement of Proposition 5.4 therefore does not refer to these boxes or the centres \(x_{1},\ldots ,x_{n},\) except through their separation \(s\) (recall (3.2)), which goes into the definition of the radii of the boxes \(A\) and \(B\) (see (3.3) and (5.2)). The interpretation of \(s\) as a separation is therefore irrelevant, and for the purposes of the following proposition it can simply be considered a parameter that (together with \(\varepsilon \)) determines the radii of \(A\) and \(B\).

Proposition 5.4

(\(d\ge 3\)) Let \(\eta \) be a Poisson point process on \(\Gamma (\mathbb{Z }^{d})\) with intensity measure \(u\kappa _{3}\), \(u\ge 0\), constructed on some probability space \((\Omega ,\mathcal{A },Q)\). If \(s\ge c(\varepsilon )\) and \(1\ge \delta \ge cs^{-c(\varepsilon )}\) then we can construct a probability space \((\Omega ^{\prime },\mathcal{A }^{\prime },Q^{\prime })\) and (on the product space) independent \(\sigma (\eta )\times \mathcal{A }^{\prime }\)-measurable random sets \(\mathcal{I }_{1},\mathcal{I }_{2}\subset \mathbb{Z }^{d}\) such that \(\mathcal{I }_{1}\) has the law of \(\mathcal{I }^{u(1-\delta )}\cap A\) under \(Q_{0}\), \(\mathcal{I }_{2}\) has the law of \(\mathcal{I }^{2u\delta }\cap A\) under \(Q_{0}\), and \(Q\otimes Q^{\prime }[I_{4}]\ge 1-ce^{-cs^{c(\varepsilon )}}\!\), where \(I_{4}\) is the event in (5.13).

Proposition 5.4 will be proved in Sect. 9.

We are now ready to start the proof of Theorem 3.2. We will apply Proposition 5.1 to the random walk, then apply Corollary 5.3 to the resulting Poisson point processes, and finally apply Proposition 5.4 to the Poisson point processes resulting from Corollary 5.3. This gives us random subsets of \(\mathbb{Z }^{d}\) from which we will construct random interlacements.

Proof of Theorem 3.2

Throughout the proof we decrease \(c_{5}(\varepsilon )\) whenever necessary so that the conditions on \(u,\delta \) and \(n\) needed for Proposition 5.1, Corollary 5.3 or Proposition 5.4 to hold are fulfilled. We first apply Proposition 5.1 with \(\frac{\delta }{14}\) in place of \(\delta \) to get the space \((\Omega _{2},\mathcal{A }_{2},Q_{2})\) and independent Poisson point processes \(\mu _{1}\) and \(\mu _{2}\) such that \(\mu _{1}\) has intensity \(u(1-\frac{\delta }{14})\kappa _{1}\), \(\mu _{2}\) has intensity \(u\frac{\delta }{7}\kappa _{1}\) and

$$\begin{aligned} Q_{2}[\mathcal{I }(\mu _{1})\cap \bar{A}\!\subset \! Y(0,uN^{d})\cap \bar{A}\subset \mathcal{I }(\mu _{1}\!+\!\mu _{2})\cap \bar{A}]\ge \!1\!-\!cue^{-cs^{c(\varepsilon )}} \text{ for}\,\,s\!\ge \! c(\varepsilon ).\nonumber \\ \end{aligned}$$
(5.14)

Next we apply Corollary 5.3 once with \(\mu _{1}\) in place of \(\nu \), \(u(1-\frac{\delta }{14})\) in place of \(u\) and \(\frac{\delta }{14}\) in place of \(\delta \) and extend the space \((\Omega _{2},\mathcal{A }_{2},Q_{2})\) to get the space \((\Omega _{1},\mathcal{A }_{1},Q_{1})\) with Poisson point processes \(\nu _{1}^{i},\nu _{2}^{i}\) of intensities \(u(1-\frac{\delta }{14})^{2}\kappa _{3}\) and \(u(1-\frac{\delta }{14})\frac{\delta }{7}\kappa _{3}\) such that \(\nu _{1}^{i},\nu _{2}^{i},i=1,\ldots ,n,\mu _{2}\) are mutually independent and (for \(s\ge c(\varepsilon )\))

$$\begin{aligned} Q_{1}[\mathcal{I }(\nu _{1}^{i})\cap A\subset (\mathcal{I }(\mu _{1})-x_{i})\cap A\subset \mathcal{I }(\nu _{1}^{i}+\nu _{2}^{i})\cap A \text{ for} \text{ all} \,i]\ge 1-ce^{-cs^{c(\varepsilon )}}.\nonumber \\ \end{aligned}$$
(5.15)

For convenience we may “thicken” each \(\nu _{2}^{i}\) so that they have intensity \(u\frac{\delta }{7}\kappa _{3}\), while preserving the independence of \(\nu _{1}^{i},\nu _{2}^{i},i=1,\ldots ,n,\mu _{2}\) and the validity of (5.15) (by extending the space with independent Poisson point processes of intensities given by an appropriate multiple of \(\kappa _3\), and adding them to the original \(\nu _2^i\)). Repeating this extension but with \(\mu _{2}\) in place of \(\nu \), \(u\frac{\delta }{7}\) in place of \(u\) and \(1\) in place of \(\delta \), we furthermore get processes \(\nu _{3}^{i}\) of intensity \(u\frac{2\delta }{7}\kappa _{3}\) (arising from the \(\nu _{2}^{i}\) in the statement of Corollary 5.3; the \(\nu _{1}^{i}\) in the statement of Corollary 5.3 are zero since \(u(1-\delta )=0\)) such that \(\nu _{1}^{i},\nu _{2}^{i},\nu _{3}^{i},i=1,\ldots ,n\) are mutually independent and

$$\begin{aligned} Q_{1}[(\mathcal{I }(\mu _{2})-x_{i})\cap A\subset \mathcal{I }(\nu _{3}^{i})\cap A \text{ for} \text{ all} \,i]\ge 1-ce^{-cs^{c(\varepsilon )}}\quad \text{ for} \,s\ge c(\varepsilon ).\quad \end{aligned}$$
(5.16)

Now apply Proposition 5.4 with \(u(1-\frac{\delta }{14})^{2}\) in place of \(u\), \(\frac{\delta }{14}\) in place of \(\delta \), and \(\nu _{1}^{1}\) in place of \(\eta \), and extend the space with mutually independent sets \(\mathcal{I }_{1,1},\mathcal{I }_{2,1}\) (independent of \(\nu _{1}^{j},\nu _{2}^{i},\nu _{3}^{i},j\ge 2,i\ge 1\)) such that \(\mathcal{I }_{1,1}\) has the law of \(\mathcal{I }^{u(1-\frac{\delta }{14})^{3}}\cap A\) under \(Q_{0}\), \(\mathcal{I }_{2,1}\) has the law of \(\mathcal{I }^{u(1-\frac{\delta }{14})^{2}\frac{\delta }{7}}\cap A\) under \(Q_{0}\), and \(Q_{1}[\mathcal{I }_{1,1}\cap A\subset \mathcal{I }(\nu _{1}^{1})\cap A\subset (\mathcal{I }_{1,1}\cup \mathcal{I }_{2,1})\cap A]\ge 1-cue^{-cs^{c}}\) for \(s\ge c(\varepsilon )\). Then apply Proposition 5.4 once again with \(u\frac{3\delta }{7}\) in place of \(u\), \(1\) in place of \(\delta \) and \(\nu _{2}^{1}+\nu _{3}^{1}\) (which is a Poisson point process of intensity \(u\frac{3\delta }{7}\kappa _{3}\)) in place of \(\eta \), to extend the space with a random set \(\mathcal{I }_{3,1}\) (independent of \(\mathcal{I }_{1,1},\mathcal{I }_{2,1},\nu _{1}^{i},\nu _{2}^{i},\nu _{3}^{i},i\ge 2\)) such that \(\mathcal{I }_{3,1}\) has the law of \(\mathcal{I }^{u\frac{6\delta }{7}}\cap A\) under \(Q_{0}\), and such that \(Q_{1}[\mathcal{I }(\nu _{2}^{1}+\nu _{3}^{1})\cap A\subset \mathcal{I }_{3,1}\cap A]\ge 1-cue^{-cs^{c}}\) for \(s\ge c(\varepsilon )\) (similarly to before, \(\mathcal{I }_{3,1}\) arises from the \(\mathcal{I }_{2}\) of the statement of Proposition 5.4; \(\mathcal{I }_{1}\) is empty since \(u(1-\delta )=0\)). We can repeat this for \(i=2,3,\ldots ,n,\) each time extending the space, to get mutually independent sets \(\mathcal{I }_{1,i},\mathcal{I }_{2,i},\mathcal{I }_{3,i},i=1,\ldots ,n,\) such that for each \(j=1,2,3\) the \(\mathcal{I }_{j,i},i=1,\ldots ,n,\) have the same law, and for all \(i\) and \(s\ge c(\varepsilon )\)

$$\begin{aligned} \begin{aligned}&Q_{1}[\mathcal{I }_{1,i}\, \cap \, A\subset \mathcal{I }(\nu _{1}^{i})\,\cap \, A \subset (\mathcal{I }_{1,i}\cup \mathcal{I }_{2,i})\,\cap \, A\\&\qquad \mathrm{and} \quad \mathcal{I }(\nu _{2}^{i} + \nu _{3}^{i}) \cap A \subset \mathcal{I }_{3,i}\cap A] \ge 1 - cue^{-cs^{c}}. \end{aligned} \end{aligned}$$
(5.17)

By (5.14), (5.15), (5.16) and (5.17) we have, for all \(i\) (possibly decreasing \(c_{5}\) and recalling that \(u\ge s^{-c(\varepsilon )}\)), that for \(s\ge c(\varepsilon )\)

$$\begin{aligned} Q_{1}[\mathcal{I }_{1,i}\cap A\!\subset \!(Y(0,uN^{d})-x_{i})\cap A\!\subset \!(\mathcal{I }_{1,i}\cup \mathcal{I }_{2,i}\cup \mathcal{I }_{3,i})\cap A]\ge 1\!-\!c_{5}^{-1}ue^{-c_{5}^{-1}s^{c_{5}}}.\nonumber \\ \end{aligned}$$
(5.18)

It now only remains to construct “proper” random interlacements from \(\mathcal{I }_{1,i},\mathcal{I }_{2,i},\mathcal{I }_{3,i},i=1,\ldots ,n.\)

By (2.22) the \(\mathcal{I }_{2,i}\,\cup \,\mathcal{I }_{3,i}\) have the law of \(\mathcal{I }^{u_{2}}\,\cap \, A\) under \(Q_{0}\), where \(u_{2}=u(1-\frac{\delta }{14})^{2}\frac{\delta }{7}+u\frac{6\delta }{7}\). Once again by (2.22) the pair \((\mathcal{I }_{1,i}\cap A,(\mathcal{I }_{1,i}\cup \mathcal{I }_{2,i}\cup \mathcal{I }_{3,i})\cap A)\) has the law of \((\mathcal{I }^{u_{1}}\,\cap \, A,\mathcal{I }^{u_{1}+u_{2}}\,\cap \, A)\) under \(Q_{0}\), where \(u_{1}=u(1-\frac{\delta }{14})^{3}\). But this pair takes only finitely many values (so that the set of values that are taken with positive probability together have probability one), so we can, by “sampling from the conditional law (under \(Q_{0}\)) of \((\mathcal{I }^{u})_{u\ge 0}\) given \((\mathcal{I }^{u_{1}}\cap A,\mathcal{I }^{u_{1}+u_{2}}\cap A)\)”, construct for each \(i=1,\ldots ,n,\) a family \((\mathcal{I }_{i}^{u})_{u\ge 0}\) with the law of \((\mathcal{I }^{u})_{u\ge 0}\) under \(Q_{0}\) such that \(\mathcal{I }_{i}^{u(1-\delta )}\cap A\subset \mathcal{I }_{i}^{u_{1}}\cap A=\mathcal{I }_{1,i}\cap A\) (recall (2.21) and note \(u(1-\delta )\le u(1-\frac{3\delta }{14})\le u_{1}\)) almost surely and \((\mathcal{I }_{1,i}\cup \mathcal{I }_{2,i}\cup \mathcal{I }_{3,i})\cap A=\mathcal{I }_{i}^{u_{1}+u_{2}}\cap A\subset \mathcal{I }_{i}^{u(1+\delta )}\) (note \(u_{1}+u_{2}\le u+u\frac{\delta }{7}+u\frac{6\delta }{7}=\) \(u(1+\delta )\)) almost surely, which combined with (5.18) implies (3.4) for \(s\ge c(\varepsilon )\). But by adjusting constants (3.4) holds for all \(s\), so the proof of Theorem 3.2 is complete. \(\square \)
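For the reader's convenience we record the elementary comparisons of levels used in this last step:

$$\begin{aligned} u_{1}=u\Big (1-\frac{\delta }{14}\Big )^{3}\ge u\Big (1-\frac{3\delta }{14}\Big )\ge u(1-\delta )\quad \text{ and} \quad u_{1}+u_{2}\le u+u\frac{\delta }{7}+u\frac{6\delta }{7}=u(1+\delta ), \end{aligned}$$

where the first chain uses \((1-x)^{3}\ge 1-3x\) for \(0\le x\le 1\), and the second uses \((1-\frac{\delta }{14})^{3}\le 1\) and \((1-\frac{\delta }{14})^{2}\frac{\delta }{7}\le \frac{\delta }{7}\).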

Theorem 3.2 (and therefore also Theorem 3.1) has now been reduced to Proposition 5.1, Proposition 5.2 and Proposition 5.4.

6 Quasistationary distribution

In this section we introduce the quasistationary distribution, which is a probability distribution on \(\mathbb{T }_{N}\backslash \bar{C}\) (recall our standing assumption (5.1), (5.2) and (5.3)) denoted by \(\sigma (\cdot )\) and which will be an essential tool when we prove (in Sect. 7) the coupling Proposition 5.1.

The main result is Proposition 6.1, which says that for all \(x,y\in \mathbb{T }_{N}\backslash \bar{C}\) the probability \(P_{x}[Y_{t^{\star }}=y|H_{\bar{C}}>t^{\star }]\) (recall the definition of \(t^{\star }\) from (5.4)) is very close to \(\sigma (y)\) (and thus almost independent of \(x\)). The result will allow us to show, in Sect. 7, that regardless of where the random walk \(Y_{\cdot }\) starts, \(Y_{U}\) (where \(U\) was defined in (5.4)) is very close in distribution to the quasistationary distribution, and this in turn will let us “cut the random walk” \(Y_{\cdot }\) into almost independent excursions, each with law close to \(P_{\sigma }[Y_{\cdot \wedge U}\in dw]\) (cf. (5.5)). This will be the main step in constructing the Poisson processes \(\mu _{1}\) and \(\mu _{2}\) from the statement of Proposition 5.1.

At the end of this section we also give a result that says that the hitting distribution on \(\partial _{i}\bar{A}\) when starting random walk from the quasistationary distribution is approximately the normalized sum of the equilibrium distributions on \(A_{1},A_{2},\ldots ,A_{n}\) (see (6.19)). This result will be used several times in the subsequent sections.

Let us now formally introduce the quasistationary distribution. We define the \((N^{d}-|\bar{C}|)\times (N^{d}-|\bar{C}|)\) matrix \((P^{\bar{C}})_{x,y\in \mathbb{T }_{N}\backslash \bar{C}}=\frac{1}{2d}1_{\left\{ x\sim y\right\} }\), where \(x\sim y\) means that \(x\) and \(y\) share an edge in \(\mathbb{T }_{N}\). When \(s\ge c(\varepsilon )\), so that \(\mathbb{T }_{N}\backslash \bar{C}\) is connected, the Perron–Frobenius theorem (Theorem 8.2, p. 151 in [19]) implies that this (real symmetric, non-negative and irreducible) matrix has a unique largest eigenvalue \(\lambda _{1}^{\bar{C}}\) with a non-negative normalized eigenvector \(v_{1}\). We let \(\lambda _{2}^{\bar{C}}\) denote the second largest eigenvalue of \(P^{\bar{C}}\). The quasistationary distribution \(\sigma \) on \(\mathbb{T }_{N}\backslash \bar{C}\) is then defined by

$$\begin{aligned} \sigma (x)=\frac{(v_{1})_{x}}{v_{1}^{T}\mathbf{1}}\quad \text{ for} \; x\in \mathbb{T }_{N}\backslash \bar{C}. \end{aligned}$$
(6.1)
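Since \(P^{\bar{C}}\) is symmetric, \(v_{1}\) is also a left eigenvector, so \(\sigma \) may equivalently be characterized as the (by the Perron–Frobenius theorem, unique) probability vector on \(\mathbb{T }_{N}\backslash \bar{C}\) satisfying

$$\begin{aligned} \sigma ^{T}P^{\bar{C}}=\lambda _{1}^{\bar{C}}\sigma ^{T}, \end{aligned}$$

which is the usual defining property of a quasistationary distribution.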

Since \(\mathbb{T }_{N}\backslash \bar{C}\) is connected (when \(s\ge c(\varepsilon )\)) it holds that (see (6.6.3), p. 91 in [11])

$$\begin{aligned} \lim _{t\rightarrow \infty }P_{x}[Y_{t}=y|H_{\bar{C}}>t]=\sigma (y)\quad \text{ for} \text{ all}\,\,x,y\in \mathbb{T }_{N}\backslash \bar{C}. \end{aligned}$$
(6.2)

We now state Proposition 6.1, the main result of the section, which is a quantitative version of (6.2). Recall once again the assumption (5.1), and the definition of \(t^{\star }\) in (5.4).

Proposition 6.1

(\(d\ge 3\), \(N\ge 3\)) If \(n\le s^{c(\varepsilon )}\) and \(s\ge c(\varepsilon )\) then

$$\begin{aligned} \sup _{x,y\in \mathbb{T }_{N}\backslash \bar{C}}\left|P_{x}[Y_{t^{\star }}=y|H_{\bar{C}}>t^{\star }]-\sigma (y)\right|\le ce^{-cN^{c(\varepsilon )}}. \end{aligned}$$
(6.3)

To prove Proposition 6.1 we will express \(P_{x}[Y_{t^{\star }}=y|H_{\bar{C}}>t^{\star }]\) in terms of the matrix \(P^{\bar{C}}\), and then use the spectral expansion of \(P^{\bar{C}}\) to prove that \(P_{x}[Y_{t^{\star }}=y|H_{\bar{C}}>t^{\star }]\) is close to \(\frac{(v_{1})_{y}}{v_{1}^{T}\mathbf{1}}\). To control the error we will need an estimate of the spectral gap of \(P^{\bar{C}}\), which we obtain in Lemma 6.3, and a lower bound on the minimum of \(\sigma (\cdot )\), which we obtain in Lemma 6.4. This is the approach taken to prove Lemma 3.9 in [25], which is essentially the same result when \(n=1\) so that \(\bar{C}\) consists of only one box. Since for us \(\bar{C}\) consists of many boxes, bounding the minimum of \(\sigma (\cdot )\) is harder, and achieving a good enough bound will consume most of our efforts in this section.

We prove Proposition 6.1 after introducing Lemma 6.3 and Lemma 6.4. To prove Lemma 6.3 we will need the following lemma, which roughly speaking says that \(E[H_{V}]\approx \frac{N^{d}}{\text{ cap}(V)}\) for appropriate sets \(V\subset \mathbb{T }_{N}\).

Lemma 6.2

(\(d\ge 3\), \(N\ge 3\)) For any (non-empty) \(V\subset \bar{C}\) let \(V^{i}=(V\cap C_{i})-x_{i}\subset \mathbb{Z }^{d},i=1,\ldots ,n\). Then if \(s\ge c(\varepsilon )\) we have

$$\begin{aligned} \frac{N^{d}}{E[H_{V}]\sum _{i=1}^{n}\,\mathrm{cap}(V^{i})}\le 1+c(\varepsilon )s^{-c(\varepsilon )}. \end{aligned}$$
(6.4)

Furthermore if \(V\subset \bar{B}\) and \(n\le s^{c(\varepsilon )}\) then

$$\begin{aligned}&1-c(\varepsilon )s^{-c(\varepsilon )}\le \frac{N^{d}}{E[H_{V}]\sum _{i=1}^{n} \mathrm{cap}(V^{i})},\quad \text{ and} \end{aligned}$$
(6.5)
$$\begin{aligned}&(1-c(\varepsilon )s^{-c(\varepsilon )})E[H_{V}]\le \underset{x\notin \bar{C}}{\inf }E_{x}[H_{V}]\le \underset{x\in \mathbb{T }_ {N}}{\sup }E_{x}[H_{V}]\le (1+cN^{-c(\varepsilon )})E[H_{V}].\nonumber \\ \end{aligned}$$
(6.6)

The proof of Lemma 6.2 is contained in the appendix. We are now ready to prove Lemma 6.3 about the spectral gap of \(P^{\bar{C}}\).

Lemma 6.3

(\(d\ge 3\), \(N\ge 3\)) If \(n\le s^{c(\varepsilon )}\) and \(s\ge c(\varepsilon )\) we have

$$\begin{aligned} \lambda _{1}^{\bar{C}}-\lambda _{2}^{\bar{C}}\ge cN^{-2}. \end{aligned}$$
(6.7)

Proof

Lemma A.3 of [25] contains a proof for \(n=1\) (note that \(B\) in that lemma plays the role of \(\bar{C}\) in this lemma). The proof for \(n>1\) is almost identical; one replaces \(B\) with \(\bar{C}\) and the inequality \(E[H_{B}]\ge c(\varepsilon )N^{2+\frac{\varepsilon (d-2)}{2}}\) with

$$\begin{aligned} E[H_{\bar{C}}]\overset{(2.7),(5.2),(6.4)}{\ge }\frac{c(\varepsilon )N^{d}}{s^{(1-\frac{\varepsilon }{4})(d-2)}n}\overset{s\le N}{\ge }c(\varepsilon )s^{\frac{\varepsilon (d-2)}{8}}\frac{N^{2+ \frac{\varepsilon (d-2)}{8}}}{n}\overset{s\ge c(\varepsilon ),n\le N^{\frac{\varepsilon (d-2)}{16}}}{\ge }cN^{2+\frac{\varepsilon (d-2)}{16}}.\nonumber \\ \end{aligned}$$
(6.8)

We omit the details. \(\square \)

The bound on the minimum of \(\sigma (\cdot )\) comes from the following lemma.

Lemma 6.4

(\(d\ge 3\), \(N\ge 3\)) If \(s\ge c(\varepsilon )\) we have

$$\begin{aligned} \inf _{x\in \mathbb{T }_{N}\backslash \bar{C}}\sigma (x)\ge N^{-cn}. \end{aligned}$$
(6.9)

We will prove Lemma 6.4 after finishing the proof of Proposition 6.1.

Proof of Proposition 6.1

Note that \(P_{x}[Y_{t^{\star }}=y,H_{\bar{C}}>t^{\star }]=\delta _{x}^{T}e^{-t^{\star }(I-P^{\bar{C}})}\delta _{y}\) for \(x,y\in \mathbb{T }_{N}\backslash \bar{C}\), so

$$\begin{aligned} P_{x}[Y_{t^{\star }}=y|H_{\bar{C}}>t^{\star }]= \frac{\delta _{x}^{T}e^{-t^{\star }(I-P^{\bar{C}})} \delta _{y}}{\delta _{x}^{T}e^{-t^{\star }(I-P^{\bar{C}})} \mathbf{1}}, \end{aligned}$$
(6.10)

where \(\mathbf{1}\) denotes the vector \((1,\ldots ,1)\in \mathbb{R }^{N^{d}-|\bar{C}|}\). By the spectral theorem we have

$$\begin{aligned} e^{-t^{\star }(I-P^{\bar{C}})} = e^{-t^{\star }(1-\lambda _{1}^{\bar{C}})}v_{1}v_{1}^{T}+e^{-t^{\star }(1-\lambda _{2}^{\bar{C}})}R, \end{aligned}$$

where \(R\) is an operator mapping onto the space orthogonal to \(v_{1}\), with operator norm at most \(1\) (we use the Euclidean norm on \(\mathbb{R }^{N^{d}-|\bar{C}|}\)). We thus see from (6.10), first dividing the numerator and the denominator by \(e^{-t^{\star }(1-\lambda _{1}^{\bar{C}})}\), and then by \((v_{1})_{x}(v_{1}^{T}\mathbf{1})=\sigma (x)(v_{1}^{T}\mathbf{1})^{2}\) (recall (6.1)), that if we let \(R^{\prime }=\frac{e^{-t^{\star }(\lambda _{1}^{\bar{C}}-\lambda _{2}^{\bar{C}})}}{\sigma (x)(v_{1}^{T}\mathbf{1})^{2}}R\) then

$$\begin{aligned} P_{x}[Y_{t^{\star }}=y|H_{\bar{C}}>t^{\star }]&= \frac{(v_{1})_{x}(v_{1})_{y}+\delta _{x}^{T}e^{-t^{\star }(\lambda _{1}^{\bar{C}}-\lambda _{2}^{\bar{C}})}R\delta _{y}}{(v_{1})_{x}(v_{1}^{T}\mathbf{1})+\delta _{x}^{T}e^{-t^{\star }(\lambda _{1}^{\bar{C}}-\lambda _{2}^{\bar{C}})}R\mathbf{1}}\nonumber \\&\mathop {=}\limits ^{(6.1)} \frac{\sigma (y)+\delta _{x}^{T}R^{\prime }\delta _{y}}{1+\delta _{x}^{T}R^{\prime }\mathbf{1}}. \end{aligned}$$
(6.11)

Now since we require \(n\le s^{c(\varepsilon )}\) and \(s\ge c(\varepsilon )\) both Lemma 6.3 and Lemma 6.4 hold. Therefore

$$\begin{aligned} |R^{\prime }\delta _{y}|,|R^{\prime }\mathbf{1}|\le N^{d}\frac{e^{-t^{\star }(\lambda _{1}^{\bar{C}}- \lambda _{2}^{\bar{C}})}}{\sigma (x)|v_{1}^{T}\mathbf{1}|^{2}}\le ce^{-cN^{c(\varepsilon )}}, \end{aligned}$$

since

$$\begin{aligned}&e^{-t^{\star }(\lambda _{1}^{\bar{C}}-\lambda _{2}^{\bar{C}})} \mathop {\le }\limits ^{(5.4),(6.7)} e^{-cN^{\frac{\varepsilon }{100}}},\qquad \sigma (x)\mathop {\ge }\limits ^{(6.9)}N^{-cn}\overset{n\le s^{\frac{\varepsilon }{200}}\le N^{\frac{\varepsilon }{200}}}{\ge }e^{-cN^{\frac{\varepsilon }{200}}\log N},\\&\quad |v_{1}^{T}\mathbf{1}|\ge |v_{1}|^{2}=1, \end{aligned}$$

and \(N\ge c(\varepsilon )\) (since \(s\ge c(\varepsilon )\)). Thus (6.3) follows from (6.11) (using again that \(N\ge c(\varepsilon )\)). This completes the proof of Proposition 6.1. \(\square \)

It still remains to prove Lemma 6.4. The proof will involve further concentric boxes \(D\), \(E\) and \(F\) such that \(A\subset B\subset C\subset D\subset E\subset F\) defined by

$$\begin{aligned} D=B(0,s^{1-\frac{\varepsilon }{8}})\subset E=B(0,s^{1-\frac{\varepsilon }{16}})\subset F=B(0,s^{1-\frac{\varepsilon }{32}}). \end{aligned}$$
(6.12)

Proof of Lemma 6.4

Let \(y\) be a point at which \(\sigma (\cdot )\) attains its maximum. Since \(\sigma (\cdot )\) is a probability distribution we have \(\sigma (y)\ge N^{-d}\). Also by reversibility we have for any \(x\notin \bar{C}\) and \(t\ge 0\) that \(P_{x}[Y_{t}=y,H_{\bar{C}}>t]=P_{y}[Y_{t}=x,H_{\bar{C}}>t]\) and thus

$$\begin{aligned} P_{y}[Y_{t}=x|H_{\bar{C}}>t]=P_{x}[Y_{t}=y|H_{\bar{C}}>t] \frac{P_{x}[H_{\bar{C}}>t]}{P_{y}[H_{\bar{C}}>t]}. \end{aligned}$$
(6.13)

Since by the Markov property \(P_{x}[H_{\bar{C}}>t]\ge P_{x}[H_{y}<H_{\bar{C}}]P_{y}[H_{\bar{C}}>t]\) we see, by taking the limit \(t\rightarrow \infty \) in (6.13) and using (6.2), that \(\sigma (x)\ge \sigma (y)P_{x}[H_{y}<H_{\bar{C}}]\ge N^{-d}P_{x}[H_{y}<H_{\bar{C}}]\). To prove (6.9) it thus suffices to show that

$$\begin{aligned} P_{x}[H_{y}<H_{\bar{C}}]\ge N^{-cn}\quad \text{ for} \text{ all} \,\,x,y\notin \bar{C}. \end{aligned}$$
(6.14)

For \(i=1,\ldots ,n\), and \(x\in D_{i}\backslash C_{i}\) (recall (5.3), (6.12)) it follows from a one-dimensional random walk estimate that \(P_{x}[T_{D_{i}}<H_{\bar{C}}]\ge N^{-1}\), so that by the Markov property \(P_{x}[H_{y}<H_{\bar{C}}]\ge N^{-1}\inf _{x^{\prime }\in \partial _{e}D_{i}}P_{x^{\prime }}[H_{y}<H_{\bar{C}}]\). If \(x\notin \bar{D}\) and \(y\in \bar{F}\backslash \bar{C}\) then we can use that by reversibility \(P_{x}[H_{y}<H_{\bar{C}}]=P_{y}[H_{x}<H_{\bar{C}}]\) and another one-dimensional random walk estimate to show \(P_{x}[H_{y}<H_{\bar{C}}]\ge N^{-1}\inf _{y^{\prime }\in \partial _{e}\bar{F}}P_{x}[H_{y^{\prime }}<H_{\bar{C}}]\). To prove (6.14) and thus (6.9) it therefore suffices to show (recall \(y\notin \bar{F}\))

$$\begin{aligned} P_{x}[H_{y}<H_{\bar{C}}]\ge N^{-cn}\quad \text{ for} \text{ all} \,\,x\notin \bar{D},y\notin \bar{F}. \end{aligned}$$
(6.15)

Now fix \(y\notin \bar{F}\) and note that \(P_{x}[H_{y}<H_{\bar{C}}]\ge cs^{-c}\) for all \(x\in \partial _{e}(D+y)\), since \(P_{x}[H_{y}<T_{E+y}]\ge cs^{-c}\) (e.g. by Proposition 1.5.9, p. 35 in [12]) and \((E+y)\cap \bar{C}=\emptyset \), since \(s\ge c(\varepsilon )\). Therefore to prove (6.15) and thus (6.9) it suffices to show

$$\begin{aligned} P_{x_{1}}[H_{y}<H_{\bar{C}}]\ge N^{-cn}P_{x_{2}}[H_{y}<H_{\bar{C}}]\quad \text{ for} \text{ all} \,\,x_{1},x_{2}\notin \bar{D}\cup (y+D).\qquad \end{aligned}$$
(6.16)

Consider the function \(x\rightarrow P_{x}[H_{y}<H_{\bar{C}}]\). This function is non-negative and harmonic on \(\left(\bar{C}\cup \{y\}\right)^{c}\). Thus by the Harnack inequality (Theorem 1.7.2, p. 42 in [12]) we have, for any \(z\in \mathbb{T }_{N}\) and \(r\ge 0\) for which \(B(z,2(r+1))\cap (\bar{C}\cup \{y\})=\emptyset \), that

$$\begin{aligned} \inf _{x\in B(z,r+1)}P_{x}[H_{y}<H_{\bar{C}}]\ge c\sup _{x\in B(z,r+1)}P_{x}[H_{y}<H_{\bar{C}}]. \end{aligned}$$
(6.17)

To iterate this inequality we need the following lemma.

Lemma 6.5

(\(d\ge 3\), \(N\ge 3\)) If \(s\ge c(\varepsilon )\) one can cover \(\left(\bar{D}\cup (y+D)\right)^{c}\) by \(m\le cn\log N\) balls \(B(z_{i},r_{i}),i=1,\ldots ,m\), that satisfy \(B(z_{i},2(r_{i}+1))\cap (\bar{C}\cup \{y\})=\emptyset \).

Before proving Lemma 6.5 we use it to show (6.9). The balls \(B(z_{i},r_{i}+1)\) “overlap” and cover the connected set \(\left(\bar{D}\cup (y+D)\right)^{c}\) (recall \(s\ge c(\varepsilon )\)), so for any \(x_{1},x_{2}\in \left(\bar{D}\cup (y+D)\right)^{c}\) we can find a “path” of at most \(cn\log N\) balls \(B(z,r+1)\) satisfying (6.17), such that any two consecutive balls intersect, and such that \(x_{1}\) is in the first ball and \(x_{2}\) is in the last. Applying (6.17) at most \(cn\log N\) times along these “paths” yields (6.16). This completes the proof of (6.9), so only the proof of Lemma 6.5 remains.
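Quantitatively, applying (6.17) once for each consecutive pair of balls in such a chain gives

$$\begin{aligned} P_{x_{1}}[H_{y}<H_{\bar{C}}]\ge c^{cn\log N}\,P_{x_{2}}[H_{y}<H_{\bar{C}}]=N^{-cn\log \frac{1}{c}}\,P_{x_{2}}[H_{y}<H_{\bar{C}}], \end{aligned}$$

since \(c^{\log N}=N^{\log c}\); this is a bound of the form \(N^{-cn}\) required in (6.16).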

Proof of Lemma 6.5

By a standard argument \(\left(B(0,\frac{N}{4})\right)^{c}\) can be covered by a bounded number of balls \(B(z,r)\) that satisfy \(B(z,4r+2)\cap B(0,\frac{N}{8})=\emptyset \) (when \(N\ge c\), which we may assume since we require \(s\ge c\)). Furthermore each of the annuli \(B(0,2^{l+1}s^{1-\frac{\varepsilon }{8}})\backslash B(0,2^{l}s^{1-\frac{\varepsilon }{8}}),l=0,1,2,3,\ldots \) can be covered by a bounded number of balls \(B(z,r)\) that satisfy \(B(z,4r+2)\cap B(0,2^{l-1}s^{1-\frac{\varepsilon }{8}})=\emptyset \) (since \(s^{1-\frac{\varepsilon }{8}}\ge c\)). Now combining at most \(c\log N\) coverings of annuli with the covering of \(\left(B(0,\frac{N}{4})\right)^{c}\) we get that (provided \(s\ge c(\varepsilon )\) so that \(C\subset B(0,\frac{1}{2}s^{1-\frac{\varepsilon }{8}})\))

$$\begin{aligned} \text{ one} \text{ can} \text{ cover} \,\,D^{c} \text{ by} \text{ at} \text{ most} \,\,c\log N \text{ balls} \,\,B(z,r) \text{ with} \,\,B(z,4r+2)\cap C=\emptyset .\nonumber \\ \end{aligned}$$
(6.18)

We can now use (6.18) to get for each \(x\in \{x_{1},\ldots ,x_{n},y\}\) a covering \(\mathcal{B }_{x}\) of \(\left(x+D\right)^{c}\) consisting of at most \(c\log N\) balls such that if \(B(v,r)\in \mathcal{B }_{x}\) then \(B(v,4r+2)\cap (x+C)=\emptyset \). We now combine the coverings by picking for every \(x\notin \bar{D}\cup (y+D)\) a ball from \(\mathcal{B }_{z(x)}\) that contains \(x\), where \(z(x)\) is a member of \(\{x_{1},\ldots ,x_{n},y\}\) of minimal \(d_{\infty }\) distance to \(x\). This gives a covering of \((\bar{D}\cup (y+D))^{c}\) by at most \(c(n+1)\log N\le cn\log N\) balls. Also if \(x\notin \bar{D}\cup (y+D)\) and \(x \in B(v,r)\in \mathcal{B }_{z(x)}\) then \(B(v,4r+2)\cap (z(x)+C)=\emptyset \), which implies \(d_{\infty }(v,z(x)) > s^{1-\frac{\varepsilon }{4}}+4r+2\). This in turn implies \(d_{\infty }(v,\{x_{1},\ldots ,x_{n},y\}) \ge d_{\infty }(x,z(x))-d_{\infty }(v,x) \ge d_{\infty }(v,z(x)) - 2d_{\infty }(v,x) > s^{1-\frac{\varepsilon }{4}}+2(r+1)\) (using that \(z(x)\) is of minimal distance to \(x\) and \(d_{\infty }(x,v)\le r\)), so that \(B(v,2(r+1))\cap (\bar{C}\cup (y+C))=\emptyset \). Thus we have the desired covering and the proof of Lemma 6.5 is complete. \(\square \)

This completes the proof of Lemma 6.4. \(\square \)

Finally we state and prove Lemma 6.6, which says that the hitting distribution on \(\partial _{i}\bar{A}\) when starting from \(\sigma \) is approximately the normalized sum of the equilibrium distributions on \(\partial _{i}A_{1},\ldots ,\partial _{i}A_{n}\).

Lemma 6.6

(\(d\ge 3\), \(N\ge 3\)) If \(s\ge c(\varepsilon )\) and \(n\le s^{c(\varepsilon )}\) then for all \(i=1,\ldots ,n,\)

$$\begin{aligned} \frac{e_{A}(x-x_{i})}{n\, \mathrm{cap}(A)}(1-cs^{-c(\varepsilon )})&\le P_{\sigma }[Y_{H_{\bar{A}}}=x]\nonumber \\&\le \frac{e_{A}(x-x_{i})}{n\, \mathrm{cap}(A)}(1+cs^{-c(\varepsilon )})\quad \text{ for} \text{ all} \,\,x\in \partial _{i}A_{i}.\nonumber \\ \end{aligned}$$
(6.19)

Proof

The proof is very similar to the proof of Lemma 3.10 in [25]. By redefining \(t^{\star }\) and \(U\) from [25] to agree with our definition in (5.4), replacing \(A\) with \(\bar{A}\), \(B\) with \(\bar{C}\), and the application of Lemma 3.9 from [25] with an application of Proposition 6.1 (which is allowed since we assume \(n\le s^{c(\varepsilon )}\) and \(s\ge c(\varepsilon )\)) the argument leading up to (3.42) in [25] becomes a proof of

$$\begin{aligned} |P_{x}[\tilde{H}_{\bar{A}}>U]-P_{\sigma }[Y_{H_{\bar{A}}}=x]\sum \limits _{y\in \partial _{i}\bar{A}}P_{y}[\tilde{H}_{\bar{A}}>U]|\le ce^{-cN^{c(\varepsilon )}}\quad \text{ for} \text{ all} \,\,x\in \partial _{i}\bar{A}.\nonumber \\ \end{aligned}$$
(6.20)

Furthermore note that by (2.9), (5.4) and the strong Markov property applied at time \(T_{\bar{D}}\) we have \(e_{A,D}(x-x_{i})\inf _{z\notin \bar{D}}P_{z}[H_{\bar{C}}>U]\le P_{x}[\tilde{H}_{\bar{A}}>U]\). But also

$$\begin{aligned}&\sup _{x\notin \bar{D}}P_{x}[H_{\bar{C}}<U]\nonumber \\&\quad \overset{(5.4)}{=}\sup _{x\notin \bar{D}}P_{x}[H_{\bar{C}}<N^{2+\frac{\varepsilon }{100}}]\overset{(2.11)}{\le }nc(\varepsilon )s^{-c\varepsilon }\le c(\varepsilon )s^{-c\varepsilon } \text{ if} \,n\le s^{c\varepsilon },\nonumber \\ \end{aligned}$$
(6.21)

and thus by (2.14) we have \((1-cs^{-c(\varepsilon )})e_{A}(x-x_{i})\le P_{x}[\tilde{H}_{\bar{A}}>U]\) (recall \(s\ge c(\varepsilon )\)). Now

$$\begin{aligned} T_{C}\overset{(5.4)}{\le }U \; \text{ so}\; P_{x}[\tilde{H}_{\bar{A}}>U]\overset{(2.11)}{\le }e_{A,C}(x-x_{i}), \end{aligned}$$

and using (2.15) with \(r=s^{1-\varepsilon }\) and \(\lambda \) such that \(s^{(1-\varepsilon )(1+\lambda )}=s^{1-\frac{\varepsilon }{4}}\) we get \(e_{A,C}(x-x_{i})\le (1+cs^{-c(\varepsilon )})e_{A}(x-x_i)\) (since \(s\ge c(\varepsilon )\)). Thus

$$\begin{aligned} (1-cs^{-c(\varepsilon )})e_{A}(x-x_{i})\le P_{x}[\tilde{H}_{\bar{A}}>U]\le e_{A}(x-x_{i})(1+cs^{-c(\varepsilon )}). \end{aligned}$$
(6.22)

But plugging (6.22) into (6.20), and using (2.6) and (2.13) yields (6.19), cf. below (3.44) in [25]. We omit the details. \(\square \)

7 Poissonization

In this section the goal is to construct the coupling of random walk in the torus with Poisson point processes of intensity a multiple of \(\kappa _{1}\), i.e. prove Proposition 5.1. We recall the standing assumption (5.1).

First we define \(R_{k},k\ge 1\), the successive returns to \(\bar{A}\) (see (5.3) for the notation) and \(U_{k},k\ge 0,\) the successive “departures” from \(\bar{C}\), by

$$\begin{aligned} U_{0}=0,U_{k}=U\circ \theta _{R_{k}}+R_{k},k\ge 1,R_{1}=H_{\bar{A}},R_{k}=H_{\bar{A}}\circ \theta _{U_{k-1}}+U_{k-1},k\ge 2.\nonumber \\ \end{aligned}$$
(7.1)

We call the segments \((Y_{(R_{k}+\cdot )\wedge U_{k}})_{k\ge 1}\) the excursions of the random walk. The first step in the proof of Proposition 5.1 is to couple the random walk \(Y_{\cdot }\), when it starts from the quasistationary distribution, with i.i.d. processes \(\tilde{Y}_{\cdot }^{1},\tilde{Y}_{\cdot }^{2},\ldots \) with law \(P_{\sigma }[Y_{\cdot \wedge U_{1}}\in dw]\), such that with high probability \(Y(U_{i-1},U_{i})\cap \bar{C}=\tilde{Y}^{i}(0,\infty )\cap \bar{C}\). This is done in Lemma 7.1 using Proposition 6.1 from the previous section.

The second step in the proof of Proposition 5.1 is to relate the stopping times \(R_{k},U_{k}\), to deterministic times, roughly speaking showing that \(U_{[un\,\mathrm{cap}(A)]}\approx uN^{d}\). This is done in Lemma 7.4 using large deviation estimates.

The third step in the proof of Proposition 5.1 is to use the relation \(U_{[un\,\mathrm{cap}(A)]}\approx uN^{d}\) to modify the coupling from Lemma 7.1 so that, very roughly, \(Y(0,uN^{d})\cap \bar{C}\approx \cup _{i=1}^{[un\,\mathrm{cap}(A)]}\tilde{Y}^{i}(0,\infty )\cap \bar{C}\) with high probability. This is done in Proposition 7.5, where we also use a mixing argument to ensure that the coupling, as opposed to that from Lemma 7.1, has \(Y_{\cdot }\) starting from the uniform distribution.

Finally at the end of the section we use Proposition 7.5 to prove Proposition 5.1, essentially by constructing a point process from \(\tilde{Y}_{\cdot }^{1},\tilde{Y}_{\cdot }^{2},\ldots ,\tilde{Y}_{\cdot }^{J}\), where \(J\) is a Poisson random variable. We will see that this gives rise to a Poisson point process which we can modify, using Lemma 6.6, to “change” the intensity from a multiple of \(P_{\sigma }[Y_{\cdot \wedge U}\in dw]\) to a multiple of \(P_{e}[Y_{\cdot \wedge U_{1}}\in dw]\) (i.e. of \(\kappa _{1}\)).
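The standard fact underlying this construction is that an i.i.d. sequence with an independent Poisson number of terms forms a Poisson point process: if \(\tilde{Y}_{\cdot }^{1},\tilde{Y}_{\cdot }^{2},\ldots \) are i.i.d. with law \(\bar{\mu }\) on \(\Gamma (\mathbb{T }_{N})\) and \(J\) is an independent Poisson random variable with mean \(\gamma \), then

$$\begin{aligned} \sum \limits _{i=1}^{J}\delta _{\tilde{Y}_{\cdot }^{i}}\quad \text{ is} \text{ a} \text{ Poisson} \text{ point} \text{ process} \text{ on} \,\,\Gamma (\mathbb{T }_{N})\,\,\text{ with} \text{ intensity} \,\,\gamma \bar{\mu }. \end{aligned}$$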

We now state and prove Lemma 7.1 which couples \(Y_{\cdot }\) under \(P_{\sigma }\) with i.i.d. excursions.

Lemma 7.1

(\(d\ge 3\), \(N\ge 3\)) If \(n\le s^{c(\varepsilon )}\) and \(s\ge c(\varepsilon )\) we can construct a coupling \((\Omega _{3},\mathcal{A }_{3},Q_{3})\) of a process \(Y_{\cdot }\) with law \(P_{\sigma }\) and an i.i.d. sequence \(\tilde{Y}_{\cdot }^{1},\tilde{Y}_{\cdot }^{2},\ldots \) with law \(P_{\sigma }[Y_{\cdot \wedge U_{1}}\in dw]\) such that

$$\begin{aligned} Q_{3}[Y(U_{i-1},U_{i})\cap \bar{C}\ne \tilde{Y}^{i}(0,\infty )\cap \bar{C}]\le e^{-cN^{c(\varepsilon )}}\quad \text{ for} \text{ all} \,\,i\ge 1. \end{aligned}$$
(7.2)

Proof

For each \(i\) we define \(L_{i}\), the last time that \(Y_{\cdot }\) leaves \(\bar{C}\) before \(U_{i}\), by

$$\begin{aligned} L_{i}=L\circ \theta _{U_{i-1}}+U_{i-1},i\ge 1,\quad \text{ where}\; L=\sup \{t\le U_{1}:Y_{t}\in \bar{C}\}, \end{aligned}$$

so that \(0=U_{0}\le L_{1}\le U_{1}\le L_{2}\le U_{2}\le \ldots \). Define for convenience

$$\begin{aligned} \mathcal{L }_{i}=Y_{L_{i}},\mathcal{U }_{i}=Y_{U_{i}}\quad \text{ and} \quad \hat{Y}_{\cdot }^{i}=Y_{(U_{i-1}+\cdot )\wedge L_{i}},i\ge 1. \end{aligned}$$
(7.3)

Note that \(\mathcal{L }_{i}\in \partial _{e}\bar{C}\) almost surely since \(Y_{\cdot }\) is càdlàg. Also note that the \(L_{i}\) are not stopping times. However, if we let \(\sigma _{y}(z)=P_{y}[Y_{t^{\star }}=z|H_{\bar{C}}>t^{\star }]\) (cf. (6.3)) we have

Lemma 7.2

For any \(k\ge 1,x\in \mathbb{T }_{N},y\in \partial _{e}\bar{C},z\in \bar{C}^{c},\) and \(E,F\subset \Gamma (\mathbb{T }_{N})\) measurable

$$\begin{aligned} P_{x}[Y_{\cdot \wedge L_{k}}\in E,\mathcal{L }_{k}=y,\mathcal{U }_{k}=z,Y_{U_{k}+\cdot }\in F]=P_{x}[Y_{\cdot \wedge L_{k}}\in E,\mathcal{L }_{k}=y]\sigma _{y}(z)P_{z}[F].\nonumber \\ \end{aligned}$$
(7.4)

Proof

Let \(E^{\prime }=E\cap \{w:w \text{ constant} \text{ eventually},w(\infty )=y\}\) (see below (2.1) for the notation) and \(F^{\prime }=F\cap \{w:w(0)=z\}\), so that the left-hand side of (7.4) equals

$$\begin{aligned} P_{x}[Y_{\cdot \wedge L_{k}}\in E^{\prime },Y_{U_{k}+\cdot }\in F^{\prime }]=\sum \limits _{i\ge 1}P_{x}[Y_{\cdot \wedge \tau _{i}}\in E^{\prime },L_{k}=\tau _{i},Y_{\tau _{i}+t^{\star }+\cdot }\in F^{\prime }],\quad \end{aligned}$$
(7.5)

(note that \(U_{i}\overset{(5.4)}{=}L_{i}+t^{\star }\), and recall (2.2)). Now \(G=\{Y_{\tau _{i-1}}\in \bar{C},U_{k-1}\le \tau _{i}\le U_{k}\}\) is \(\sigma (Y_{\cdot \wedge \tau _{i}})-\)measurable, and \(G\cap \{H_{\bar{C}}\circ \theta _{\tau _{i}}>t^{\star }\}=\{L_{k}=\tau _{i}\}\), so that by the strong Markov property applied at time \(\tau _{i}\) and the definition of \(E^{\prime }\) we have

$$\begin{aligned}&P_{x}[Y_{\cdot \wedge \tau _{i}}\in E^{\prime },L_{k}=\tau _{i},Y_{\tau _{i}+t^{\star }+\cdot }\in F^{\prime }]\\&\quad =P_{x}[\{Y_{\cdot \wedge \tau _{i}}\in E^{\prime }\}\cap G]P_{y}[H_{\bar{C}}>t^{\star },Y_{t^{\star }+\cdot }\in F^{\prime }]. \end{aligned}$$

But \(P_{y}[Y_{t^{\star }+\cdot }\in F^{\prime },H_{\bar{C}}>t^{\star }]=P_{y}[H_{\bar{C}}>t^{\star }]\sigma _{y}(z)P_{z}[F]\) (by the Markov property applied at time \(t^{\star }\) and the definitions of \(\sigma _{y}\) and \(F^{\prime }\)) so in fact

$$\begin{aligned}&P_{x}[Y_{\cdot \wedge \tau _{i}}\in E^{\prime },L_{k}=\tau _{i},Y_{\tau _{i}+t^{\star }+\cdot }\in F^{\prime }] \\&\quad = P_{x}[\{Y_{\cdot \wedge \tau _{i}}\in E^{\prime }\}\cap G]P_{y}[H_{\bar{C}}>t^{\star }]\sigma _{y}(z)P_{z}[F]\\&\quad = P_{x}[\{Y_{\cdot \wedge \tau _{i}}\in E^{\prime }\}\cap G\cap \{H_{\bar{C}}\circ \theta _{\tau _{i}}>t^{\star }\}]\sigma _{y}(z)P_{z}[F]\\&\quad = P_{x}[Y_{\cdot \wedge \tau _{i}}\in E^{\prime },L_{k}=\tau _{i}]\sigma _{y}(z)P_{z}[F], \end{aligned}$$

by an application of the strong Markov property and the definition of \(G\). Plugging this into (7.5) and using the definition of \(E^{\prime }\) gives (7.4). \(\square \)

We now continue with the proof of Lemma 7.1. Because of (6.3) (together with Proposition 4.7, p. 50 in [14]) we can construct for each \(y\in \bar{C}^{c}\) a measure \(q_{y}(\cdot ,\cdot )\) on \(\bar{C}^{c}\times \bar{C}^{c}\) coupling \(\sigma \) and \(\sigma _{y}\), such that

$$\begin{aligned} \text{ the} \text{ first} \text{ marginal} \text{ is} \,\,\sigma (\cdot ), \text{ the} \text{ second} \text{ is} \,\,\sigma _{y}(\cdot ), \text{ and} \,\sum \limits _{z\in \mathbb{T }_{N}}q_{y}(z,z)\ge 1-ce^{-cN^{c(\varepsilon )}}.\nonumber \\ \end{aligned}$$
(7.6)

Let \(q_{y}(\cdot |\cdot )\) denote the conditional distribution of the first argument given the second (note that \(\sigma _{y}(z)>0\) for all \(z\in \bar{C}^{c}\), provided \(s\ge c(\varepsilon )\) so that \(\bar{C}\) consists of disjoint boxes).
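In algorithmic terms, \(q_{y}\) is the classical maximal (total-variation) coupling of the two distributions. The following Python sketch illustrates the construction for two generic distributions \(p\) and \(q\) on a finite set; the set and the distributions are toy stand-ins for \(\bar{C}^{c}\), \(\sigma \) and \(\sigma _{y}\), not the actual objects of the proof.

```python
import numpy as np

def maximal_coupling(p, q, rng):
    """Sample (X, Y) with X ~ p, Y ~ q and P[X != Y] = d_TV(p, q)."""
    overlap = np.minimum(p, q)          # common part of the two densities
    alpha = overlap.sum()               # = 1 - d_TV(p, q)
    if rng.random() < alpha:
        x = rng.choice(len(p), p=overlap / alpha)
        return x, x                     # diagonal piece: X = Y
    # the residual densities have disjoint supports, so here X != Y
    x = rng.choice(len(p), p=(p - overlap) / (1 - alpha))
    y = rng.choice(len(q), p=(q - overlap) / (1 - alpha))
    return x, y

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.45, 0.35, 0.2])
mismatch = np.mean([x != y for x, y in
                    (maximal_coupling(p, q, rng) for _ in range(10**4))])
print(mismatch)  # fluctuates around d_TV(p, q) = 0.05
```

The diagonal mass of the coupling produced this way is exactly \(1\) minus the total variation distance; applied with \(p=\sigma \) and \(q=\sigma _{y}\), which by (6.3) are close in total variation, this yields precisely (7.6).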

We now construct \((\Omega _{3},\mathcal{A }_{3},Q_{3})\) as a space with the following mutually independent families of random variables

$$\begin{aligned}&Y_{\cdot } \text{ with} \text{ law} \,\,P_{\sigma },\end{aligned}$$
(7.7)
$$\begin{aligned}&(V_{y,z,i})_{y,z\in \bar{C}^{c},i\ge 1} \text{ independent} \,\bar{C}^{c}-\text{ valued,} \text{ where} \,\,V_{y,z,i} \text{ has} \text{ law} \,q_{y}(dw|z),\end{aligned}$$
(7.8)
$$\begin{aligned}&(Z_{\cdot }^{v,i})_{v\in \bar{C}^{c},i\ge 1} \text{ independent} \,\Gamma (\mathbb{T }_{N})-\text{ valued,} \text{ where} \,Z_{\cdot }^{v,i} \text{ has} \text{ law} \,\,P_{v}[\hat{Y}_{\cdot \wedge L_{1}}^{1}\in dw],\nonumber \\ \end{aligned}$$
(7.9)

(recall (7.3) for the definition of \(\hat{Y}_{\cdot \wedge L_{1}}^{1}\)). We define on \(\Omega _{3}\) starting points of excursions \(\Sigma _{i}\) by

$$\begin{aligned} \Sigma _{i}=V_{\mathcal{L }_{i},\mathcal{U }_{i},i}\quad \text{ for} \,i\ge 1. \end{aligned}$$
(7.10)

We will see that the \(\Sigma _{i}\) are i.i.d. with law \(\sigma \), but coincide with the \(\mathcal{U }_{i}\) with high probability. Furthermore define the excursions \(\bar{Y}^{i}\), with starting points \(\Sigma _{i-1}\) for \(i\ge 2\), by

$$\begin{aligned} \bar{Y}_{\cdot }^{1}=\hat{Y}_{\cdot }^{1} \text{ and} \,\,\bar{Y}_{\cdot }^{i+1}= \left\{ \begin{array}{ll} \hat{Y}_{\cdot }^{i+1}&\quad \text{ if} \,\mathcal{U }_{i}=\Sigma _{i}\\ Z_{\cdot }^{\Sigma _{i},i}&\quad \text{ if} \,\mathcal{U }_{i}\ne \Sigma _{i} \end{array}\right. \text{ for} \,i\ge 1. \end{aligned}$$
(7.11)

(By a slight abuse of notation we view \(\hat{Y}_{\cdot }^{i},\mathcal{U }_{i},\mathcal{L }_{i}\) and \(Y_{\cdot }\) as being defined on \(\Omega _{3}\) as well as on \(\Gamma (\mathbb{T }_{N})\)). We will see that the \(\bar{Y}_{\cdot }^{i},i\ge 1,\) are i.i.d. with law \(P_{\sigma }[Y_{\cdot \wedge L_{1}}\in dw]\), essentially because the starting points \(\Sigma _{i}\) are i.i.d. with law \(\sigma \). Furthermore we will see that \(\bar{Y}_{\cdot }^{i+1}\) coincides with \(\hat{Y}_{\cdot }^{i+1}\) with high probability, because \(\Sigma _{i}\) coincides with \(\mathcal{U }_{i}\) (the starting point of \(\hat{Y}_{\cdot }^{i+1}\)) with high probability. To this end note that for all \(i\ge 2\)

$$\begin{aligned} Q_{3}[\bar{Y}_{\cdot }^{i}=\hat{Y}_{\cdot }^{i}]&\mathop {=}\limits ^{(7.10),(7.11)} Q_{3}[\mathcal{U }_{i-1}=V_{\mathcal{L }_{i-1},\mathcal{U }_{i-1},i-1}]\\&=\sum \limits _{y,z\in \mathbb{T }_{N}}Q_{3}[\mathcal{L }_{i-1}=y,\mathcal{U }_{i-1}=z,V_{y,z,i-1}=z]. \end{aligned}$$

Now by independence and (7.8) we have \(Q_{3}[\mathcal{L }_{i-1}=y,\mathcal{U }_{i-1}=z,V_{y,z,i-1}=z]=P_{\sigma }[\mathcal{L }_{i-1}=y,\mathcal{U }_{i-1}=z]q_{y}(z|z)\). Furthermore \(P_{\sigma }[\mathcal{L }_{i-1}=y,\mathcal{U }_{i-1}=z]=P_{\sigma }[\mathcal{L }_{i-1}=y]\sigma _{y}(z)\) by (7.4) so

$$\begin{aligned}&Q_{3}[\bar{Y}_{\cdot }^{i}=\hat{Y}_{\cdot }^{i}]\nonumber \\&\quad =\sum \limits _{y\in \mathbb{T }_{N}}\{P_{\sigma }[\mathcal{L }_{i-1}=y] \sum \limits _{z\in \mathbb{T }_{N}}\sigma _{y}(z)q_{y}(z|z)\} \mathop {\ge }\limits ^{(7.6)}1-ce^{-cN^{c(\varepsilon )}} \text{ for} \text{ all} \,\,i\ge 1,\quad \end{aligned}$$
(7.12)

where we have used that \(\sigma _{y}(z)q_{y}(z|z)=q_{y}(z,z)\) and that \(\bar{Y}_{\cdot }^{1}=\hat{Y}_{\cdot }^{1}\) almost surely. The next lemma will be used to show that the \(\bar{Y}_{\cdot }^{i}\) are i.i.d. with law \(P_{\sigma }[\hat{Y}_{\cdot \wedge L_{1}}^{1}\in dw]\).

Lemma 7.3

For any measurable \(E_{1},\ldots ,E_{k}\subset \Gamma (\mathbb{T }_{N})\), let \(F_{k}=\{\bar{Y}_{\cdot }^{i}\in E_{i},1\le i\le k\}\). Then for all \(y,z\in \bar{C}^{c}\) and measurable \(F\subset \Gamma (\mathbb{T }_{N})\) we have

$$\begin{aligned} Q_{3}[F_{k}\cap \left\{ \mathcal{L }_{k}=y,\mathcal{U }_{k}=z,Y_{U_{k}+\cdot }\in F\right\} ]=Q_{3}[F_{k}\cap \left\{ \mathcal{L }_{k}=y\right\} ]\sigma _{y}(z)P_{z}[F],\nonumber \\ \end{aligned}$$
(7.13)

Proof

Essentially speaking (7.13) follows directly from (7.4) because the event \(F_{k}\) only depends on \(Y_{\cdot \wedge L_{k}}\), \(Z_{\cdot }^{v,i}\) and \(V_{y,z,i}\) for \(i\le k\) (see (7.11)), where the \(Z_{\cdot }^{v,i}\) and \(V_{y,z,i}\) are independent of \(Y_{U_{k}+\cdot }\) by construction. We omit the details (which involve “conditioning on \(\mathcal{L }_{i},\mathcal{U }_{i},\Sigma _{i},i\le k-1\)”, i.e. considering \(Q_{3}[F_{k}\cap \left\{ \mathcal{L }_{k}=y,\mathcal{U }_{k}=z,Y_{U_{k}+\cdot }\in F\right\} \cap K(\bar{y},\bar{z},\bar{v})]\), where \(K(\bar{y},\bar{z},\bar{v})\!=\!\{(\mathcal{L }_{i})_{i=1}^{k-1}\!=\! \bar{y},(\mathcal{U }_{i})_{i=1}^{k-1}=\bar{z},(V_{\bar{y}_{i}, \bar{z}_{i},i})_{i=1}^{k-1}=\bar{v}\}\) for vectors \(\bar{y},\bar{z},\bar{v}\) in \((\bar{C}^{c})^{k-1}\)). \(\square \)

We now continue with the proof of Lemma 7.1 by showing that the \(\bar{Y}_{\cdot }^{i}\) are i.i.d. with law \(P_{\sigma }[Y_{\cdot \wedge L_{1}}\in dw]\). For any measurable \(E_{1},\ldots ,E_{k},E_{k+1}\subset \Gamma (\mathbb{T }_{N})\) let \(F_{k}\) be defined as in Lemma 7.3, let \(F=\{Y_{\cdot \wedge L_{1}}\in E_{k+1}\}\) and note that by (7.11), \(Q_{3}[F_{k}\cap \{\bar{Y}_{\cdot }^{k+1}\in E_{k+1}\}]\) equals

$$\begin{aligned}&\sum \limits _{y,z\in \bar{C}^{c}}Q_{3}[F_{k}\cap \left\{ \mathcal{L }_{k}=y,\mathcal{U }_{k}=z,V_{y,z,k}=z,Y_{U_{k}+\cdot }\in F\right\} ]\end{aligned}$$
(7.14)
$$\begin{aligned}&\quad +\sum \limits _{y,z,v\in \bar{C}^{c},v\ne z}Q_{3}[F_{k}\cap \{\mathcal{L }_{k}=y,\mathcal{U }_{k}=z,V_{y,z,k}=v,Z_{\cdot }^{v,k}\in F\}]. \end{aligned}$$
(7.15)

By independence and (7.8) we have that the probability in (7.14) equals

$$\begin{aligned}&Q_{3}[F_{k}\cap \left\{ \mathcal{L }_{k}=y,\mathcal{U }_{k}=z,Y_{U_{k}+\cdot }\in F\right\} ]q_{y}(z|z)\\&\quad \mathop {=}\limits ^{(7.13)}Q_{3}[F_{k}\cap \left\{ \mathcal{L }_{k}=y\right\} ]\sigma _{y}(z)P_{z}[F]q_{y}(z|z), \end{aligned}$$

and similarly by independence, (7.8) and (7.9), the probability in (7.15) equals

$$\begin{aligned}&Q_{3}[F_{k}\cap \left\{ \mathcal{L }_{k}=y,\mathcal{U }_{k}=z\right\} ] q_{y}(v|z)P_{v}[F]\\&\quad \mathop {=}\limits ^{(7.13)} Q_{3}[F_{k}\cap \left\{ \mathcal{L }_{k}=y\right\} ]\sigma _{y}(z)q_{y}(v|z)P_{v}[F]. \end{aligned}$$

But \(\sigma _{y}(z)P_{z}[F]q_{y}(z|z)=q_{y}(z,z)P_{z}[F]\) and \(\sigma _{y}(z)q_{y}(v|z)P_{v}[F]=q_{y}(v,z)P_{v}[F]\), so

$$\begin{aligned} Q_{3}[F_{k}\cap \{\bar{Y}_{\cdot }^{k+1}\in E_{k+1}\}]&= \sum \limits _{y,z,v\in \bar{C}^{c}}Q_{3}[F_{k}\cap \left\{ \mathcal{L }_{k}=y\right\} ]q_{y}(v,z)P_{v}[F]\\&= Q_{3}[F_{k}]P_{\sigma }[F]=Q_{3}[F_{k}]P_{\sigma }[Y_{\cdot \wedge L_{1}}\in E_{k+1}], \end{aligned}$$

where we have used (7.6) and the definition of \(F\). But applying this recursively we get that for all \(k\ge 1\) and measurable \(E_{1},\ldots ,E_{k+1}\subset \Gamma (\mathbb{T }_{N})\)

$$\begin{aligned} Q_{3}[\bar{Y}_{\cdot }^{i}\in E_{i},1\le i\le k+1]= \prod _{i=1}^{k+1} P_{\sigma }[Y_{\cdot \wedge L_{1}}\in E_{i}], \end{aligned}$$

and thus the \(\bar{Y}_{\cdot }^{i},i\ge 1,\) are i.i.d. with the same law as \(Y_{\cdot \wedge L_{1}}\) under \(P_{\sigma }\).

We can now extend the space \((\Omega _{3},\mathcal{A }_{3},Q_{3})\) by independently “appending a piece” with law \(P_{y}[Y_{\cdot \wedge t^{\star }}\in dw|H_{\bar{C}}>t^{\star }]\) to each \(\bar{Y}_{\cdot }^{i}\), conditionally on the event \(\{\bar{Y}_{\infty }^{i}=y\}\) for every \(y\in \partial _{e}\bar{C}\), to obtain i.i.d. processes \(\tilde{Y}_{\cdot }^{1},\tilde{Y}_{\cdot }^{2},\ldots \) with law \(P_{\sigma }[Y_{\cdot \wedge U_{1}}\in dw]\) such that \(\tilde{Y}_{\cdot }^{i}(0,\infty )\cap \bar{C}=\bar{Y}_{\cdot }^{i} (0,\infty )\cap \bar{C}\) almost surely. Then (7.2) is satisfied by (7.12) and since \(\hat{Y}_{\cdot }^{i}(0,\infty )\cap \bar{C}=Y(U_{i-1},U_{i})\cap \bar{C}\). This completes the proof of Lemma 7.1. \(\square \)

The next step is to relate the stopping times \(U_{k}\) to deterministic times.

Lemma 7.4

(\(d\ge 3\), \(N\ge 3\)) If \(s\ge c(\varepsilon )\), \(u\ge s^{-c(\varepsilon )}\), \(\frac{1}{2}\ge \delta \ge cs^{-c(\varepsilon )}\) and \(n\le s^{c(\varepsilon )}\) then

$$\begin{aligned} P_{\sigma }[U_{[u(1+\delta )n\,\mathrm{cap}(A)]}\le uN^{d}]&\le ce^{-s^{c(\varepsilon )}}\quad \text{ and} \end{aligned}$$
(7.16)
$$\begin{aligned} P_{\sigma }[U_{[u(1-\delta )n\,\mathrm{cap}(A)]}\ge uN^{d}]&\le ce^{-s^{c(\varepsilon )}}. \end{aligned}$$
(7.17)

Proof

Note that

$$\begin{aligned} U_{k}=\sum \limits _{i=0}^{k-1}H_{\bar{A}}\circ \theta _{U_{i}}+\sum \limits _{i=1}^{k}U\circ \theta _{R_{i}}\quad \text{ for} \text{ all} \,\,k\ge 0. \end{aligned}$$
(7.18)

Let \(k^{+}=[u(1+\delta )n\,\mathrm{cap}(A)]\) and \(k^{-}=[u(1-\delta )n\,\mathrm{cap}(A)]\). By (7.18) both (7.16) and (7.17) follow from (note that \(\delta uN^c,\delta ^{2}un\text{ cap}(A) \mathop {\ge }\limits ^{(2.7)}s^{c(\varepsilon )}\) if \(u,\delta \ge s^{-c(\varepsilon )}\))

$$\begin{aligned} P_{\sigma }\Big [\sum \limits _{i=0}^{k^{+}-1}H_{\bar{A}}\circ \theta _{U_{i}}\le uN^{d}\Big ]&\le ce^{-c\delta ^{2}un\text{ cap}(A)},\end{aligned}$$
(7.19)
$$\begin{aligned} P_{\sigma }\Big [\sum \limits _{i=0}^{k^{-}-1}H_{\bar{A}}\circ \theta _{U_{i}}\ge u(1-\frac{\delta }{2})N^{d}\Big ]&\le e^{-c\delta ^{2}un\text{ cap}(A)},\end{aligned}$$
(7.20)
$$\begin{aligned} P_{\sigma }\Big [\sum \limits _{i=1}^{k^{-}}U\circ \theta _{R_{i}}\ge \frac{\delta }{2}uN^{d}\Big ]&\le ce^{-c\delta uN^c}. \end{aligned}$$
(7.21)

One shows (7.19), (7.20) and (7.21) using large deviations bounds. Since the proofs are very similar to those in Lemma 4.3 in [25] we omit the details. Let us simply state that to prove (7.19) one estimates the small exponential moments of \(\delta \frac{\sum _{i=0}^{k^{+}-1}H_{\bar{A}} \circ \theta _{U_{i}}}{\inf _{x\notin \bar{C}}E_{x}[H_{\bar{A}}]}\) and to prove (7.20) one estimates the small exponential moments of \(\delta \frac{\sum _{i=0}^{k^{-}-1}H_{\bar{A}}\circ \theta _{U_{i}}}{\sup _{x\in \mathbb{T }_{N}}E_{x}[H_{\bar{A}}]}.\) This is possible because by the strong Markov property \(E_{\sigma }[\exp (\lambda \sum _{i=0}^{k}H_{\bar{A}}\circ \theta _{U_{i}})]\le (\sup _{x\notin \bar{C}}E_{x}[\exp (\lambda H_{\bar{A}})])^{k}\) for all \(\lambda \in \mathbb{R }\), and by elementary bounds on the function \(x\rightarrow e^{x}\) and Khasminskii’s lemma (Lemma 3.8 in [25]) we have

$$\begin{aligned}&\sup _{x\notin \bar{C}}E_{x}[\exp (\lambda H_{\bar{A}})]\le 1+\lambda \inf _{x\notin \bar{C}}E_{x}[H_{\bar{A}}]+c\lambda ^{2}(\sup _{x\in \mathbb{T }_{N}}E_{x}[H_{\bar{A}}])^{2}\quad \text{ for}\,\lambda <0\nonumber \\\end{aligned}$$
(7.22)
$$\begin{aligned}&\text{ and}\quad \sup _{x\notin \bar{C}}E_{x}[\exp (\lambda H_{\bar{A}})]\le \sum _{m\ge 0}\lambda ^{m}(\sup _{x\in \mathbb{T }_{N}}E_{x}[H_{\bar{A}}])^{m}\quad \text{ for}\,\lambda >0. \end{aligned}$$
(7.23)

To show (7.19) one sets \(\lambda =-\frac{c\delta }{\inf _{x\notin \bar{C}}E_{x}[H_{\bar{A}}]}\) in (7.22), for a small enough constant \(c\), uses (6.6) to show that the term involving \(\lambda ^{2}\) is at most \(c\delta ^{2}\) (for a small enough constant \(c\)) and (6.4) and (6.6) to show that \(\frac{N^{d}}{\inf _{x\notin \bar{C}}E_{x}[H_{\bar{A}}]}\le (1+c\delta )n\text{ cap}(A)\), for any small constant \(c\), as long as we require \(\delta \ge c(\varepsilon )s^{-c(\varepsilon )}\). To show (7.20) one sets \(\lambda =\frac{c\delta }{\sup _{x\in \mathbb{T }_{N}}E_{x}[H_{\bar{A}}]}>0\) in (7.23), for a small enough constant \(c\), and uses (6.5) and (6.6) to show that \(\frac{N^{d}}{\sup _{x\in \mathbb{T }_{N}}E_{x}[H_{\bar{A}}]}\ge (1-c\delta )n\text{ cap}(A)\), for any small constant \(c\), as long as we require \(\delta \ge c(\varepsilon )s^{-c(\varepsilon )}\). (Note that (6.5) and (6.6) hold because we require \(n\le s^{c(\varepsilon )}\).)
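To spell out the exponential Chebyshev step behind (7.19) (a routine computation, included only for the reader's orientation): since \(\lambda <0\),

$$\begin{aligned} P_{\sigma }\Big [\sum \limits _{i=0}^{k^{+}-1}H_{\bar{A}}\circ \theta _{U_{i}}\le uN^{d}\Big ]\le e^{-\lambda uN^{d}}E_{\sigma }\Big [\exp \Big (\lambda \sum \limits _{i=0}^{k^{+}-1}H_{\bar{A}}\circ \theta _{U_{i}}\Big )\Big ]\le e^{-\lambda uN^{d}}\Big (\sup _{x\notin \bar{C}}E_{x}[\exp (\lambda H_{\bar{A}})]\Big )^{k^{+}}, \end{aligned}$$

and inserting (7.22) with the above choice of \(\lambda \), together with \(1+x\le e^{x}\), bounds the right-hand side by \(\exp (-\lambda uN^{d}+k^{+}\lambda \inf _{x\notin \bar{C}}E_{x}[H_{\bar{A}}]+ck^{+}\delta ^{2})\), which the estimates just described turn into the bound \(ce^{-c\delta ^{2}un\,\mathrm{cap}(A)}\) of (7.19).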

To prove (7.21) one estimates \(E_{\sigma }[\exp (\lambda \sum _{i=1}^{k^{-}}U\circ \theta _{R_{i}})]\) for \(\lambda =(t^{\star })^{-1}\), first by similarly bounding it above by \((\sup _{x\in \bar{C}}E_{x}[\exp (\lambda U)])^{k^{-}}\). By noting that if \(U\le T_{\bar{D}}+t^{\star }\) does not hold, then \(H_{\bar{C}}\circ \theta _{T_{\bar{D}}}\le t^{\star }\) and \(U\le U\circ \theta _{H_{\bar{C}}}\circ \theta _{T_{\bar{D}}}+T_{\bar{D}}+t^{\star }\) (recall (5.4) and (6.12)), we obtain the inequality

$$\begin{aligned}&\sup _{x\in \bar{C}}E_{x}[\exp (\lambda U)]\nonumber \\&\quad \le \sup _{x\in \bar{C}}E_{x}[\exp (\lambda (T_{\bar{D}}+t^{\star }))]\left(1+\sup _{x\in \bar{C}}E_{x}[\exp (\lambda U)]\sup _{x\notin \bar{D}}P_{x}[H_{\bar{C}}<U]\right).\nonumber \\ \end{aligned}$$
(7.24)

Using once again Khasminskii’s lemma and the elementary \(\sup _{x\in \mathbb{T }_{N}}E_{x}[T_{\bar{D}}]=\sup _{x\in \mathbb{T }_{N}}E_{x}[T_{D}]\le cs^{2(1-\frac{\varepsilon }{8})}\le ct^{\star }\) (recall that \(D\) has radius \(s^{1-\frac{\varepsilon }{8}}\), \(s\le N\) and (5.4)) one obtains \(\sup _{x\in \bar{C}}E_{x}[\exp (\lambda (T_{\bar{D}}+t^{\star }))]\le e^{c\lambda t^{\star }}=c\), since \(\lambda t^{\star }=1\). Using this inequality together with (6.21) in (7.24), and rearranging terms, one obtains that \(\sup _{x\in \bar{C}}E_{x}[\exp (\lambda U)]\le e^{c\lambda t^{\star }}\) (provided \(s\ge c(\varepsilon )\)). To prove (7.21) using the exponential Chebyshev inequality one must then check that \(ct^{\star }k^{-} - \frac{\delta }{2}uN^{d} \le -cu\delta N^d\) for \(s\ge c(\varepsilon )\), which follows by noting that \(\text{ cap}(A)\le N^{(1-\varepsilon )(d-2)}\) (see (2.7)), using (5.4) and requiring \(\delta \ge s^{-c(\varepsilon )}\) and \(n\le s^{c(\varepsilon )}\) for small enough exponents \(c(\varepsilon )\). \(\square \)
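To make the rearrangement at the end of the above proof explicit, abbreviate \(a=\sup _{x\in \bar{C}}E_{x}[\exp (\lambda U)]\) and \(p=\sup _{x\notin \bar{D}}P_{x}[H_{\bar{C}}<U]\); then (7.24) and the bound \(\sup _{x\in \bar{C}}E_{x}[\exp (\lambda (T_{\bar{D}}+t^{\star }))]\le c\) combine to

$$\begin{aligned} a\le c(1+ap),\quad \text{ so} \quad a\le \frac{c}{1-cp}\le c^{\prime }=e^{c^{\prime }\lambda t^{\star }}\quad \text{ when} \,cp\le \frac{1}{2}, \end{aligned}$$

which by (6.21) holds once \(s\ge c(\varepsilon )\) (recall \(\lambda t^{\star }=1\); to justify dividing by \(1-cp\) one may first run the same argument with \(U\) replaced by \(U\wedge m\) and let \(m\rightarrow \infty \)).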

We can now combine Lemma 7.1 and Lemma 7.4 to construct a coupling of a random walk \(Z_{\cdot }\) with law \(P\) and a sequence of i.i.d. excursions with law \(P_{\sigma }[Y_{\cdot \wedge U_{1}}\in dw]\) such that, roughly speaking, \(Z(0,uN^{d})\cap \bar{A}\) coincides with the traces of the i.i.d. excursions.

Proposition 7.5

(\(d\ge 3\), \(N\ge 3\)) If \(s\ge c(\varepsilon )\), \(u\ge s^{-c(\varepsilon )}\), \(\frac{1}{2}\ge \delta \ge cs^{-c(\varepsilon )}\) and \(n\le s^{c(\varepsilon )}\), then we can construct a coupling \((\Omega _{4},\mathcal A _{4},Q_{4})\) of a process \(Z_{\cdot }\) with law \(P\) and an i.i.d. sequence \(\tilde{Y}_{\cdot }^{1},\tilde{Y}_{\cdot }^{2},\ldots \) with law \(P_{\sigma }[Y_{\cdot \wedge U_{1}}\in dw]\) such that

$$\begin{aligned}&Q_{4}\left[\bigcup \limits _{i=2}^{[u(1-\delta )n\,\mathrm{cap}(A)]}\tilde{Y}^{i}(0,\infty )\cap \bar{C}\subset Z(0,uN^{d})\cap \bar{C}\subset \bigcup \limits _{i=1}^{[u(1+\delta )n\,\mathrm{cap}(A)]}\tilde{Y}^{i}(0,\infty )\cap \bar{C}\right]\nonumber \\&\qquad \qquad \qquad \ge 1-c(\varepsilon )ue^{-cs^{c(\varepsilon )}}. \end{aligned}$$
(7.25)

(Note that the first union is over \(i\ge 2\), see the remark after the proof.)

Proof

We first use Lemma 7.1 to construct the space \((\Omega _{3},\mathcal A _{3},Q_{3})\). We will now extend it to get \((\Omega _{4},\mathcal A _{4},Q_{4})\). By (2.4) (with \(\lambda =N^{-2}t^{\star }\mathop {=}\limits ^{(5.4)}N^{\frac{\varepsilon }{100}}\)) and a standard coupling argument we can construct a process \(Z_{\cdot }\) with law \(P\) such that \(Z_{\cdot }\) agrees with \(Y_{t^{\star }+\cdot }\) with probability at least \(1-ce^{-N^{c(\varepsilon )}}\), and in particular

$$\begin{aligned} Q_{4}[Z(0,uN^{d})=Y(t^{\star },uN^{d}+t^{\star })] \ge 1-ce^{-cN^{c(\varepsilon )}}. \end{aligned}$$
(7.26)

Now letting \(k^{-}=[u(1-\delta )n\text{ cap}(A)]\) and \(k^{+}=[u(1+\delta )n\text{ cap}(A)]\) we have

$$\begin{aligned} Q_{4}[Y(U_{1},U_{k^{-}})\subset Y(t^{\star },uN^{d}+t^{\star })\subset Y(0,U_{k^{+}})]\ge 1-ce^{-s^{c}}\!, \end{aligned}$$
(7.27)

since \(U_{1}\mathop {\ge }\limits ^{(7.1)} U\circ \theta _{R_{1}}\mathop {\ge }\limits ^{(5.4)}t^{\star }\), \(Q_{4}[U_{k^{-}}\le uN^{d}]\ge 1-ce^{-s^{c}}\) by (7.17), and \(uN^{d}+t^{\star }\overset{u,\delta \ge N^{-c}}{\le }N^d u(1+\frac{\delta }{4})<U_{[u(1+\frac{\delta }{4})^{2}n\,\mathrm{cap}(A)]}\le U_{k^{+}}\) with probability at least \(1-ce^{-s^{c}}\) by (7.16) (applied with \(u(1+\frac{\delta }{4})\) in place of \(u\) and \(\frac{\delta }{4}\) in place of \(\delta \)). Finally by (7.2) we have

$$\begin{aligned}&Q_{4}\left[\bigcup \limits ^{k^{-}}_{i=2} \,\tilde{Y}^{i}(0,\infty )\cap \bar{C}=Y(U_{1},U_{k^{-}})\cap \bar{C},Y(0,U_{k^{+}})\cap \bar{C}= \bigcup \limits ^{k^{+}}_{i=1}\, \tilde{Y}^{i}(0,\infty )\cap \bar{C}\right]\\&\qquad \qquad \ge 1-cue^{-cs^{c(\varepsilon )}}\!, \end{aligned}$$

where we have used that \(k^{+}\le cun\text{ cap}(A)\mathop {\le }\limits ^{(2.7)} cus^{c(\varepsilon )}s^{(1-\varepsilon )(d-2)}\le cus^{c(\varepsilon )}\). Combining this with (7.26) and (7.27) now gives (7.25) (using also that \(u\ge s^{-c(\varepsilon )}\)). \(\square \)

Note that to ensure that also the first excursion \(\tilde{Y}_{\cdot }^{1}\) has law \(P_{\sigma }\), we have generated the law \(P\) of \(Z\) by modifying \(Y_{t^{\star }+\cdot }\) under \(P_{\sigma }\), and getting the i.i.d. excursions \(\tilde{Y}_{\cdot }^{i},i\ge 1,\) from \(Y_{\cdot }\) via Lemma 7.1. The first union in (7.25) is over \(i\ge 2\), since with this construction \(\tilde{Y}_{\cdot }^{1}\) corresponds to a piece of \(Y_{\cdot }\) that is not fully included in \(Z_{\cdot }\).

We are ready to finish the proof of Proposition 5.1 using the previous Proposition 7.5.

Proof of Proposition 5.1

We apply Proposition 7.5 with \(\frac{\delta }{4}\ge cs^{-c(\varepsilon )}\) in place of \(\delta \) to construct the space \((\Omega _{4},\mathcal A _{4},Q_{4})\) which we will extend to get \((\Omega _{2},\mathcal A _{2},Q_{2})\). First of all we rename the process \(Z_{\cdot }\) to \(Y_{\cdot }\), so that we have from (7.25) that

$$\begin{aligned}&Q_{4}\left[\bigcup \limits _{i=2}^{[u(1-\frac{\delta }{4})n\,\mathrm{cap}(A)]}\tilde{Y}^{i}(0,\infty )\cap \bar{C}\subset Y(0,uN^{d})\cap \bar{C}\subset \bigcup \limits _{i=1}^{[u(1+\frac{\delta }{4})n\,\mathrm{cap}(A)]}\tilde{Y}^{i}(0,\infty )\cap \bar{C}\right]\nonumber \\&\qquad \qquad \qquad \ge 1-c(\varepsilon )ue^{-cs^{c(\varepsilon )}}, \end{aligned}$$
(7.28)

where \(\tilde{Y}_{\cdot }^{1},\tilde{Y}_{\cdot }^{2},\ldots \) are i.i.d. with law \(P_{\sigma }[Y_{\cdot \wedge U_{1}}\in dw]\). We now add independent Poisson random variables \(J_{1}\) and \(J_{2}\) to the space, where \(J_{1}\) has parameter \(u(1-\frac{\delta }{2})n\text{ cap}(A)\), \(J_{2}\) has parameter \(u\delta n\text{ cap}(A)\), and define the following point processes on \(\Gamma (\mathbb{T }_{N})\)

$$\begin{aligned} \mu _{1}^{^{\prime }}=\sum \limits _{i=2}^{J_{1}+1}\delta _{\tilde{Y}_{H_{\bar{A}}+\cdot }^{i}}\quad \text{ and}\quad \mu _{2}^{^{\prime }}=1_{\{J_{2}\ne 0\}}\delta _{\tilde{Y}_{H_{\bar{A}}+\cdot }^{1}}+\sum \limits _{i=J_{1}+2}^{J_{1}+J_{2}}\delta _{\tilde{Y}_{H_{\bar{A}}+\cdot }^{i}}. \end{aligned}$$
(7.29)

Then \(\mu _{1}^{^{\prime }}\) and \(\mu _{2}^{^{\prime }}\) are independent Poisson point processes such that \(\mu _{1}^{^{\prime }}\) has intensity \(u(1-\frac{\delta }{2})n\text{ cap}(A)P_{q}[Y_{\cdot \wedge U}\in dw]\) and \(\mu _{2}^{^{\prime }}\) has intensity \(u\delta n\text{ cap}(A)P_{q}[Y_{\cdot \wedge U}\in dw]\), where \(q\) denotes the measure defined by \(q(x)=P_{\sigma }[Y_{H_{\bar{A}}}=x]\). By a standard large deviations bound one can show that

$$\begin{aligned} Q_{4}\Big [J_{1}+1&\le \Big [u\Big (1-\frac{\delta }{4}\Big )n \mathrm{cap}(A) \Big ]\le \Big [u\Big (1+ \frac{\delta }{4} \Big )n\mathrm{cap}(A)\Big ]\le J_{1}+J_{2}\Big ]\\&\ge 1-ce^{-cun\text{ cap}(A)\delta ^{2}}\!, \end{aligned}$$

so that from \(u\text{ cap}(A)\delta ^{2} \mathop {\ge }\limits ^{(2.7),u,\delta \ge s^{-(1-\varepsilon )\frac{d-2}{8}}}s^{(1-\varepsilon )\frac{d-2}{2}}\), (7.28) and (7.29) it follows that

$$\begin{aligned} Q_{4}[\mathcal{I }(\mu _{1}^{^{\prime }})\cap \bar{A}\subset Y(0,uN^{d})\cap \bar{A}\subset \mathcal{I }(\mu _{1}^{^{\prime }}+\mu _{2}^{^{\prime }})\cap \bar{A}]\ge 1-cue^{-cs^{c(\varepsilon )}}\!.\quad \end{aligned}$$
(7.30)

By (6.19) we have the following inequality involving the intensity of \(\mu _{1}^{^{\prime }}\)

$$\begin{aligned} u(1-\delta )\kappa _{1}\overset{\delta \ge cs^{-c(\varepsilon )}}{\le }u\Big (1- \frac{\delta }{2}\Big )n\;\mathrm{cap}(A)P_{q}[Y_{\cdot \wedge U}\in dw]\overset{\delta \ge cs^{-c(\varepsilon )}}{\le }u\Big (1- \frac{\delta }{3}\Big )\kappa _{1}.\nonumber \\ \end{aligned}$$
(7.31)

Using the lower bound we can thus construct, by means of a standard thinning procedure, a Poisson point process \(\mu _{1}\) of intensity \(u(1-\delta )\kappa _{1}\), such that \(\mu _{1}\le \mu _{1}^{^{\prime }}\) and \(\mu _{1}\) and \(\mu _{1}^{^{\prime }}-\mu _{1}\) are independent (by placing each point \(x \in \Gamma (\mathbb{T }_{N})\) of \(\mu _1^{\prime }\) in \(\mu _1\) independently with probability \(\frac{u(1-\delta )\kappa _1(x)}{u(1-\delta /2)n\mathrm{cap}(A)P_{q}[Y_{\cdot \wedge U} = x]} \in [0,1]\), where we extend the space with the appropriate Bernoulli random variables). Using the upper bound we can furthermore thicken \(\mu _{1}^{^{\prime }}-\mu _{1}\) to get a Poisson point process \(\nu \) (independent of \(\mu _{1}\)) of intensity \(u\frac{2}{3}\delta \kappa _{1}\) such that \(\mu _{1}^{^{\prime }}\le \mu _{1}+\nu \) (by extending the space with an independent Poisson point process of intensity \(u(1-\delta /3)\kappa _1 - u(1-\delta /2)n\mathrm{cap}(A)P_q[Y_{\cdot \wedge U} \in dw] \ge 0\) and adding this process to \(\mu _1^{\prime }-\mu _1\) to form \(\nu \)).
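The thinning and thickening operations used above are standard manipulations of Poisson processes. As a minimal illustration, the following Python sketch performs both on a toy state space of three “trajectories”, with made-up intensity vectors chosen only so that the required pointwise inequalities hold (all names and numbers are placeholders, not the objects of the proof).

```python
import numpy as np

rng = np.random.default_rng(1)

# Poisson point process on {0, 1, 2}: counts[i] ~ Poisson(intensity[i])
lam_big = np.array([2.0, 1.5, 1.0])    # role of u(1 - delta/2) n cap(A) P_q
lam_small = np.array([1.6, 1.2, 0.9])  # role of u(1 - delta) kappa_1 (<= lam_big)
mu1_prime = rng.poisson(lam_big)

# thinning: keep each point independently with probability lam_small/lam_big;
# the kept and discarded parts are independent Poisson processes
mu1 = rng.binomial(mu1_prime, lam_small / lam_big)  # intensity lam_small
residual = mu1_prime - mu1                          # intensity lam_big - lam_small

# thickening: superpose an independent Poisson process so that the sum
# dominates the residual and has a prescribed intensity lam_target
lam_target = np.array([0.7, 0.5, 0.4])              # >= lam_big - lam_small
extra = rng.poisson(lam_target - (lam_big - lam_small))
nu = residual + extra                               # Poisson, intensity lam_target
```

The two facts being used are that an independent Bernoulli thinning of a Poisson process splits it into independent Poisson processes, and that superposing independent Poisson processes adds their intensities.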

Once again by (6.19) we also have the following inequality involving the intensity of \(\mu _{2}^{^{\prime }}\)

$$\begin{aligned} u\delta n\;\mathrm{cap}(A)P_{q}[Y_{\cdot \wedge U}\in dw]\overset{s\ge c(\varepsilon )}{\le }u\, \frac{4}{3}\, \delta \kappa _{1}. \end{aligned}$$

Thus we can (as above) thicken \(\mu _{2}^{^{\prime }}\) to get a Poisson point process \(\eta \) of intensity \(u\frac{4}{3}\delta \kappa _{1}\), such that \(\mu _{2}^{^{\prime }}\le \eta \) and \(\mu _{1},\nu ,\eta \) are independent. We then define \(\mu _{2}=\nu +\eta \), and see that \(\mu _{2}\) is a Poisson point process of intensity \(u2\delta \kappa _{1}\) which is independent from \(\mu _{1}\), and \(\mu _{1}\le \mu _{1}^{^{\prime }}\le \mu _{1}^{^{\prime }}+\mu _{2}^{^{\prime }}\le \mu _{1}+\mu _{2}\). Thus it follows from (7.30) that the probability of the event in (5.6) is at least \(1-cue^{-s^{c(\varepsilon )}}\) and the proof of Proposition 5.1 is complete. \(\square \)

To prove Theorem 3.2 it now remains to show Proposition 5.2 and Proposition 5.4.

8 From the torus to \(\mathbb{Z }^{d}\), and decoupling boxes

In this section we prove Proposition 5.2, which dominates a Poisson point process \(\nu \) of intensity a multiple of \(\kappa _{1}\) (and whose “excursions” therefore roughly speaking “feel that they are in the torus”, and may visit several of the boxes \(A_{1},\ldots ,A_{n}\), see (5.4) and (5.5)), from above and below by Poisson point processes whose intensities are multiples of \(\kappa _{2}\) (and whose “excursions” thus, conditionally on their starting point, “behave” like random walk in \(\mathbb{Z }^{d}\) stopped upon leaving a box, and visit only a single box, see (5.7)). First we will (roughly speaking) take all excursions in \(\nu \) that never return to \(\bar{A}\) after leaving \(\bar{B}\) (the great majority of all excursions), truncate them upon leaving \(\bar{B}\), and collect them into a Poisson point process whose intensity can be bounded from above and below by multiples of \(\kappa _{2}\) (see Lemma 8.1). This will allow us to dominate this Poisson point process, from above and below, by Poisson point processes with intensities a multiple of \(\kappa _{2}\). We will then, in Lemma 8.2, use an argument from the proof of Theorem 2.1 in [24] to dominate the excursions that do return to \(\bar{A}\) after leaving \(\bar{B}\) by a Poisson point process with intensity a small multiple of \(\kappa _{2}\). That is, we will, essentially speaking, decouple the “successive visits to \(\bar{A}\) (after leaving \(\bar{B}\))” of a single excursion by dominating the hitting distribution on \(\bar{A}\), when starting outside of \(\bar{B}\), by a multiple of the measure \(e\) from (5.5) (see Lemma 8.3). We then collect a number of the successive visits of all the excursions (with high probability all the successive visits of all the excursions, and in addition a number of extra independent visits) into a Poisson point process of intensity a small multiple of \(\kappa _{2}\). This is done in Lemma 8.2. Though the number of excursions that make several “successive returns” to \(\bar{A}\) is small, dominating them with high enough probability (namely, stretched exponential in the separation \(s\), see the statement of Proposition 5.2 and (8.4)), so that what happens in the individual boxes \(A_{1},A_{2},\ldots ,A_{n},\) is independent, is not straightforward. Since Lemma 8.2 achieves this, it should be considered the heart of the proof of Proposition 5.2.

We recall the standing assumption (5.1). Define for \(w\in \Gamma (\mathbb{T }_{N})\) the successive returns \(\hat{R}_{k}=\hat{R}_{k}(w)\) to \(\bar{A}\) and departures \(\hat{D}_{k}=\hat{D}_{k}(w)\) from \(\bar{B}\) as follows

$$\begin{aligned} \hat{R}_{1}=H_{\bar{A}},\,\,\hat{R}_{k}=H_{\bar{A}}\circ \theta _{\hat{D}_{k-1}}+\hat{D}_{k-1},k\ge 2,\,\,\hat{D}_{k}=T_{\bar{B}}\circ \theta _{\hat{R}_{k}}+\hat{R}_{k},k\ge 1.\nonumber \\ \end{aligned}$$
(8.1)

Note that \(\hat{R}_{k}\) should not be confused with the \(R_{k}\) used in Sect. 7 (see (7.1)). To extract the “successive visits to \(\bar{A}\)” of an excursion we furthermore define for each \(i\ge 1\) the map \(\phi _{i}\) from \(\Gamma (\mathbb{T }_{N})\) into \(\Gamma (\mathbb{T }_{N})^{i}\) by

$$\begin{aligned} (\phi _{i}(w))_{j}\!=\!w((\hat{R}_{j}\!+\!\cdot )\wedge \hat{D}_{j})&\quad \text{ for} \,j\!=\!1,\ldots ,i,w\!\in \!\{\hat{R}_{i}\!<\!U\!<\!\hat{R}_{i+1}\}\!\subset \!\Gamma ( \mathbb{T }_{N}),i\!\ge \!1.\nonumber \\ \end{aligned}$$
(8.2)

For each \(i\ge 1\) we will apply this map to the Poisson point process \(1_{\{\hat{R}_{i}<U<\hat{R}_{i+1}\}}\nu \) to get a Poisson point process \(\mu _{i}\) of intensity \(u\kappa _{1}^{i}\), where

$$\begin{aligned} \kappa _{1}^{i}=\phi _{i}\circ (1_{\{\hat{R}_{i}<U<\hat{R}_{i+1}\}} \kappa _{1}),\!i\ge 1. \end{aligned}$$
(8.3)

To dominate the \(\mu _{1}\), the first of these Poisson point processes, which contains most excursions, from above and below by Poisson point processes of intensities that are multiples of \(\kappa _{2}\) we will use the following inequality.

Lemma 8.1

(\(d\ge 3\), \(N\ge 3\)) If \(s\ge c(\varepsilon )\) and \(n\le s^{c(\varepsilon )}\) we have \((1-cs^{-c(\varepsilon )})\kappa _{2}\le \kappa _{1}^{1}\le \kappa _{2}.\)

We postpone the proof of Lemma 8.1 until after the proof of Proposition 5.2. The Poisson point processes \(\mu _{2},\mu _{3},\ldots \) will contain the successive visits to \(\bar{A}\) of the excursions that make \(2,3,\ldots \) such visits, respectively. There will only be a few of these and using the following lemma we will be able to dominate them by a Poisson point process \(\theta \) of intensity a small multiple of \(\kappa _{2}\).

Lemma 8.2

\((d\ge 3, N\ge 3)\) Let \((\Omega ,\mathcal A ,Q)\) be a probability space with independent Poisson point processes \(\mu _{2},\mu _{3},\ldots \) such that \(\mu _{i}\) has intensity \(u\kappa _{1}^{i}\), where \(u\ge 0\). Then if \(1 \ge \delta \ge s^{-c(\varepsilon )}\), \(n\le s^{c(\varepsilon )}\) and \(s\ge c(\varepsilon )\) we can construct a space \((\Omega ^{\prime },\mathcal A ^{\prime },Q^{\prime })\) and, on the product space \(\Omega \times \Omega ^{\prime }\), a \(\sigma (\mu _{i},i\ge 2)\times \mathcal A ^{\prime }-\)measurable Poisson point process \(\theta \) of intensity \(u\delta \kappa _{2}\) such that (recalling the notation from (2.16))

$$\begin{aligned} Q\otimes Q^{\prime }[\cup _{i\ge 2}\mathcal{I }(\mu _{i})\subset \mathcal{I }(\theta )]\ge 1-ce^{-cu\delta \,\mathrm{cap}(A)}. \end{aligned}$$
(8.4)

We postpone the proof of Lemma 8.2 until later, and instead use it together with Lemma 8.1 to prove Proposition 5.2.

Proof of Proposition 5.2

We let

$$\begin{aligned} \mu _{i}=\phi _{i}(1_{\{\hat{R}_{i}<U<\hat{R}_{i+1}\}}\nu ),i\ge 1, \text{ so} \text{ that}\,\,\cup _{i\ge 1}\mathcal{I }(\mu _{i})\cap \bar{A} \mathop {=}\limits ^{(8.1),(8.2)}\mathcal{I }(\nu )\cap \bar{A}.\quad \end{aligned}$$
(8.5)

Since the sets \(\{\hat{R}_{i}<U<\hat{R}_{i+1}\},i\ge 1,\) are disjoint, the \(\mu _{i},i\ge 1,\) are independent Poisson point processes on the respective spaces \(\Gamma (\mathbb{T }_{N})^{i},i\ge 1\), and by (8.3) they have respective intensities \(u\kappa _{1}^{i},i\ge 1\). By Lemma 8.1 it follows, since we require \(\delta \ge cs^{-c(\varepsilon )}\), that

$$\begin{aligned} u(1-\delta )\kappa _{2}\le u\kappa _{1}^{1}\le u\kappa _{2}. \end{aligned}$$
(8.6)

Now similarly to how we used (7.31) to construct the processes \(\mu _{1}\) and \(\nu \) from \(\mu _{1}^{\prime }\), we now (extending our space appropriately) use (8.6) to construct processes \(\nu _{1}\) and \(\rho \), such that \(\nu _{1},\rho ,\mu _{i},i\ge 2,\) are independent, \(\nu _{1}\) has intensity \(u(1-\delta )\kappa _{2}\), \(\rho \) has intensity \(u\delta \kappa _{2}\) and

$$\begin{aligned} \nu _{1}\le \mu _{1}\le \nu _{1}+\rho \text{ almost} \text{ surely.} \end{aligned}$$
(8.7)

Thus (5.8) holds, because \(\mathcal{I }(\nu _{1})\cap \bar{A}\mathop {\subset }\limits ^{(8.7)}\mathcal{I }(\mu _{1})\cap \bar{A}\overset{(8.5)}{\subset }\mathcal{I }(\nu )\cap \bar{A}\), and since \(\nu _{1}\) has intensity \(u(1-\delta )\kappa _{2}\) it now suffices to construct \(\nu _{2}\) appropriately.

To this end we apply Lemma 8.2, once again extending the space, to get a Poisson point process \(\theta \) of intensity \(u\delta \kappa _{2}\) such that \(\nu _{1},\rho \) and \(\theta \) are independent and

$$\begin{aligned} Q[\cup _{i\ge 2}\mathcal{I }(\mu _{i})\subset \mathcal{I }(\theta )]\ge 1-ce^{-cu\delta \,\mathrm{cap}(A)}\ge 1-ce^{-cs^{c(\varepsilon )}}\!, \end{aligned}$$
(8.8)

where we use that we require \(u,\delta \ge s^{-c(1-\varepsilon )} = s^{-c(\varepsilon )}\) so that \(u\delta \,{\mathrm{cap}}(A)\mathop {\ge }\limits ^{(2.7),(5.2)} u\delta s^{(1-\varepsilon )(d-2)}\ge s^{c(\varepsilon )}\). Now set \(\nu _{2}=\rho +\theta \) and note that \(\nu _{1}\) and \(\nu _{2}\) are independent, \(\nu _{2}\) has intensity \(2u\delta \kappa _{2}\) and because of (8.5), (8.7) and (8.8) we have

$$\begin{aligned} Q[\mathcal{I }(\nu )\cap \bar{A} \subset \mathcal{I }(\nu _{1}+\nu _{2})]\ge 1-ce^{-cs^{c(\varepsilon )}}\!. \end{aligned}$$

Thus the proof of Proposition 5.2 is complete. \(\square \)

The proof of Proposition 5.2 has thus been reduced to Lemma 8.1 and Lemma 8.2. We now prove Lemma 8.1.

Proof of Lemma 8.1

Let \(W\subset \Gamma (\mathbb{T }_{N})\) be measurable. Then \(\kappa _{1}^{1}(W)\!\!\overset{(5.5),(8.3)}{=}P_{e}[Y_{\cdot \wedge T_{\bar{B}}}\in W,\,\,U<\hat{R}_{2}]\) so the upper bound follows directly from (5.7). Furthermore \(\kappa _{1}^{1}(W)\ge \kappa _{2}(W)\inf _{x\in \partial _{e}\bar{B}}P_{x}[H_{\bar{A}}>T_{\bar{D}}]\inf _{x\in \partial _{e}\bar{D}}P_{x}[H_{\bar{A}}>U]\) (recall (6.12)) by the strong Markov property. But (if \(s\ge c(\varepsilon )\)) \(\inf _{x\in \partial _{e}\bar{B}}P_{x}[H_{\bar{A}}>T_{\bar{D}}]=\inf _{x\in \partial _{e}B}P_{x}^{\mathbb{Z }^{d}}[H_{A}>T_{D}]\overset{(2.10)}{\ge }1-cs^{-c(\varepsilon )}\), and by (6.21) we have \(\inf _{x\in \partial _{e}\bar{D}}P_{x}[H_{\bar{A}}>U]\ge 1-c(\varepsilon )s^{-c(\varepsilon )}\), so the lower bound follows. \(\square \)

It thus only remains to prove Lemma 8.2 to complete the proof of Proposition 5.2. For this we will use the following bound on the intensities \(\kappa _{1}^{i}\) of the \(\mu _{i}\) (this corresponds to (2.33) in [24]).

Lemma 8.3

(\(d\ge 3\), \(N\ge 3\)) If \(s\ge c(\varepsilon )\) and \(n\le s^{c(\varepsilon )}\) then for all \(i\ge 2\)

$$\begin{aligned} \kappa _{1}^{i}\le \tilde{\kappa }_{1}^{i} \text{ where} \,\tilde{\kappa }_{1}^{i}(d(w_{1},\ldots ,w_{i}))=s^{-\frac{\varepsilon }{8}(i-1)}\text{ cap}(A)\otimes _{k=1}^{i}P_{\bar{e}}[Y_{\cdot \wedge T_{\bar{B}}}\in dw_{k}],\nonumber \end{aligned}$$
(8.9)

and \(\bar{e}=\frac{e}{n\,\mathrm{cap}(A)}\) denotes the normalisation of the measure \(e\) from (5.5) (see (2.6)).

In the proof of Lemma 8.2 we will use Lemma 8.3 to dominate \(\mu _{i},i\ge 2,\) by Poisson point processes \(\eta _{i}\) of intensity \(u\tilde{\kappa }_{1}^{i}\). Since \(\tilde{\kappa }_{1}^{i}\) is proportional to a product measure the “points” of \(\eta _{i}\) will be vectors of independent excursions with law \(P_{\bar{e}}[Y_{\cdot \wedge T_{\bar{B}}}\in dw]\). Thus we will have “decoupled” the excursions and we will be able to use them (along with additional independent excursions) to construct the Poisson point process \(\theta \). We postpone the proof of Lemma 8.3 until after the proof of Lemma 8.2. In the proof of Lemma 8.2 we will use the following simple lemma about Poisson random variables.

Lemma 8.4

Let \(N\) be a Poisson random variable of intensity \(\lambda >0\), and let \(N_{i},i\ge 2,\) be independent Poisson random variables such that \(N_{i}\) has intensity at most \(\lambda r^{i-1}\). Then

$$\begin{aligned} \mathbb P \Big [\sum \limits _{i\ge 2}iN_{i}\le N\Big ]\ge 1-ce^{-c\lambda },\quad \text{ if} \,\,0<r\le c. \end{aligned}$$
(8.10)

Proof

This follows from the standard Chebyshev bounds \(\mathbb P [N<\frac{\lambda }{2}] \le \mathbb E [e^{-\frac{1}{2}N}]e^{\frac{\lambda }{4}}\le e^{-\frac{\lambda }{8}}\) and

$$\begin{aligned} \mathbb P \Big [\sum \limits _{i\ge 2}iN_{i}\!>\! \frac{\lambda }{2} \Big ]\!\le \! e^{-\frac{\lambda }{2}}\mathbb E [e^{\sum _{i\ge 2}iN_{i}}]\! \le \! e^{-\frac{\lambda }{2}\!+\!\lambda \sum _{i\ge 2}r^{i-1}(e^{i}-1)} \mathop {\le }\limits ^{re\le c<1} e^{-\frac{\lambda }{4}}\quad \text{ for} \text{ all}\,\,\lambda \!>\!0. \end{aligned}$$

\(\square \)
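As a quick numerical sanity check of (8.10) (with made-up values of \(\lambda \) and \(r\), and the sum truncated at a finite index, the neglected terms having negligible mass):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, r, trials = 20.0, 0.05, 10**5

N = rng.poisson(lam, size=trials)
# sum_{i >= 2} i * N_i with N_i ~ Poisson(lam * r^(i-1)), truncated at i = 12
S = sum(i * rng.poisson(lam * r ** (i - 1), size=trials)
        for i in range(2, 13))
print((S <= N).mean())  # close to 1, consistent with 1 - c * exp(-c * lam)
```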

We now prove Lemma 8.2. The proof corresponds roughly to (2.38)–(2.54) in [24].

Proof of Lemma 8.2

If we multiply (8.9) by \(u\) we get for each \(i\ge 2\) an inequality for the intensity measure of \(\mu _{i}\). Because of this inequality we can “thicken” each \(\mu _{i}\), by constructing \((\Omega ^{\prime },\mathcal A ^{\prime },Q^{\prime })\) with the appropriate random variables, to get (on \(\Omega \times \Omega ^{\prime }\)) \(\sigma (\mu _{i},i\ge 2)\times \mathcal A ^{\prime }-\)measurable independent Poisson point processes

$$\begin{aligned} \eta _{i},i\ge 2, \text{ on} \,\Gamma (\mathbb{T }_{N})^{i} \text{ of} \text{ intensities} \,u\tilde{\kappa }_{1}^{i} \text{(respectively),} \text{ such} \text{ that} \,\mu _{i}\le \eta _{i},i\ge 2,\nonumber \\ \end{aligned}$$
(8.11)

(analogously to below (7.31)). We note that if we let \(N_{i}=\eta _{i}(\Gamma (\mathbb{T }_{N})^{i}),i\ge 2,\) then (see (8.9))

$$\begin{aligned} N_{i},i\ge 2, \text{ are} \text{ independent} \text{ and} \text{ Poisson,} \text{ where} \,N_{i} \text{ has} \text{ intensity}\,\,us^{-\frac{\varepsilon }{8}(i-1)}\text{ cap}(A).\nonumber \\ \end{aligned}$$
(8.12)

Now extend \((\Omega ^{\prime },\mathcal A ^{\prime },Q^{\prime })\) to obtain \(\sigma (\mu _{i},i\ge 2)\times \mathcal A ^{\prime }-\)measurable vectors \(v_{j}^{i},i\ge 2,j\ge 1\), such that \(v_{j}^{i},i\ge 2,j\ge 1,N_{i},i\ge 2,\) are independent,

$$\begin{aligned} v_{j}^{i} \text{ has} \text{ law} \,\otimes _{k=1}^{i}P_{\bar{e}}[Y_{\cdot \wedge T_{\bar{B}}}\in dw_{k}] \text{(i.e.} \,u\tilde{\kappa }_{1}^{i} \text{ normalised)} \text{ and}\,\eta _{i}=\sum \limits _{j=1}^{N_{i}}\delta _{v_{j}^{i}},i\ge 2,\nonumber \\ \end{aligned}$$
(8.13)

(conditionally on \(\eta _{i}\), we order the \(N_{i}\) points in the support of \(\eta _{i}\) according to say the time until the first jump of the first of the \(i\) paths that make up a point of \(\eta _{i}\), and let \(v_{1}^{i},\ldots ,v_{N_{i}}^{i},\) be a permutation of these points chosen uniformly at random; we then add i.i.d. vectors to form \(v_{j}^{i},j>N_{i}\)). Define \(\bar{N}=\sum _{i\ge 2}iN_{i}\) and construct on \((\Omega ^{\prime },\mathcal A ^{\prime },Q^{\prime })\) a Poisson random variable \(N\) of intensity \(u\delta n\text{ cap}(A)\), and trajectories \(\tilde{w}_{i},i\ge 1,\) with law \(P_{\bar{e}}[Y_{\cdot \wedge T_{\bar{B}}}\in dw]\), such that \(N,\tilde{w}_{i},i\ge 1,v_{j}^{i},i\ge 2,j\ge 1,N_{i},i\ge 2,\) are independent. Write \(v_{j}^{i}=(w_{j,1}^{i},\ldots ,w_{j,i}^{i})\) and let

$$\begin{aligned} \theta = \left\{ \begin{array}{ll} \sum \limits _{i=2}^{\infty }\sum _{j=1}^{N_{i}}(\delta _{w_{j,1}^{i}}+\cdots +\delta _{w_{j,i}^{i}})+\sum \limits _{i=1}^{N-\bar{N}}\delta _{\tilde{w}_{i}}&\quad \text{ if} \,N\ge \bar{N},\\ \sum \limits _{i=1}^{N}\delta _{\tilde{w}_{i}}&\quad \text{ if} \,N<\bar{N}. \end{array}\right. \end{aligned}$$
(8.14)

The number of points \(N\) of \(\theta \) is a Poisson random variable, and conditionally on \(N\) the points of \(\theta \) are i.i.d. with law \(P_{\bar{e}}[Y_{\cdot \wedge T_{\bar{B}}}\in dw]\) (see (8.13)), so that \(\theta \) is (as claimed) a \(\sigma (\mu _{i},i\ge 2)\times \mathcal A ^{\prime }-\)measurable Poisson point process of intensity \(u\delta n\text{ cap}(A)P_{\bar{e}}[Y_{\cdot \wedge T_{\bar{B}}}\in dw]=u\delta \kappa _{2}\) (see (5.7)).
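Continuing the toy conventions of the earlier sketches (abstract labels in place of excursions, made-up parameters), the repackaging (8.14) can be rendered as follows; the point is that \(\theta \) always has exactly \(N\) points, i.i.d. given \(N\), which is what makes it Poisson with the desired intensity.

```python
import numpy as np

rng = np.random.default_rng(2)

def excursion(rng):
    """Toy stand-in for one excursion with law P_ebar[Y_{. ^ T_Bbar} in dw]."""
    return int(rng.integers(0, 100))

lam, r = 5.0, 0.1  # stand-ins for u delta n cap(A) and delta^{-1} s^{-eps/8}

# eta_i: N_i ~ Poisson(lam * r^(i-1)) points, each a vector of i excursions
# (truncated at i = 7 here; the real construction runs over all i >= 2)
eta = {i: [[excursion(rng) for _ in range(i)]
           for _ in range(rng.poisson(lam * r ** (i - 1)))]
       for i in range(2, 8)}
N_bar = sum(i * len(points) for i, points in eta.items())
N = rng.poisson(lam)

if N >= N_bar:
    # flatten all vector-points and pad with fresh independent excursions
    theta = [w for points in eta.values() for v in points for w in v]
    theta += [excursion(rng) for _ in range(N - N_bar)]
else:
    theta = [excursion(rng) for _ in range(N)]

assert len(theta) == N  # theta has a Poisson(lam) number of i.i.d. points
```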

It remains to show (8.4). We have \(\cup _{i\ge 2}\mathcal{I }(\mu _{i})\subset \cup _{i\ge 2}\mathcal{I }(\eta _{i})\) (by (8.11)) and on the event \(\{\bar{N}\le N\}\) we have \(\cup _{i\ge 2}\mathcal{I }(\eta _{i})\subset \mathcal{I }(\theta )\) by (8.13) and (8.14). Thus by (8.10) with \(\lambda =u\delta n\text{ cap}(A)\) and \(r=\delta ^{-1}s^{-\frac{\varepsilon }{8}}\le c\) (we require \(\delta \ge cs^{-\varepsilon /8}\)) we get

$$\begin{aligned} Q\otimes Q^{\prime }[\cup _{i\ge 2}\mathcal{I }(\mu _{i})\subset \mathcal{I }(\theta )]\ge Q\otimes Q^{\prime }[\bar{N}\le N]\ge 1-ce^{-cu\delta \,\mathrm{cap}(A)}, \end{aligned}$$
(8.15)

(see (8.12) and note that \(us^{-\frac{\varepsilon }{8}(i-1)}\text{ cap}(A)\le \lambda \delta ^{-1}s^{-\frac{\varepsilon }{8}(i-1)}\le \lambda r^{i-1}\)). Thus (8.4) holds. \(\square \)

It remains to prove Lemma 8.3. For the proof we will need the following upper bound on the probability of hitting \(\bar{A}\) in a given point before \(U\), from outside of \(\bar{B}\).

Lemma 8.5

(\(s\ge c(\varepsilon ),n\le s^{c(\varepsilon )}\)) For all \(x\in \partial _{e}\bar{B}\) and \(y\in \partial _{i}\bar{A}\) we have

$$\begin{aligned} h_{x}(y)\le s^{-\frac{\varepsilon }{4}}\bar{e}(y) \text{ where}\,h_{x}(y)=P_{x}[H_{\bar{A}}<U,Y_{H_{\bar{A}}}=y]. \end{aligned}$$
(8.16)

Note that crucially \(s^{-\frac{\varepsilon }{4}}\bar{e}(y)\) does not depend on the starting point \(x\). In the proof of Lemma 8.3, which we now start, this is what will allow us to bound \(\kappa _{1}^{i}\) from above by an intensity that is proportional to a product measure (namely \(\tilde{\kappa }_{1}^{i}\)). The proof of Lemma 8.5 will follow after the proof of Lemma 8.3.

Proof of Lemma 8.3

Fix a \(i\ge 2\). Let \(W_{1},\ldots ,W_{i}\subset \Gamma (\mathbb{T }_{N})\) be measurable. Then

$$\begin{aligned} \kappa _{1}^{i}(W_{1}\times \cdots \times W_{i})&\!\mathop {=}\limits ^{(5.5),(8.3)} P_{e}[\hat{R}_{i}<U<\hat{R}_{i+1},Y_{(\hat{R}_{j}+\cdot ) \wedge \hat{D}_{j}}\in W_{j},1\le j\le i]\\&\le \!\!P_{e}[\hat{R}_{i}<U,Y_{(\hat{R}_{j}+\cdot )\wedge \hat{D}_{j}}\in W_{j},1\le j\le i]\\&\mathop {=}\limits ^{(8.1),(8.17)}_{\mathrm{Markov}} E_{e}[1_{\{\hat{R}_{i-1}<U,Y_{(\hat{R}_{j}+\cdot )\wedge \hat{D}_{j}}\in W_{j},1\le j\le i-1\}}P_{h_{Y_{\hat{D}_{i-1}}}}[Y_{\cdot \wedge T_{\bar{B}}}\in W_{i}]],\\&\mathop {\le }\limits ^{(8.16)} s^{-\frac{\varepsilon }{4}}P_{e}[\hat{R}_{i-1}<U,Y_{(\hat{R}_{j}+\cdot )\wedge \hat{D}_{j}}\in W_{j},1\le j\le i-1]P_{\bar{e}}[Y_{\cdot \wedge T_{\bar{B}}}\in W_{i}]. \end{aligned}$$

Now iterating a similar inequality we get

$$\begin{aligned} \kappa _{1}^{i}(W_{1}\times \cdots \times W_{i})&\le s^{-\frac{\varepsilon }{4}(i-1)}P_{e}[Y_{\cdot \wedge T_{\bar{B}}}\in W_{1}]\prod _{j=2}^{i} P_{\bar{e}}[Y_{\cdot \wedge T_{\bar{B}}}\in W_{j}]\\&\le s^{-\frac{\varepsilon }{8}(i-1)}\mathrm{cap}(A) \prod _{j=1}^{i} P_{\bar{e}}[Y_{\cdot \wedge T_{\bar{B}}}\in W_{j}]\mathop {=}\limits ^{(8.9)}\tilde{\kappa }_{1}^{i}(W_{1}\times \cdots \times W_{i}), \end{aligned}$$

where we have used that \(s^{-\frac{\varepsilon }{4}(i-1)}P_{e}[Y_{\cdot \wedge T_{\bar{B}}}\in W_{1}]=s^{-\frac{\varepsilon }{4}(i-1)}n\,\text{ cap}(A)P_{\bar{e}}[Y_{\cdot \wedge T_{\bar{B}}}\in W_{1}]\) and \(s^{-\frac{\varepsilon }{4}(i-1)}n\le s^{-\frac{\varepsilon }{8}(i-1)}\) (we require \(n\le s^{\frac{\varepsilon }{8}}\)).

Thus \(\kappa _{1}^{i}(W)\le \tilde{\kappa }_{1}^{i}(W)\) for all \(W\subset \Gamma (\mathbb{T }_{N})^{i}\) that are products of measurable sets. This implies that \(\kappa _{1}^{i}(W)\le \tilde{\kappa }_{1}^{i}(W)\) for all \(W\subset \Gamma (\mathbb{T }_{N})^{i}\) that are finite unions of such sets (\(W\) need not be a disjoint union, since “overlapping” unions of products of measurable sets may be turned into disjoint unions of such sets by further “subdividing” the “overlapping” sets). By a monotone class argument, this implies that \(\kappa _{1}^{i}(W)\le \tilde{\kappa }_{1}^{i}(W)\) for all measurable \(W\subset \Gamma (\mathbb{T }_{N})^{i}\) (see Theorem 3.4, p. 39 in [7]), so (8.9) follows. \(\square \)

Finally, we prove Lemma 8.5, using the Harnack inequality and (2.8).

Proof of Lemma 8.5

If \(j\ne k\), \(x\in \partial _{e}B_{j}\) and \(y\in \partial _{i}A_{k}\) then by the Markov property

$$\begin{aligned} h_{x}(y)=P_{x}[H_{\bar{A}}<U,Y_{H_{\bar{A}}}=y] \le \sup _{x\in \partial _{e}B_{k}}P_{x}[H_{\bar{A}}<U, Y_{H_{\bar{A}}}=y]=\sup _{x\in \partial _{e}B_{k}}h_{x}(y),\nonumber \\ \end{aligned}$$
(8.17)

(provided \(s\ge c(\varepsilon )\) so that \(B_{j}\) and \(B_{k}\) are disjoint), so without loss of generality we may assume \(x\in \partial _{e}B_{k}\). We have (recall from (6.12) that \(C\subset D=B(0,s^{1-\frac{\varepsilon }{8}})\))

$$\begin{aligned} h_{x}(y)\le P_{x}[H_{\bar{A}}<T_{\bar{D}},Y_{H_{\bar{A}}}=y] +P_{x}[T_{\bar{D}}<H_{\bar{A}}<U,Y_{H_{\bar{A}}}=y]. \end{aligned}$$
(8.18)

By the strong Markov property, (6.21) and (8.17), we have for \(s\ge c(\varepsilon )\)

$$\begin{aligned} P_{x}[T_{\bar{D}}\!<\!H_{\bar{A}}\!<\!U,Y_{H_{\bar{A}}}\!=\!y]\!\le \!\sup _{z\in \partial _{e}\bar{D}}P_{z}[H_{\bar{C}}\!<\!U]\sup _{z\in \partial _{e}\bar{B}}h_{z}(y)\le cs^{-c(\varepsilon )}\sup _{z\in \partial _{e}B_{k}}h_{z}(y).\nonumber \\ \end{aligned}$$
(8.19)

The function \(z\rightarrow h_{z}(y)\) is non-negative harmonic on \(C_{k}\backslash A_{k}\) (which can be identified with a subset of \(\mathbb{Z }^{d}\)) so by the Harnack inequality (Proposition 1.7.2, p. 42, [12]) and a standard covering argument we have \(h_{z}(y)\le ch_{x}(y)\) for all \(z\in \partial _{e}B_{k}\). Applying this inequality to the right-hand side of (8.19), plugging the result into (8.18) and rearranging we find that

$$\begin{aligned} h_{x}(y)\le cP_{x}[H_{\bar{A}}<T_{\bar{D}},Y_{H_{\bar{A}}} \!=\!y]\le c\sup _{z\in \partial _{i}B}P_{z}^{\mathbb{Z }^{d}}[H_{A}\!<\!\infty ,Y_{H_{A}}\!=\!y]\quad \text{ for} \,s\ge c(\varepsilon ).\nonumber \\ \end{aligned}$$
(8.20)

Thus using (2.8) with \(K=A,r=s^{1-\varepsilon }\) (recall (3.3)) we have that if \(s\ge c(\varepsilon )\) (so that \(z\notin B(0,c_3 s^{1-\varepsilon })\) if \(z\in \partial _{i}B\)) then

$$\begin{aligned} h_{x}(y)\le c_2 \frac{e_{A}(y)}{\mathrm{cap}(A)} \sup _{z\in \partial _{i}B}P_{z}^{\mathbb{Z }^{d}}[H_{A}<\infty ] \mathop {\le }\limits ^{(2.10)}cn\bar{e}(y)s^{-\frac{\varepsilon }{2}} \mathop {\le }\limits ^{n\le s^{\frac{\varepsilon }{8}},s\ge c(\varepsilon )}\bar{e}(y)s^{-\frac{\varepsilon }{4}}. \end{aligned}$$

\(\square \)

Now all components used in the proof of Proposition 5.2 have been proved.

9 Coupling with random interlacements

In this section we prove Proposition 5.4. We use essentially the same techniques that were used to prove Proposition 5.2 in the previous section, but speaking very roughly we use them “in reverse” to reconstruct from the excursions in the Poisson point process \(\eta \) of intensity \(u\kappa _{3}\), which all end upon leaving \(B\), excursions with law \(P_{e_{A}}^{\mathbb{Z }^{d}}\) (or rather, the successive visits to \(A\) after departures from \(B\) of such excursions). \(P_{e_{A}}^{\mathbb{Z }^{d}}\) gives positive measure to excursions that return to \(A\) even after leaving \(B\), and to construct such excursions we will in Lemma 9.2 take a “small number” of excursions from \(\eta \) and “glue them together”, essentially reversing the argument from Lemma 8.2.

Let \(\tilde{R}_{1}\le \tilde{D}_{1}\le \tilde{R}_{2}\le \tilde{D}_{2}\le \cdots \) on \(\Gamma (\mathbb{Z }^{d})\) denote the successive returns of \(Y_{\cdot }\) to \(A\) and successive departures from \(B\),

$$\begin{aligned} \tilde{R}_{1}=H_{A},\,\,\tilde{R}_{k}=H_{A}\circ \theta _{\tilde{D}_{k-1}}+\tilde{D}_{k-1},k\ge 2,\,\, \tilde{D}_{k}=T_{B}\circ \theta _{\tilde{R}_{k}}+\tilde{R}_{k},k\ge 1. \end{aligned}$$

These should not be confused with the \(\hat{R}_{k},\hat{D}_{k},\) which were defined on \(\Gamma (\mathbb{T }_{N})\) and used in Sect. 8 (see (8.1)), or the \(R_{k}\) from Sect. 7 (see (7.1)). Furthermore similarly to (8.2) we define maps \(\phi _{i}^{\mathbb{Z }^{d}},i\ge 1,\) from \(\Gamma (\mathbb{Z }^{d})\) to \(\Gamma (\mathbb{Z }^{d})^{i}\) extracting the excursions between \(A\) and \(B\),

$$\begin{aligned} (\phi _{i}^{\mathbb{Z }^{d}}(w))_{j}\!=\!w((\tilde{R}_{j}\!+\!\cdot ) \wedge \tilde{D}_{j})\quad \text{ for}\,j\!=\!1,\ldots ,i, w\!\in \!\{\tilde{R}_{i} \!<\!\infty \!=\!\tilde{R}_{i+1}\}\!\subset \!\Gamma (\mathbb{Z }^{d}),i\!\ge \!1.\nonumber \\ \end{aligned}$$
(9.1)

To construct the random set \(\mathcal{I }_{1}\) from the statement of Proposition 5.4 we will construct Poisson point processes of intensities \(u(1-\delta )\kappa _{4}^{i}\), where (cf. (8.3))

$$\begin{aligned} \kappa _{4}^{i}=\phi _{i}^{\mathbb{Z }^{d}}\circ (1_{\{\tilde{R}_{i}<\infty = \tilde{R}_{i+1}\}}P_{e_{A}}^{\mathbb{Z }^{d}}),\quad i\ge 1. \end{aligned}$$
(9.2)

This will be enough to construct \(\mathcal{I }_{1}\) because if \(\mu _{i},i\ge 1\), are independent Poisson point processes, where \(\mu _{i}\) has intensity \(u(1-\delta )\kappa _{4}^{i}\), then by (2.18), (2.19), (9.1) and (9.2) (recalling the notation from (2.16))

$$\begin{aligned} \mathcal{I }^{u(1-\delta )}\cap A\mathop {=}\limits ^\mathrm{law}\cup _{i\ge 1}\mathcal{I }(\mu _{i})\cap A. \end{aligned}$$
(9.3)

To construct a Poisson point process \(\mu _{1}\) of intensity \(u(1-\delta )\kappa _{4}^{1}\) we will, in the proof of Proposition 5.4, “extract” a Poisson point process of intensity \(u(1-\delta )\kappa _{3}\) from \(\eta \), and “thin” it to get \(\mu _{1}\). This will be possible because of the following inequality.

Lemma 9.1

(\(N\ge 3,d\ge 3\)) If \(s\ge c(\varepsilon )\) then \(\kappa _{4}^{1}\le \kappa _{3}\le (1+cs^{-c(\varepsilon )})\kappa _{4}^{1}\).

Proof

This is a consequence of (2.10), (5.10) and (9.2). The proof is very similar to that of Lemma 8.1, so we omit it. \(\square \)

After constructing \(\mu _{1}\) in the proof of Proposition 5.4 we will take “what is left of \(\eta \) after thinning using Lemma 9.1”, namely a Poisson point process \(\theta \) of intensity \(u\delta \kappa _{3}\), and “extract from it” the Poisson point processes \(\mu _{2},\mu _{3},\ldots \) of respective intensities \(u(1-\delta )\kappa _{4}^{i}\). This will be done using the following lemma.

Lemma 9.2

\((d\ge 3,N\ge 3)\) Let \(u\ge 0\), \(\delta \ge cs^{-c(\varepsilon )}\) and \(s\ge c(\varepsilon )\), and let \((\Omega ,\mathcal A ,Q)\) be a probability space with a Poisson point process \(\theta \) of intensity \(u\delta \kappa _{3}\). Then we can construct a space \((\Omega ^{\prime },\mathcal A ^{\prime },Q^{\prime })\) and, on the product space, independent \(\sigma (\theta )\times \mathcal A ^{\prime }-\)measurable Poisson point processes \(\mu _{i},i\ge 2,\) and \(\rho _{2}\) such that \(\mu _{i}\) has intensity \(u(1-\delta )\kappa _{4}^{i}\), \(\rho _{2}\) has intensity \(u\frac{3\delta }{2}\kappa _{3}\) and

$$\begin{aligned} Q\otimes Q^{\prime }\left[\bigcup \limits _{i\ge 2}\mathcal{I }(\mu _{i})\subset \mathcal{I }(\theta )\subset \Big (\bigcup \limits _{i\ge 2}\mathcal{I }(\mu _{i})\Big )\cup \mathcal{I }(\rho _{2})\right]\ge 1-ce^{-cu\delta \,\mathrm{cap}(A)}.\qquad \end{aligned}$$
(9.4)

The “residual” Poisson point process \(\rho _{2}\), as well as a “residual” Poisson point process left after “thinning” to obtain \(\mu _{1}\), will be used to construct \(\mathcal{I }_{2}\). We postpone the proof of Lemma 9.2 until later, and instead use it to prove Proposition 5.4.

Proof of Proposition 5.4

We start by constructing \((\Omega ^{\prime },\mathcal A ^{\prime },Q^{\prime })\) appropriately, to obtain, by “thinning” \(\eta \), a \(\sigma (\eta )\times \mathcal A ^{\prime }-\)measurable Poisson point process \(\theta \) on the product space of intensity \(u\delta \kappa _{3}\), such that \(\eta -\theta \) and \(\theta \) are independent, and \(\eta -\theta \) is a Poisson point process of intensity \(u(1-\delta )\kappa _{3}\) (similarly to below (7.31), note that of course \(u\delta \kappa _3 \le u\kappa _3\)).

Since we require \(\delta \ge cs^{-c(\varepsilon )}\) and \(s\ge c(\varepsilon )\) we have \(u(1-\delta )(1+s^{-c(\varepsilon )})\le u(1-\frac{\delta }{2})\), and thus \(u(1-\delta )\kappa _{4}^{1}\le u(1-\delta )\kappa _{3}\le u(1-\frac{\delta }{2})\kappa _{4}^{1}\) by Lemma 9.1. But \(u(1-\frac{\delta }{2})\kappa _{4}^{1}\le u(1-\delta )\kappa _{4}^{1}+u\frac{\delta }{2}\kappa _{3}\) since \(\kappa _{4}^{1}\le \kappa _{3}\), so that

$$\begin{aligned} u(1-\delta )\kappa _{4}^{1}\le u(1-\delta )\kappa _{3}\le u(1-\delta )\kappa _{4}^{1}+u \frac{\delta }{2} \,\kappa _{3}. \end{aligned}$$

Therefore we can, similarly to under (8.6), construct (extending \((\Omega ^{\prime },\mathcal A ^{\prime },Q^{\prime })\) appropriately) \(\sigma (\eta )\times \mathcal A ^{\prime }-\)measurable Poisson point processes \(\mu _{1}\) and \(\rho _{1}\) such that \(\mu _{1},\rho _{1},\theta \) are independent, \(\mu _{1}\) has intensity \(u(1-\delta )\kappa _{4}^{1}\), \(\rho _{1}\) has intensity \(u\frac{\delta }{2}\kappa _{3}\),

$$\begin{aligned} \mu _{1}\le \eta -\theta \le \mu _{1}+\rho _{1}, \text{ and} \text{ thus} \,\,\mathcal{I }(\mu _{1})\subset \mathcal{I }(\eta -\theta ) \subset \mathcal{I }(\mu _{1})\bigcup \mathcal{I }(\rho _{1}). \end{aligned}$$
(9.5)

We then apply Lemma 9.2 (once again extending \((\Omega ^{\prime },\mathcal A ^{\prime },Q^{\prime })\)) to \(\theta \) to get \(\sigma (\eta )\times \mathcal A ^{\prime }-\)measurable Poisson point processes \(\mu _{i},i\ge 2,\rho _{2}\) such that \(\rho _{1},\rho _{2},\mu _{i},i\ge 1,\) are independent, \(\mu _{i}\) has intensity \(u(1-\delta )\kappa _{4}^{i}\), \(\rho _{2}\) has intensity \(u\frac{3\delta }{2}\kappa _{3}\),

$$\begin{aligned} Q\otimes Q^{\prime }\Big [\bigcup \limits _{i\ge 2}\mathcal{I }(\mu _{i})\subset \mathcal{I }(\theta )\subset \Big (\bigcup \limits _{i\ge 2}\mathcal{I }(\mu _{i})\Big )\cup \mathcal{I }(\rho _{2})\Big ]\ge 1-ce^{-cu\delta \,\mathrm{cap}(A)}.\qquad \end{aligned}$$
(9.6)

Note that \(\rho _{1}+\rho _{2}\) is a Poisson point process of intensity \(2u\delta \kappa _{3}\), and that the “points” of this process are pieces of random walk with law \(\frac{1}{\text{ cap}(A)}P_{e_{A}}[Y_{\cdot \wedge T_{B}}\in dw]\) (recall (2.6)). By constructing countably many independent random walks on \((\Omega ^{\prime },\mathcal A ^{\prime },Q^{\prime })\), and “attaching” a different one to each piece of random walk in \(\rho _{1}+\rho _{2}\) we obtain a Poisson point process \(\rho _{3}\) of intensity \(2u\delta P_{e_{A}}\) (the “points” of \(\rho _{3}\) have law \(\frac{1}{\mathrm{cap}(A)}P_{e_{A}}\), by the strong Markov property) such that

$$\begin{aligned} \mathcal{I }(\rho _{1}+\rho _{2})\subset \mathcal{I }(\rho _{3}) \text{ almost} \text{ surely.} \end{aligned}$$
(9.7)
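In a formula, the “attaching” step is the strong Markov property at the (almost surely finite) time \(T_{B}\): for bounded measurable functionals \(f,g\) of the trajectory,

$$\begin{aligned} E_{\bar{e}_{A}}\big [f(Y_{\cdot \wedge T_{B}})\,g(Y_{T_{B}+\cdot })\big ]=E_{\bar{e}_{A}}\big [f(Y_{\cdot \wedge T_{B}})\,E_{Y_{T_{B}}}[g(Y_{\cdot })]\big ], \end{aligned}$$

so concatenating each stopped piece with an independent walk started at its endpoint reproduces the unconditioned law \(\frac{1}{\mathrm{cap}(A)}P_{e_{A}}\).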

Now let

$$\begin{aligned} \mathcal{I }_{1}=\Big (\bigcup _{i\ge 1}\mathcal{I }(\mu _{i})\Big )\cap A\quad \text{ and}\quad \mathcal{I }_{2}=\mathcal{I }(\rho _{3})\cap A \end{aligned}$$

and note that \(\mathcal{I }_{1}\) has the law of \(\mathcal{I }^{u(1-\delta )}\cap A\) under \(Q_{0}\) (by (9.3)), that \(\mathcal{I }_{2}\) has the law of \(\mathcal{I }^{2u\delta }\cap A\) under \(Q_{0}\) (by (2.18)), and that \(\mathcal{I }_{1}\) and \(\mathcal{I }_{2}\) are \(\sigma (\eta )\times \mathcal A ^{\prime }-\)measurable and independent. Since \(\mathcal{I }(\eta )\cap A=\big (\mathcal{I }(\eta -\theta )\cap A\big )\cup \big (\mathcal{I }(\theta )\cap A\big )\), we get from (9.5), (9.6) and (9.7) that

$$\begin{aligned} Q\otimes Q^{\prime }[\mathcal{I }_{1}\cap A\subset \mathcal{I }(\eta )\cap A\subset \mathcal{I }_{1}\cup \mathcal{I }_{2}]\ge 1-ce^{-cu\delta \,\mathrm{cap}(A)}\ge 1-ce^{-cs^{c(\varepsilon )}}, \end{aligned}$$
(9.8)
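Here the first event contains the intersection of the events in (9.5) and (9.6): on that intersection, using (9.7),

$$\begin{aligned} \mathcal{I }_{1}=\Big (\bigcup _{i\ge 1}\mathcal{I }(\mu _{i})\Big )\cap A\subset \big (\mathcal{I }(\eta -\theta )\cup \mathcal{I }(\theta )\big )\cap A=\mathcal{I }(\eta )\cap A, \end{aligned}$$

and

$$\begin{aligned} \mathcal{I }(\eta )\cap A\subset \Big (\bigcup _{i\ge 1}\mathcal{I }(\mu _{i})\cup \mathcal{I }(\rho _{1})\cup \mathcal{I }(\rho _{2})\Big )\cap A\subset \mathcal{I }_{1}\cup \mathcal{I }_{2}. \end{aligned}$$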

The second inequality in (9.8) holds because we require \(u,\delta \ge s^{-c(1-\varepsilon )}=s^{-c(\varepsilon )}\), as in (8.8). This completes the proof of Proposition 5.4. \(\square \)

It remains to prove Lemma 9.2. In the proof we will extract from the Poisson point process \(\theta \) of intensity \(u\delta \kappa _{3}\) Poisson point processes of intensity \(u(1-\delta )\tilde{\kappa }_{4}^{i}\) (see (9.9) below). This is possible because the “points” of a Poisson point process whose intensity is a multiple of \(\tilde{\kappa }_{4}^{i}\) are i.i.d. vectors of excursions with law \(P_{\bar{e}_{A}}^{\mathbb{Z }^{d}}[Y_{\cdot \wedge T_{B}}\in dw]\) (see (9.9)), which is also the law of the single excursions that make up the points of \(\theta \) (and because the number of “points” needed to construct the Poisson point processes of intensity \(u(1-\delta )\tilde{\kappa }_{4}^{i}\) will with high probability not exceed the number of points in \(\theta \)). Once we have these Poisson point processes we will use the following comparison of intensity measures to “thin” them and obtain Poisson point processes of intensity \(u(1-\delta )\kappa _{4}^{i}\); these will be the \(\mu _{2},\mu _{3},\ldots \) from the statement of Lemma 9.2.
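Schematically, the proof of Lemma 9.2 runs

$$\begin{aligned} \theta \;\overset{\text{ regroup, (9.12)}}{\longrightarrow }\;\eta _{i}\;\big (\text{ intensity}\,\,u(1-\delta )\tilde{\kappa }_{4}^{i}\big )\;\overset{\text{ thin, (9.9)}}{\longrightarrow }\;\mu _{i}\;\big (\text{ intensity}\,\,u(1-\delta )\kappa _{4}^{i}\big ),\quad i\ge 2, \end{aligned}$$

with all leftover pieces collected into the residual process \(\rho _{2}\).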

Lemma 9.3

(\(N\ge 3,d\ge 3\)) If \(s\ge c(\varepsilon )\) and \(i\ge 2\), then

$$\begin{aligned} \kappa _{4}^{i}\le \tilde{\kappa }_{4}^{i},\quad \text{ where}\quad \tilde{\kappa }_{4}^{i}(d(w_{1},\ldots ,w_{i}))=s^{-\frac{\varepsilon }{4}(i-1)}\,\mathrm{cap}(A)\bigotimes _{j=1}^{i}P_{\bar{e}_{A}}^{\mathbb{Z }^{d}}[Y_{\cdot \wedge T_{B}}\in dw_{j}], \end{aligned}$$
(9.9)

where \(\bar{e}_{A}(\cdot )=\frac{e_{A}(\cdot )}{\mathrm{cap}(A)}\) denotes the normalisation of the measure \(e_{A}(\cdot )\) (see (2.6)); it should not be confused with the measure \(\bar{e}\) from (5.5) and the proof of Lemma 8.2.

Proof

Just as, in the proof of Lemma 8.3, it followed from (8.16) that \(\kappa _{3}^{i}(W)\le \tilde{\kappa }_{3}^{i}(W)\) for all \(W\in \Gamma (\mathbb{T }_{N})^{i}\) that are products of measurable sets, it follows from \(\sup _{x\in \partial _{e}B}P_{x}^{\mathbb{Z }^{d}}[Y_{H_{A}}=y,H_{A}<\infty ]\le cs^{-\frac{\varepsilon }{2}}\bar{e}_{A}(y)\le s^{-\frac{\varepsilon }{4}}\bar{e}_{A}(y)\) (see (2.8) and (2.10), and recall \(s\ge c(\varepsilon )\)) that \(\kappa _{4}^{i}(W)\le \tilde{\kappa }_{4}^{i}(W)\) for all such \(W\). This implies (9.9) by a monotone class argument, as at the end of the proof of Lemma 8.3. We omit the details. \(\square \)

We now prove Lemma 9.2.

Proof of Lemma 9.2

Note that

$$\begin{aligned} N\overset{\mathrm{def}}{=}\theta (\Gamma (\mathbb{Z }^{d}))\quad \text{ is Poisson with intensity}\,\,u\delta \,\mathrm{cap}(A). \end{aligned}$$
(9.10)

Similarly to in the proof of Lemma 8.2 (see (8.13)) we can construct \((\Omega ^{\prime },\mathcal A ^{\prime },Q^{\prime })\) appropriately to obtain i.i.d. \(\sigma (\theta )\times \mathcal A ^{\prime }-\)measurable trajectories \(w_{i},i\ge 1,\) independent of \(N\),

$$\begin{aligned} \text{ such that each}\,\,w_{i}\,\text{ has law}\,\,P_{\bar{e}_{A}}[Y_{\cdot \wedge T_{B}}\in dw],\quad \text{ and}\,\,\theta =\sum \limits _{i=1}^{N}\delta _{w_{i}}. \end{aligned}$$
(9.11)

Extend \((\Omega ^{\prime },\mathcal A ^{\prime },Q^{\prime })\) with independent Poisson random variables \(N_{i},i\ge 2,\) of respective intensities \(u(1-\delta )s^{-\frac{\varepsilon }{4}(i-1)}\,\mathrm{cap}(A)\) (also independent of \(w_{i},i\ge 1,\) and \(\theta \)) and let

$$\begin{aligned} \eta _{i}&= \sum \limits _{j=1}^{N_{i}}\delta _{(w_{K_{j}^{i}},w_{K_{j}^{i}+1},\ldots ,w_{K_{j}^{i}+(i-1)})},\quad \text{ where}\,\,K_{j}^{i}=\sum \limits _{k=1}^{i-1}kN_{k}+(j-1)i+1\,\,\text{ (setting}\,N_{1}=0\text{)},\quad \text{ and} \end{aligned}$$
(9.12)
$$\begin{aligned} \rho&= \sum \limits _{i=\bar{K}+1}^{\bar{K}+N}\delta _{w_{i}},\quad \text{ where}\,\,\bar{K}=\sum \limits _{k=1}^{\infty }kN_{k}. \end{aligned}$$
(9.13)
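The bookkeeping in (9.12)–(9.13) simply consumes the trajectories \(w_{1},w_{2},\ldots \) in consecutive disjoint blocks: for each \(i\), \(N_{i}\) vectors of length \(i\), and then the \(N\) trajectories used by \(\rho \). A minimal sketch (in Python; the Poisson parameters are illustrative, not from the paper) checking that the vectors tile \(\{1,\ldots ,\bar{K}\}\):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative counts; N_counts[k] plays the role of N_k in the proof
# (with the convention N_1 = 0, since only eta_i, i >= 2, are built).
N_counts = {1: 0, 2: int(rng.poisson(3)), 3: int(rng.poisson(1)),
            4: int(rng.poisson(0.3))}

def K(i, j):
    # Start index of the j-th length-i vector, as in (9.12) (1-based).
    return sum(k * N_counts[k] for k in range(1, i)) + (j - 1) * i + 1

used = []
for i in sorted(N_counts):
    for j in range(1, N_counts[i] + 1):
        used.extend(range(K(i, j), K(i, j) + i))

Kbar = sum(k * n for k, n in N_counts.items())
# The vectors (w_{K_j^i}, ..., w_{K_j^i + (i-1)}) tile {1, ..., Kbar},
# so the eta_i "use" disjoint trajectories, and rho uses the next N.
assert sorted(used) == list(range(1, Kbar + 1))
```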

The \(\eta _{i}\) “use” only \(w_{1},\ldots ,w_{\bar{K}}\), so on the event \(\{\bar{K}\le N\}\) we have \(\cup _{i\ge 2}\mathcal{I }(\eta _{i})\subset \mathcal{I }(\theta )\) (see (9.11)). Recalling (9.10), and that the \(N_{i},i\ge 2,\) are independent Poisson random variables of intensity at most \(us^{-\frac{\varepsilon }{4}(i-1)}\,\mathrm{cap}(A)\le u\delta s^{-\frac{\varepsilon }{8}(i-1)}\,\mathrm{cap}(A)=\lambda r^{i-1}\) (we require \(\delta \ge s^{-\frac{\varepsilon }{8}}\)), where \(\lambda =u\delta \,\mathrm{cap}(A)\) and \(r=s^{-\frac{\varepsilon }{8}}\le c\) (we require \(s\ge c(\varepsilon )\)), we have by (8.10)

$$\begin{aligned} Q\otimes Q^{\prime }[\cup _{i\ge 2}\mathcal{I }(\eta _{i})\subset \mathcal{I }(\theta )]\ge Q\otimes Q^{\prime }[\bar{K}\le N]\ge 1-ce^{-cu\delta \,\mathrm{cap}(A)}. \end{aligned}$$
(9.14)
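The bound is plausible already at the level of expectations: \(E[\bar{K}]\le \lambda \sum _{k\ge 2}kr^{k-1}<\lambda =E[N]\) for \(r\) small, and both quantities concentrate. A quick Monte Carlo sanity check (Python; all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# lam stands in for u*delta*cap(A), r for s^{-eps/8}; both illustrative.
lam, r, trials = 50.0, 0.2, 100_000

k = np.arange(2, 40)                                   # truncated tail
Nk = rng.poisson(lam * r ** (k - 1), size=(trials, k.size))
Kbar = (k * Nk).sum(axis=1)                            # as in (9.13)
N = rng.poisson(lam, size=trials)                      # as in (9.10)

print((Kbar > N).mean())   # small failure probability, ~ exp(-c*lam)
```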

Also, every trajectory \(w_{i}\), \(i\le N\), that \(\theta \) “uses” is either among the \(w_{1},\ldots ,w_{\bar{K}}\) “used” by the \(\eta _{i}\) or among the \(w_{\bar{K}+1},\ldots ,w_{\bar{K}+N}\) “used” by \(\rho \), so

$$\begin{aligned} \mathcal{I }(\theta )\subset \cup _{i\ge 2}\mathcal{I }(\eta _{i})\cup \mathcal{I }(\rho ) \text{ almost} \text{ surely.} \end{aligned}$$
(9.15)

Furthermore, because they “use” different \(w_{i}\), and because \(N,N_{i},i\ge 2,\) and \(w_{i},i\ge 1,\) are independent, the processes \(\rho \) and \(\eta _{i},i\ge 2,\) are independent \(\sigma (\theta )\times \mathcal A ^{\prime }-\)measurable point processes, where \(\rho \) has intensity \(u\delta \,\mathrm{cap}(A)P_{\bar{e}_{A}}[Y_{\cdot \wedge T_{B}}\in dw]=u\delta \kappa _{3}(dw)\) (see (5.10), (9.10) and (9.13)) and \(\eta _{i}\) has intensity \(u(1-\delta )\tilde{\kappa }_{4}^{i}\) (see (9.9) and (9.12), and recall that \(N_{i}\) has intensity \(u(1-\delta )s^{-\frac{\varepsilon }{4}(i-1)}\,\mathrm{cap}(A)\)). By the inequality in (9.9) we have \(u(1-\delta )\kappa _{4}^{i}\le u(1-\delta )\tilde{\kappa }_{4}^{i}\). Together with the (very crude) bound \(u(1-\delta )\tilde{\kappa }_{4}^{i}\le u(1-\delta )\kappa _{4}^{i}+u\tilde{\kappa }_{4}^{i}\), this allows us (similarly to under (8.6)) to construct independent Poisson point processes \(\mu _{i},\mu _{i}^{\prime },i\ge 2,\) such that \(\mu _{i}\) has intensity \(u(1-\delta )\kappa _{4}^{i}\), \(\mu _{i}^{\prime }\) has intensity \(u\tilde{\kappa }_{4}^{i}\) and

$$\begin{aligned} \mu _{i}\le \eta _{i}\le \mu _{i}+\mu _{i}^{\prime }\quad \text{ for}\,\,i\ge 2. \end{aligned}$$
(9.16)

By (9.14) and (9.16) we have

$$\begin{aligned} Q\otimes Q^{\prime }[\cup _{i\ge 2}\mathcal{I }(\mu _{i})\subset \mathcal{I } (\theta )]\ge 1-ce^{-cu\delta \,\mathrm{cap}(A)}. \end{aligned}$$
(9.17)

Now the \(\mu _{i}^{\prime }(\Gamma (\mathbb{Z }^{d})),i\ge 2,\) are independent Poisson random variables of respective intensities \(us^{-\frac{\varepsilon }{4}(i-1)}\,\mathrm{cap}(A)\le \lambda r^{i-1}\), where \(\lambda =u\frac{\delta }{2}\,\mathrm{cap}(A)\) and \(r=s^{-\frac{\varepsilon }{8}}\) (we require \(\delta \ge 2s^{-\frac{\varepsilon }{8}}\)). Thus using (8.10) we see that \(\sum _{i\ge 2}i\,\mu _{i}^{\prime }(\Gamma (\mathbb{Z }^{d}))\le N^{\prime }\) with probability at least \(1-ce^{-cu\delta \,\mathrm{cap}(A)}\), where \(N^{\prime }\) is a Poisson random variable of intensity \(u\frac{\delta }{2}\,\mathrm{cap}(A)\). Therefore we can, similarly to how the \(\theta \) from Lemma 8.2 (not to be confused with the \(\theta \) here) was constructed (see (8.14)), construct a \(\sigma (\theta )\times \mathcal A ^{\prime }-\)measurable Poisson point process \(\rho ^{\prime }\) of intensity \(u\frac{\delta }{2}\kappa _{3}\) such that \(\mu _{i},i\ge 2,\rho ,\rho ^{\prime },\) are independent and

$$\begin{aligned} Q\otimes Q^{\prime }[\cup _{i\ge 2}\mathcal{I }(\mu _{i}^{\prime })\subset \mathcal{I }(\rho ^{\prime })]\ge 1-ce^{-cu\delta \,\mathrm{cap}(A)}. \end{aligned}$$
(9.18)

Now construct a \(\sigma (\theta )\times \mathcal A ^{\prime }-\)measurable Poisson point process

$$\begin{aligned} \rho _{2}\,\text{ of intensity}\,\,u\frac{3\delta }{2}\kappa _{3}\,\text{ by setting}\,\,\rho _{2}=\rho +\rho ^{\prime }. \end{aligned}$$
(9.19)

Note that \(\mu _{i},i\ge 2,\) and \(\rho _{2}\) are independent, and by (9.15), (9.16), (9.18) and (9.19)

$$\begin{aligned} Q\otimes Q^{\prime }\Big [\mathcal{I }(\theta )\subset \bigcup _{i\ge 2}\mathcal{I }(\mu _{i})\cup \mathcal{I }(\rho _{2})\Big ]\ge 1-ce^{-cu\delta \,\mathrm{cap}(A)}. \end{aligned}$$

Together with (9.17) this implies (9.4), so the proof of Lemma 9.2 is complete. \(\square \)

Now all the ingredients used in the proof of Proposition 5.4 have been established, and with it the last piece of the proof of Theorem 3.2 is in place. Since Theorem 3.1 was reduced to Theorem 3.2 in Sect. 4, the proof of Theorem 3.1 is also complete. We finish with the following remark.

Remark 9.4

(1) As mentioned in Remark 4.6 (2), a generalisation of the cover time result Theorem 3.1 to other graphs may be possible. Roberto Oliveira and Alan Paula are working on such a generalisation.

(2) In Corollary 3.4 we proved that the point process \(\mathcal{N }_{N}^{z}\) of points of \(\mathbb{T }_{N}\) not yet hit at time \(g(0)N^{d}\{\log N^{d}+z\}\), for \(z\in \mathbb{R }\), converges in law to a Poisson point process. It is an open question whether this can be generalised to the point process \(\mathcal{N }_{N}=\sum _{x\in \mathbb{T }_{N}}\delta _{(x/N, \frac{H_{x}}{g(0)N^{d}}-\log N^{d})}\) on the space \((\mathbb{R }/\mathbb{Z })^{d}\times \mathbb{R }\), which records not only which vertices are unhit but also when each vertex is hit, and from which \(\mathcal{N }_{N}^{z}\) can be recovered. The conjecture is that \(\mathcal{N }_{N}\) converges in law to a Poisson point process of intensity \(e^{-z}dx\,dz\), where \(dx\) denotes Lebesgue measure on \((\mathbb{R }/\mathbb{Z })^{d}\) and \(dz\) denotes Lebesgue measure on \(\mathbb{R }\).
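To make “recovered” explicit: a vertex \(x\) is unhit at time \(g(0)N^{d}\{\log N^{d}+z\}\) exactly when \(\frac{H_{x}}{g(0)N^{d}}-\log N^{d}>z\), so that (identifying \(x\in \mathbb{T }_{N}\) with \(x/N\in (\mathbb{R }/\mathbb{Z })^{d}\))

$$\begin{aligned} \mathcal{N }_{N}^{z}(dx)=\mathcal{N }_{N}\big (dx\times (z,\infty )\big ). \end{aligned}$$

The conjectured intensity is consistent with this: integrating \(e^{-w}dx\,dw\) over \(w\in (z,\infty )\) gives \(e^{-z}dx\), whose total mass \(e^{-z}\) matches the Gumbel limit \(e^{-e^{-z}}\) for the probability that no uncovered vertex remains.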

(3) It would be interesting to explicitly determine the dependence on \(\varepsilon \) of the constant \(c_{5}=c_{5}(\varepsilon )\) from the coupling Theorem 3.2. This constant arises as the minimum of a number of constants from Sections 6–9 that appear in requirements on the level \(u\), the number of boxes \(n\), and the parameter \(\delta \). Several of the requirements cause us to choose \(c_5(\varepsilon ) \le c\varepsilon \); see for instance (6.8), (6.21), and above (8.15) and (9.14). Several others cause us to choose \(c_5(\varepsilon ) \le c(1-\varepsilon )\); see for instance above (7.30) and below (8.8) and (9.8). It is plausible that \(c_5\) can in fact be chosen to be \(c\min (\varepsilon ,1-\varepsilon )\), for a small constant \(c\) independent of \(\varepsilon \).