Abstract
We study the tail behavior of the maximum of the discrete Gaussian free field on a 2D box with Dirichlet boundary condition, after centering by its expectation. We show that the right tail exhibits an exponential decay and the left tail a double exponential decay. In particular, our result implies that the variance of the maximum is of order 1, improving an \(o(\log n)\) bound by Chatterjee (Chaos, concentration, and multiple valleys, 2008) and confirming a folklore conjecture. An important ingredient for our proof is a result of Bramson and Zeitouni (Commun. Pure Appl. Math., 2010), who proved the tightness of the centered maximum together with an evaluation of the expectation up to an additive constant.
1 Introduction
Denote by \(A_n \subset \mathbb Z ^2\) a box of side length \(n\), i.e., \(A_n = \{(x, y)\in \mathbb Z ^2: 0\le x, y\le n\}\), and let \(\partial A_n = \{v\in A_n: \exists u\in \mathbb Z ^2 {\setminus } A_n: v\sim u\}\). The discrete Gaussian free field (GFF) \(\{\eta _v: v\in A_n\}\) on \(A_n\) with Dirichlet boundary condition is then defined to be a mean zero Gaussian process which takes value 0 on \(\partial A_n\) and satisfies the following Markov field condition for all \(v\in A_n{\setminus } \partial A_n\): given the GFF on \(A_n{\setminus } \{v\}\), the value \(\eta _v\) is distributed as a Gaussian variable with variance \(1\) and mean equal to the average over the neighbors of \(v\) (see below for a definition of the GFF using Green functions). Throughout the paper, we use the notation \(M_n = \max _{v\in A_n} \eta _v\).
We prove the following tail behavior for \(M_n\).
Theorem 1.1
There exist absolute constants \(C,c>0\) so that for all \(n\in \mathbb N \) and \(0\le \lambda \le (\log n)^{2/3}\)
The preceding theorem gives the tail behavior when the deviation is at most \((\log n)^{2/3}\). For \(\lambda \ge (\log n)^{2/3}\), by the isoperimetric inequality for general Gaussian processes (see, e.g., Ledoux [16, Theorem 7.1, Eq. (7.4)]) and the simple fact that \(\max _v\text{ Var} \eta _v = 2\log n/\pi + O(1)\) (see Lemma 2.2), we have
Combined with Theorem 1.1, this immediately gives the order of the variance of \(M_n\). Before stating the result, let us specify some conventions for notation used throughout the paper. The letters \(c\) and \(C\) denote absolute positive constants, whose values may vary from line to line; by convention, \(C\) denotes large constants and \(c\) small constants. Other absolute constants that appear are fixed once and for all. If there exists an absolute constant \(C>0\) such that \(a_n \le C b_n\) for all \(n\ge 1\), we write \(a_n = O(b_n)\); we write \(a_n = \Theta (b_n)\) if \(a_n = O(b_n)\) as well as \(b_n = O(a_n)\); if \(\lim _{n\rightarrow \infty } a_n /b_n = 0\), we write \(a_n = o(b_n)\). We are now ready to state the corollary.
Corollary 1.2
We have that \(\mathrm{Var} M_n = \Theta (1)\).
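The variance asymptotics quoted above can be probed numerically. The following sketch (ours, not part of the paper; numpy assumed, and the helper name `center_variance` is hypothetical) computes \(\text{Var}\,\eta _v\) at the center of \(A_n\) as a diagonal Green-function entry of the killed random walk, and checks that doubling \(n\) increases it by roughly \(\frac{2}{\pi }\log 2 \approx 0.44\), in line with \(\max _v\text{ Var} \eta _v = 2\log n/\pi + O(1)\).

```python
import numpy as np

def center_variance(n):
    """Var(eta_v) at the center of A_n: the diagonal Green function
    entry G(v, v) of SRW killed on the boundary, via one linear solve."""
    interior = [(x, y) for x in range(1, n) for y in range(1, n)]
    idx = {v: i for i, v in enumerate(interior)}
    A = np.eye(len(interior))
    for (x, y), i in idx.items():
        for u in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if u in idx:                # steps onto the boundary are killed
                A[i, idx[u]] -= 0.25
    c = idx[(n // 2, n // 2)]
    e = np.zeros(len(interior))
    e[c] = 1.0
    return float(np.linalg.solve(A, e)[c])

v20, v40 = center_variance(20), center_variance(40)
increment = v40 - v20   # should be close to (2/pi) * log(2) ~ 0.44
```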
Corollary 1.2 improves an \(o(\log n)\) bound on the variance due to Chatterjee [7], thereby confirming a folklore conjecture (see Question (4) of [7]). An important ingredient for our proof is the following result on the tightness of the maximum of the GFF on a 2D box, due to Bramson and Zeitouni [6].
Theorem 1.3
[6] The sequence of random variables \(M_n - \mathbb E M_n\) is tight and
Prior to [6], Bolthausen et al. [3] proved that \((M_n - \mathbb E M_n)\) is tight along a deterministic subsequence \((n_k)_{k\in \mathbb N }\). Earlier works on the extremal values of the GFF include Bolthausen et al. [2], who established the leading-order asymptotics of \(M_n\), and Daviaud [8], who studied the near-extremal level sets of the GFF.
We compare our results with the tail behavior for the maximum of the GFF on a binary tree. Interestingly, in the case of the tree, the maximum exhibits an exponential decay for the right tail but a Gaussian type decay for the left tail, as opposed to the double exponential decay for the 2D box. This is because, in the case of the 2D box, the Dirichlet boundary condition decouples the field near the boundary, so that the GFF behaves almost independently there. The same phenomenon also occurs for the event that the GFF is everywhere nonnegative: for a binary tree of height \(n\) the probability is about \(\text{ e}^{-\Theta (n^2)}\), and for a box of side length \(n\) the probability is about \(\text{ e}^{-\Theta (n)}\) (see Deuschel [9]).
Much more is known about the maximal displacement of branching Brownian motion (BBM). In their classical paper, Kolmogorov et al. [13] studied its connection with the so-called KPP equation, from which it can be deduced that both the right and left tails exhibit exponential types of decay. The probabilistic interpretation of the KPP equation in terms of BBM was further exploited by Bramson [4]. The precise asymptotic tails were then computed, and in particular a polynomial prefactor for the right tail was detected (this appears to be fundamentally different from the tail of the Gumbel distribution, which arises as the limiting law of the maximum of, say, i.i.d. Gaussian variables). See, e.g., Bramson [5] and Harris [12] for the right tail, and see Arguin et al. [1] for the left tail (the argument is due to De Lellis). In addition, Lalley and Sellke [14] obtained an integral representation for the limiting law of the centered maximum.
We now give the definition of GFF using the connection with random walks (in particular, Green functions). Consider a connected graph \(G = (V, E)\). For \(U \subset V\), the Green function \(G_U(\cdot , \cdot )\) of the discrete Laplacian is given by
where \(\tau _U\) is the hitting time to set \(U\) for random walk \((S_k)\), defined by (the notation applies throughout the paper)
The GFF \(\{\eta _v: v\in V\}\) with Dirichlet boundary condition on \(U\) is then defined to be a mean zero Gaussian process indexed by \(V\) whose covariance matrix is given by the Green function \((G_U(x, y))_{x, y\in V}\). (On a general graph, it is typical to normalize the Green function by the degree of the target vertex \(y\); in the case of the 2D lattice, this normalization is usually dropped since the degrees are constant.) Note that \(\eta _v = 0\) for all \(v\in U\).
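To make the two equivalent definitions concrete, here is a minimal numerical sketch (ours, not from the paper; numpy assumed, and the helper name `gff_box` is hypothetical). It builds the Green function \((I-P)^{-1}\) of simple random walk killed on the boundary of the box, samples the field, and checks that the precision matrix \(I-P\) encodes exactly the Markov field condition stated above: conditional variance \(1\) and conditional mean equal to the neighbor average.

```python
import numpy as np

def gff_box(n, seed=0):
    """GFF on the (n+1)x(n+1) box A_n with Dirichlet boundary: the
    covariance is the Green function (I - P)^{-1} of simple random
    walk killed on the boundary (illustrative helper, ours)."""
    interior = [(x, y) for x in range(1, n) for y in range(1, n)]
    idx = {v: i for i, v in enumerate(interior)}
    P = np.zeros((len(interior), len(interior)))
    for (x, y), i in idx.items():
        for u in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if u in idx:                # steps onto the boundary are killed
                P[i, idx[u]] = 0.25
    G = np.linalg.inv(np.eye(len(interior)) - P)   # Green function
    rng = np.random.default_rng(seed)
    eta = np.linalg.cholesky(G) @ rng.standard_normal(len(interior))
    return G, P, idx, eta

G, P, idx, eta = gff_box(8)
# Precision matrix I - P: unit diagonal gives conditional variance 1,
# and off-diagonal entries -1/4 per neighbor give conditional mean
# equal to the neighbor average, i.e., the Markov field condition.
Lam = np.eye(len(idx)) - P
```

Since the covariance used for sampling is exactly the Green function, the Markov-field and Green-function definitions agree by construction.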
2 Proofs
In this section, we prove Theorem 1.1. We start with a brief discussion of the proof strategy, and then establish the upper and lower bounds for the right and left tails in the four subsequent subsections.
2.1 A word on proof strategy
Our proof typically employs a two-level structure which involves either a partitioning or a packing of a 2D box \(A_n\) by (slightly) smaller boxes. In all the proofs, we use Theorem 1.3 to control the behavior in small boxes, and study “typical” events on small boxes with probability strictly bounded away from 0 and 1. The large deviation bounds typically come from gluing the small boxes together into a big box, with the probability either inversely proportional to the number of small boxes or exponentially small in the number of boxes.
By Theorem 1.3, there exists a universal constant \(\kappa >0\) such that for all \(n \ge 3 n^{\prime }\)
That is to say, in order to observe a difference of \(\lambda \) in the expectation of the maximum, the side length of the box has to increase (decrease) by a factor of \(\exp (\Theta (\lambda ))\). This suggests that the number of small boxes should be \(\exp (\Theta (\lambda ))\) in our two-level structure. Depending on how the large deviation arises, this yields a tail of either exponential or double exponential decay.
In order to construct the two-level structure, we repeatedly use the decomposition of Gaussian processes: for a joint Gaussian process \((X, Y)\), we can write \(X\) as a sum of a (linear) function of \(Y\) and an independent Gaussian process \(X^{\prime }\). Here we use the crucial fact that Gaussian processes possess a linear structure in which orthogonality implies independence. Furthermore, the next well-known property specific to the GFF proves to be quite useful (see Dynkin [10, Theorem 1.2.2]).
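Before turning to that lemma, the decomposition just described can be checked in coordinates: for a joint Gaussian vector split into blocks \((X, Y)\), the map \(Y \mapsto \mathbb E (X\mid Y)\) is linear, and the residual has the Schur-complement covariance and zero covariance with \(Y\), hence is independent of \(Y\) by Gaussianity. A small sketch of ours (numpy assumed):

```python
import numpy as np

# Conditional decomposition of a joint Gaussian (X, Y):
#   X = K Y + X',  K = Sigma_XY Sigma_YY^{-1},
# with X' independent of Y and covariance the Schur complement.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
S = A @ A.T                          # a generic 5x5 covariance matrix
ix, iy = [0, 1], [2, 3, 4]           # split the coordinates into (X, Y)
Sxx, Syy = S[np.ix_(ix, ix)], S[np.ix_(iy, iy)]
Sxy = S[np.ix_(ix, iy)]
K = Sxy @ np.linalg.inv(Syy)         # linear map Y -> E(X | Y)
resid_cov = Sxx - K @ Sxy.T          # covariance of X' (Schur complement)
cross_cov = Sxy - K @ Syy            # Cov(X', Y): vanishes identically
```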
Lemma 2.1
Let \(\{\eta _v\}_{v\in V}\) be a GFF on a graph \(G=(V, E)\). For \(U\subset V\), define \(\tau _U\) as in (3). Then, for \(v\in V\), we have
2.2 Upper bound on the right tail
In this subsection, we prove that for absolute constants \(C, \lambda _0>0\)
Note that we could choose \(\lambda _0\) arbitrarily large by adjusting the constant \(C\) in Theorem 1.1. Let \(N = n \lceil \text{ e}^{\sqrt{\pi /8} (\lambda - \kappa - \alpha )} \rceil \), where \(\kappa \) is from (4) and \(\alpha > 0\) will be selected later. Denote by \(p = p_\alpha = \text{ e}^{-\sqrt{\pi /2} (\lambda - \kappa - \alpha )}\) and \(k = \lceil \text{ e}^{\sqrt{\pi /8} (\lambda - \kappa - \alpha )} \rceil \). It suffices to prove that \(\mathbb{P }(M_n - \mathbb E M_n \ge \lambda ) \le p\), and we prove it by contradiction. To this end, we assume that
and try to derive a contradiction.
Now, consider an \(N \times N\) 2D box \(A_N\) and let \(\{\eta _v : v\in A_N\}\) be a GFF on \(A_N\) with Dirichlet boundary condition. We partition \(A_N\) into \(k^2\) boxes of side length \(n\) and denote by \(\mathcal B \) the collection of these boxes. We abuse the notation \(\partial \mathcal B \) to denote the union of the boundary sets of the smaller boxes in \(\mathcal B \). For \(B\in \mathcal B \), we let \(\{g_v^B: v\in B\}\) be a GFF on \(B\) with Dirichlet boundary condition, where the fields \(\{\{g_v^B: v\in B\}\}_{B\in \mathcal B }\) are independent of each other and of \(\{\eta _v: v\in \partial \mathcal B \}\). Using the decomposition of Gaussian processes, we can write that for every \(v\in B\subseteq A_N\)
Denote by \(\phi _v = \mathbb E (\eta _v \mid \{\eta _u: u\in \partial \mathcal B \})\). We note that \(\phi _v\) is a convex combination of \(\{\eta _u: u\in \partial \mathcal B \}\) where the linear coefficients are deterministic. Thus,
Denote by \(M_B = \sup _{v\in B} g^B_v\). It is clear that \(\{M_B : B \in \mathcal B \}\) is a collection of i.i.d. random variables and each of them is distributed as \(M_n\). Therefore, by (6), we obtain that \(\mathbb{P }(M_B \ge \mathbb E M_n + \lambda ) \ge p\). Using independence, we get
Let \(\chi \in B \subseteq A_N\) such that \(g^B_\chi = \sup _{B\in \mathcal B } \sup _{v\in B} g^B_v\). We see that \(\chi \) is random (obviously) and independent of \(\{\phi _v : v\in \partial \mathcal B \}\) by (8). Therefore, we obtain
Recalling (4) and our definition of \(N\), we thus derive that
However, Theorem 1.3 implies that there exists a universal constant \(\alpha (1/4) > 0\) such that \(\mathbb{P }(M_n - \mathbb E M_n \ge \alpha (1/4)) < 1/4\) for all \(n\in \mathbb N \). Setting \(\alpha = \alpha (1/4)\), we arrive at a contradiction and thus show that (6) cannot hold, thereby establishing (5).
2.3 Lower bound on the right tail
In this subsection, we analyze the lower bound on the right tail and aim to prove that for absolute constants \(c, \lambda _0>0\)
To prove the above lower bound, we consider a box \(A_{n^{\prime }}\) of side length \(n^{\prime } = n \text{ e}^{-\beta \lambda }\) in the center of \(A_n\), where \(\beta > 0\) is to be selected (note that since \(\lambda \le (\log n)^{2/3}\), we have \(n^{\prime }\ge 1\), so the box is well defined). Let \(\{g_v: v\in A_{n^{\prime }}\}\) be a Gaussian free field on \(A_{n^{\prime }}\) with Dirichlet boundary condition, independent of \(\{\eta _v : v\in \partial A_{n^{\prime }}\}\). Analogous to (7), we can write that
where \(\phi _v = \mathbb E (\eta _v \mid \{\eta _u : u\in \partial A_{n^{\prime }}\})\) is a convex combination of \(\{\eta _u: u\in \partial A_{n^{\prime }}\}\). We wish to estimate the variance of \(\phi _v\). For this purpose, we need the following standard estimates on Green functions for random walks on the 2D lattice. See, e.g., [15, Proposition 4.6.2, Theorem 4.4.4] for a reference.
Lemma 2.2
For \(A\subset \mathbb Z ^2\), consider a random walk \((S_t)\) on \(\mathbb Z ^2\) and define \(\tau _{\partial A} = \min \{j\ge 0: S_j \in \partial A\}\) to be the hitting time of \(\partial A\). For \(u, v\in A\), let \(G_{\partial A}(u, v)\) be the Green function as in (2), and let \(a(\cdot , \cdot )\) be the potential kernel, a nonnegative function such that \(a(x, x) = 0\) and \(a(x, y) = \frac{2}{\pi } \log |x-y| + \frac{2\gamma + \log 8}{\pi } + O(|x-y|^{-2})\), where \(\gamma \) is Euler’s constant. Then, we have
By the preceding lemma, we infer that for any \(u, w\in \partial A_{n^{\prime }}\),
Since \(\phi _v\) is a convex combination of \(\{\eta _u: u\in \partial A_{n^{\prime }}\}\), this implies that for all \(v\in A_{n^{\prime }}\)
By Theorem 1.3, there exists an absolute constant \(\alpha (1/2)\) such that
Let \(\chi \in A_{n^{\prime }}\) such that \(g_\chi = \sup _{v\in A_{n^{\prime }}} g_v\). Recalling that \(|\mathbb E M_n - \mathbb E M_{n^{\prime }}| \le 2\sqrt{2/\pi } \beta \lambda + O(\log \beta \lambda )+\kappa \) and that \(\lambda \ge \lambda _0\), we obtain that
where the first inequality follows from (11) and the independence between \(\chi \) and \(\{\phi _v: v\in A_{n^{\prime }}\}\) (analogous to (8)), and in the second inequality \(c>0\) is a small absolute constant. Setting \(\beta = \sqrt{\pi /8}\), we obtain the desired estimate (10).
2.4 Upper bound on the left tail
In this subsection, we give the upper bound on the left tail of the maximum and prove the following for absolute constants \(C, c, \lambda _0>0\).
Let \(\alpha = \alpha (1/2)\) be defined as in (12). Denote by \(r = n \exp (-\sqrt{\pi /8}(\lambda - \alpha - \kappa -4))\) and \(\ell = n \exp (-\sqrt{\pi /8}(\lambda - \alpha - \kappa -4)/3)\). Assume that the bottom left corner of \(A_n\) is the origin \(o= (0, 0)\). Define \(o_i = (i\ell , 2r)\) for \(1\le i \le m = \lfloor n/2\ell \rfloor \). Let \(\mathcal C _i\) be a discrete ball of radius \(r\) centered at \(o_i\) and let \(B_i \subset \mathcal C _i\) be a box of side length \(r/8\) centered at \(o_i\). Let \(\mathfrak{C }=\{\mathcal{C }_i: 1\le i\le m\}\) and \(\mathcal B = \{B_i: 1\le i\le m\}\). Analogous to (7), we can write
where, for \(B = B_i\), \(\{g_v^B: v\in B\}\) is the projection onto \(B\) of a GFF on \(\mathcal C _i\) with Dirichlet boundary condition on \(\partial \mathcal C _i\); the fields \(\{\{g_v^B: v\in B\} : B\in \mathcal B \}\) are independent of each other and of \(\{\eta _v : v\in \partial \mathfrak C \}\) (here \(\partial \mathfrak C = \cup _{\mathcal C \in \mathfrak C } \partial \mathcal C \)), and \(\phi _v = \mathbb E (\eta _v \mid \{\eta _u: u\in \partial \mathfrak C \})\) is a convex combination of \(\{\eta _u: u\in \partial \mathfrak C \}\). For every \(B\in \mathcal B \), define \(\chi _B \in B\) such that
Recalling (4), we get that \(\mathbb E M_n - \mathbb E M_{r/8} \le \lambda - \alpha \) (here we assume \(\lambda _0\) is large enough such that \(n > r/8\)).
Using an analogous derivation of (9), we get that
where we used the definition of \(\alpha \) in (12). Let \(W = \{\chi _B: g_{\chi _B}^B \ge \mathbb E M_n - \lambda , B\in \mathcal B \}\). By independence, a standard concentration argument gives that for an absolute constant \(c> 0\)
It remains to study the process \(\{\phi _v: v\in W\}\). If there exists \(v\in W\) such that \(\phi _v > 0\), we have \(\sup _{u \in A_n} \eta _u > \mathbb E M_n - \lambda \). Thanks to independence, it then suffices to prove the following lemma.
Lemma 2.3
Let \(U\subset \cup _{B\in \mathcal B } B\) such that \(|U\cap B| \le 1\) for all \(B\in \mathcal B \). Assume that \(|U| \ge m/8\). Then, for some absolute constants \(C, c>0\)
To prove the preceding lemma, we need to study the correlation structure for the Gaussian process \(\{\phi _v: v\in U\}\).
Lemma 2.4
[15, Lemma 6.3.7] For all \(n\ge 1\), let \(\mathcal C (n) \subset \mathbb Z ^2\) be a discrete ball of radius \(n\) centered at the origin. Then there exist absolute constants \(c, C>0\) such that for all \(n\ge 1\) and \(x\in \mathcal C (n/4)\) and \(y\in \partial \mathcal C (n)\)
Write \(a_{v, w} = \mathbb{P }_v(\tau _{\partial \mathcal C } = \tau _w)\). The preceding lemma implies that \(c/r\le a_{v, w} \le C/r\) for all \(v\in B\subset \mathcal C \). Combined with Lemma 2.1, it follows that
Therefore, we have
In order to estimate the sum of Green functions, one could use Lemma 2.2. Alternatively, the computation can be avoided altogether by applying the next lemma.
Lemma 2.5
[15, Proposition 6.4.1] For all \(n\ge 1\), let \(\mathcal C (n) \subset \mathbb Z ^2\) be a discrete ball of radius \(n\) centered at the origin. Then for all \(k< n\) and \(x\in \mathcal C (n) {\setminus } \mathcal C (k)\), we have
Now, write
where \(\tau ^+_{\partial \mathcal C } = \min \{k \ge 1: S_k \in \partial \mathcal C \}\) is the first return time to \(\partial \mathcal C \). By the preceding lemma, we have
Therefore, by the Markov property we have
Combined with (16), this implies that
We also wish to bound the covariance between \(\phi _v\) and \(\phi _u\) for \(u, v\in U\). Assume \(u\in \mathcal C _i\) and \(v\in \mathcal C _j\) for \(i\ne j\). By (17), we see that
We incorporate the estimate for the above hitting probability in the next lemma.
Lemma 2.6
For any \(i\ne j\) and \(x\in \mathcal C _i\), we have
where \(C>0\) is a universal constant.
Proof
We consider the projection of the random walk to the horizontal and vertical axes, and denote them by \((X_t)\) and \((Y_t)\) respectively. Define
It is clear that \(\tau _{\partial A_n}\le T_{Y}\) and \(T_{X} \le \tau _{\partial \mathfrak C {\setminus } \partial \mathcal C _i}\). Write \(t^\star = r \ell \). Since the number of steps spent walking along the horizontal (vertical) axis up to time \(t\) follows a Binomial distribution with parameters \(t\) and \(1/2\), an application of the CLT yields that with probability at least \(1- \exp (-c t^\star )\) (here \(c>0\) is an absolute constant) the number of such steps is at least \(t^\star /3\) (and thus at most \(2t^\star /3\)). Combined with standard estimates for 1-dimensional random walks (see, e.g., [18, Theorem 2.17, Lemma 2.21]), it follows that for a universal constant \(C>0\)
Using the Markov property of the random walk, we see that
where \(\varepsilon <1\) is an absolute constant. This completes the proof.\(\square \)
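The binomial step in the proof above can be checked exactly for a moderate number of steps; this small sketch (ours, standard library only) computes \(\mathbb P (\text{Bin}(t,1/2) < t/3)\) and compares it with a Hoeffding-type exponential bound.

```python
import math

# Among t steps of the 2D walk, the number landing on the horizontal
# coordinate is Binomial(t, 1/2).  Exact lower-tail probability that
# fewer than t/3 of the steps are horizontal, for t = 60:
t = 60
tail = sum(math.comb(t, k) for k in range(t // 3)) / 2 ** t
# Hoeffding bound for a deviation of t/6 below the mean t/2:
bound = math.exp(-t / 18)
```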
Combining the preceding lemma and (18), we obtain that (here we assume that \(\lambda _0\) is large enough)
Therefore, we have the following bounds on the correlation coefficients \(\rho _{u, v}\):
At this point, we wish to apply Slepian’s [20] comparison theorem (see also, [11, 17]).
Theorem 2.7
If \(\{\xi _i\mathrm{\,:}\, 1\le i \le n\}\) and \(\{\zeta _i\mathrm{\,:}\, 1\le i\le n\}\) are two mean zero Gaussian processes such that
Then for all real numbers \(\lambda _1, \ldots , \lambda _n\),
The following is an immediate consequence.
Corollary 2.8
Let \(\{\xi _i \,\mathrm{:}\, 1\le i\le n\}\) be a mean zero Gaussian process such that the correlation coefficients satisfy \(0\le \rho _{i, j} \le \rho \le 1/2\) for all \(1\le i< j\le n\). Then,
Proof
Since we are comparing the \(\xi _i\)’s with zero, we may assume that \(\text{ Var} \xi _i = 1\) for all \(1\le i\le n\). Let \(\zeta _i = \sqrt{\rho } X + \sqrt{1-\rho } Y_i\), where \(X\) and the \(Y_i\)’s are i.i.d. standard Gaussian variables. It is clear that the processes \(\{\xi _i: 1\le i\le n\}\) and \(\{\zeta _i: 1\le i \le n\}\) satisfy (20). By Theorem 2.7, we obtain that
Since \(\{\zeta _i \le 0 \, \text{ for} \text{ all} \, 1\le i\le n\} \subseteq \{X \le -1/\sqrt{\rho }\} \cup \{Y_i \le 1/\sqrt{1-\rho } \, \text{ for} \text{ all} \, 1\le i\le n\}\), we have
Altogether, this completes the proof.\(\square \)
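The mechanism in this proof can be verified numerically for the equicorrelated comparison process \(\zeta _i = \sqrt{\rho }\,X + \sqrt{1-\rho }\,Y_i\) (normalized so that \(\text{ Var}\,\zeta _i = 1\)): conditioning on the common factor \(X\) reduces \(\mathbb P (\zeta _i \le 0 \text{ for all } i)\) to a one-dimensional integral, which we compare against the union bound from the containment of events. A sketch of ours, standard library only; the function names are hypothetical.

```python
import math

def Phi(x):
    """Standard Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_all_nonpositive(rho, n, grid=20000, lim=10.0):
    """P(max_i zeta_i <= 0) for zeta_i = sqrt(rho) X + sqrt(1-rho) Y_i,
    by integrating over the common factor X (midpoint Riemann sum)."""
    h = 2 * lim / grid
    s = 0.0
    for k in range(grid):
        x = -lim + (k + 0.5) * h
        dens = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
        s += dens * Phi(-math.sqrt(rho / (1 - rho)) * x) ** n * h
    return s

rho, n = 0.3, 50
exact = p_all_nonpositive(rho, n)
# Union bound: {all zeta_i <= 0} is contained in
# {X <= -1/sqrt(rho)} union {all Y_i <= 1/sqrt(1-rho)}.
bound = Phi(-1 / math.sqrt(rho)) + Phi(1 / math.sqrt(1 - rho)) ** n
```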
Proof of Lemma 2.3
Recall the definitions of \(r\), \(\ell \), and \(m\). The desired estimate follows from an application of the preceding corollary to \(\{\phi _v: v\in U\}\) together with the correlation bounds (19) (here we assume that \(\lambda \) is large enough that \(\rho _{u, v}\le 1/2\) for all \(u\ne v\)).\(\square \)
Combining Lemma 2.3 and (14), we finally complete the proof for the upper bound on the left tail as in (13).
2.5 Lower bound on the left tail
In this subsection, we study the lower bound on the left tail of the maximum and show that for absolute constants \(C , c , n_0, \lambda _0> 0\)
The proof consists of two steps: (1) we estimate the probability that \(\sup _{v\in B} \eta _v \le \mathbb E M_n - \lambda \) for a small box \(B\) in \(A_n\); (2) applying the FKG inequality for the GFF, we bootstrap the estimate from a small box to the whole box.
By Theorem 1.3, there exists an absolute constant \(\alpha ^* > 0\) such that
We first consider the behavior of GFF in a box of side length \(\ell \), where
Lemma 2.9
Let \(B\subseteq A_n\) be a box of side length \(\ell \). Then,
In order to prove the lemma, let \(B^{\prime }\) be a box of side length \(2\ell \) that has the same center as \(B\), and let \(\hat{B} = B^{\prime } \cap A_n\). Consider the GFF \(\{g_v: v\in \hat{B}\}\) on \(\hat{B}\) with Dirichlet boundary condition (on \(\partial \hat{B}\)). We wish to compare \(\{\eta _v: v\in B\}\) with \(\{g_v: v\in B\}\). For \(u, v\in B\), let
be the correlation coefficients of the two GFFs under consideration.
Lemma 2.10
For all \(u, v \in B\), we have \(\rho _{u, v} \ge \hat{\rho }_{u, v}\).
Proof
Since by definition \(\hat{B} \subset A_n\), we see that \(\tau _{\partial \hat{B}} \le \tau _{\partial A_n}\) deterministically for a random walk started from an arbitrary vertex in \(B\). Note that
Altogether, we obtain that
\(\square \)
We next compare the variances for the two GFFs.
Lemma 2.11
For all \(v\in B\), we have that
Proof
It suffices to compare the Green functions \(G_{\partial A_n}(v, v)\) and \(G_{\partial \hat{B}}(v, v)\). We can decompose them in terms of the hitting points to \(\partial \hat{B}\) and obtain that
Note that for \(w\in \partial \hat{B} \cap \partial A_n\), we have \(G_{\partial A_n}(w, v) = 0\). For \(w\in \partial \hat{B} {\setminus } \partial A_n\), we see that \(|v-w| \ge \ell \) by our definition of \(\hat{B}\). Therefore, by Lemma 2.2, we have
Since \(|v-w|\ge \ell \) for \(w\in \partial \hat{B} {\setminus } \partial A_n\), Lemma 2.2 gives that
where we used the assumption that \(\lambda \le (\log n)^{2/3}\). Altogether, we get that
completing the proof.\(\square \)
We will need the following lemma to handle some technical issues.
Lemma 2.12
For a graph \(G = (V, E)\), consider \(V_1\subset V_2 \subset V\). Let \(\{\eta ^{(1)}_v\}_{v\in V}\) and \(\{\eta ^{(2)}_v\}_{v\in V}\) be GFFs on \(V\) such that \(\eta ^{(1)}|_{V_1} = 0\) and \(\eta ^{(2)}|_{V_2} = 0\), respectively. Then for any number \(t \in \mathbb R \)
Proof
Note that the conditional covariance matrix of \(\{\eta ^{(1)}_v\}_{v\in U}\) given the values of \(\{\eta ^{(1)}_v\}_{v\in V_2{\setminus } V_1}\) corresponds to the covariance matrix of \(\{\eta ^{(2)}_v\}_{v\in U}\). This implies that
where on the right hand side \(\{\eta ^{(2)}_v: v\in U\}\) is independent of \(\{\eta ^{(1)}_u: u\in V_2{\setminus } V_1\}\). Write \(\phi _v = \mathbb E (\eta ^{(1)}_v \mid \{\eta ^{(1)}_u: u\in V_2 {\setminus } V_1\})\). Note that \(\phi _v\) is a linear combination of \(\{\eta ^{(1)}_u: u\in V_2 {\setminus } V_1\}\), and thus a mean zero Gaussian variable. By the above identity in law, we derive that
where we denote by \(\xi \in U\) the maximizer of \(\{\eta ^{(2)}_u: u\in U\}\), and the second step follows from the independence of \(\{\eta ^{(1)}_v\}\) and \(\{\phi _v\}\).\(\square \)
We are now ready to give
Proof of Lemma 2.9
Write \(b_v = \sqrt{\text{ Var} \eta _v / \text{ Var} g_v}\) for every \(v\in B\). By Lemma 2.11, we see that \(b_v \le 1 + (1/2+o(1))(\log (n/\ell )+O(1))/\log n\) for all \(v\in B\). Consider the Gaussian process defined by \(\xi _v = \eta _v/b_v\). By Lemma 2.10, we see that \(\{\xi _v: v\in B\}\) and \(\{g_v: v\in B\}\) satisfy the assumption in Theorem 2.7, and thus
Plugging in \(\gamma = \mathbb E M_{2\ell } + \alpha ^*\) and using (22) and Lemma 2.12 (we need Lemma 2.12 since the box \(\hat{B}\) might not be a square box of side length \(2\ell \) but only a subset of one), we obtain that
Also, by the definition of \(\ell \) and (4), as well as our assumption that \(\lambda \le (\log n)^{2/3}\), we see that
Therefore, for large constants \(\lambda _0, n_0\), we can deduce that
where we used Theorem 1.3 and the definition of \(\ell \) in (23). Altogether, we deduce that
\(\square \)
Now, we wish to apply the FKG inequality to estimate the probability that \(\sup _{v\in A_n} \eta _v \le \mathbb E M_n - \lambda \). Pitt [19] proved that the FKG inequality holds for Gaussian processes with nonnegative covariances. Since the covariances of the GFF are Green functions and hence nonnegative, the FKG inequality holds for the GFF.
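Pitt’s criterion can be illustrated in the simplest bivariate case (our sketch, standard library only): by Sheppard’s formula, \(\mathbb P (X\le 0, Y\le 0) = \tfrac{1}{4} + \arcsin (\rho )/(2\pi )\) for a standard Gaussian pair with correlation \(\rho \), which dominates \(\mathbb P (X\le 0)\mathbb P (Y\le 0) = \tfrac{1}{4}\) whenever \(\rho \ge 0\); this is positive association of the decreasing events \(\{X\le 0\}\) and \(\{Y\le 0\}\).

```python
import math

# Sheppard's formula for a standard bivariate Gaussian pair with
# correlation rho: P(X <= 0, Y <= 0) = 1/4 + arcsin(rho) / (2*pi).
# For rho >= 0 this is at least P(X <= 0) * P(Y <= 0) = 1/4, the
# simplest instance of the FKG-type inequality used in the text.
rhos = (0.0, 0.2, 0.5, 0.9)
joints = [0.25 + math.asin(r) / (2 * math.pi) for r in rhos]
```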
Partition \(A_n\) into a union of boxes \(\mathcal B \), each of side length at most \(\ell \), and choose \(\mathcal B \) so that \(|\mathcal B |\) is minimized; clearly, \(|\mathcal B | \le (\lceil n/\ell \rceil )^2\). Observing that the event \(\{\sup _{v\in B} \eta _v \le \mathbb E M_n - \lambda \}\) is decreasing for every \(B\in \mathcal B \), we apply the FKG inequality and Lemma 2.9 and conclude that
Recalling the definition of \(\ell \) as in (23), this completes the proof of (21).
References
Arguin, L.-P., Bovier, A., Kistler, N.: The genealogy of extremal particles of branching Brownian motion. Preprint, available at http://arxiv.org/abs/1008.4386
Bolthausen, E., Deuschel, J.-D., Giacomin, G.: Entropic repulsion and the maximum of the two-dimensional harmonic crystal. Ann. Probab. 29(4), 1670–1692 (2001)
Bolthausen, E., Deuschel, J.-D., Zeitouni, O.: Recursions and tightness for the maximum of the discrete, two-dimensional Gaussian free field. Electron. Commun. Probab. 16, 114–119 (2011)
Bramson, M.: Maximal displacement of branching Brownian motion. Commun. Pure Appl. Math. 31(5), 531–581 (1978)
Bramson, M.: Convergence of solutions of the Kolmogorov equation to travelling waves. Mem. Am. Math. Soc. 44(285), iv+190 (1983)
Bramson, M., Zeitouni, O.: Tightness of the recentered maximum of the two-dimensional discrete Gaussian free field. Commun. Pure Appl. Math. (2010)
Chatterjee, S.: Chaos, concentration, and multiple valleys. Preprint, available at http://arxiv.org/abs/0810.4221 (2008)
Daviaud, O.: Extremes of the discrete two-dimensional Gaussian free field. Ann. Probab. 34(3), 962–986 (2006)
Deuschel, J.-D.: Entropic repulsion of the lattice free field. II. The \(0\)-boundary case. Commun. Math. Phys. 181(3), 647–665 (1996)
Dynkin, E.B.: Markov processes and random fields. Bull. Am. Math. Soc. (N.S.) 3(3), 975–999 (1980)
Fernique, X.: Regularité des trajectoires des fonctions aléatoires gaussiennes. In: École d’Été de Probabilités de Saint-Flour, IV-1974, pages 1–96. Lecture Notes in Math., vol. 480. Springer, Berlin (1975)
Harris, S.C.: Travelling-waves for the FKPP equation via probabilistic arguments. Proc. R. Soc. Edinb. Sect. A 129(3), 503–517 (1999)
Kolmogorov, A., Petrovsky, I., Piskunov, N.: Étude de l’équation de la diffusion avec croissance de la quantité de matière et son application à un problème biologique. Bulletin Université d’État à Moscou, Bjul. Moskowskogo Gos. Univ. (1937)
Lalley, S.P., Sellke, T.: A conditional limit theorem for the frontier of a branching Brownian motion. Ann. Probab. 15(3), 1052–1061 (1987)
Lawler, G.F., Limic, V.: Random walk: a modern introduction, volume 123 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (2010)
Ledoux, M.: The concentration of measure phenomenon, volume 89 of Mathematical Surveys and Monographs. American Mathematical Society, Providence (2001)
Ledoux, M., Talagrand, M.: Probability in Banach spaces, volume 23 of Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)]. Springer, Berlin (1991). (Isoperimetry and processes)
Levin, D.A., Peres, Y., Wilmer, E.L.: Markov chains and mixing times. American Mathematical Society, Providence (2009). (with a chapter by James G. Propp and David B. Wilson)
Pitt, L.D.: Positively correlated normal variables are associated. Ann. Probab. 10(2), 496–499 (1982)
Slepian, D.: The one-sided barrier problem for Gaussian noise. Bell Syst. Tech. J. 41, 463–501 (1962)
Acknowledgments
We thank Tonci Antunovic for a careful reading of an early manuscript and valuable suggestions on exposition, Ofer Zeitouni for helpful communications, Yuval Peres for locating reference [10], and Sourav Chatterjee for helpful comments on an earlier version of the manuscript. We also warmly thank the anonymous referees for numerous useful comments, which led to a significant improvement in the exposition.
Additional information
Most of the work was carried out when the author was supported partially by Microsoft Research.
Ding, J. Exponential and double exponential tails for maximum of two-dimensional discrete Gaussian free field. Probab. Theory Relat. Fields 157, 285–299 (2013). https://doi.org/10.1007/s00440-012-0457-9