A random Schrödinger operator associated with the Vertex Reinforced Jump Process on infinite graphs

By Christophe Sabot and Xiaolin Zeng

Abstract

This paper concerns the vertex reinforced jump process (VRJP), the edge reinforced random walk (ERRW), and their relation to a random Schrödinger operator. On infinite graphs, we define a 1-dependent random potential extending that defined by Sabot, Tarrès, and Zeng on finite graphs, and consider its associated random Schrödinger operator . We construct a random function as a limit of martingales, such that when the VRJP is recurrent, and is a positive generalized eigenfunction of the random Schrödinger operator with eigenvalue , when the VRJP is transient. Then we prove a representation of the VRJP on infinite graphs as a mixture of Markov jump processes involving the function , the Green function of the random Schrödinger operator, and an independent Gamma random variable. On , we deduce from this representation a zero-one law for recurrence or transience of the VRJP and the ERRW, and a functional central limit theorem for the VRJP and the ERRW at weak reinforcement in dimension , using estimates of Disertori, Sabot, and Tarrès and of Disertori, Spencer, and Zirnbauer. Finally, we deduce recurrence of the ERRW in dimension for any initial constant weights (using the estimates of Merkl and Rolles), thus giving a full answer to the question raised by Diaconis. We also raise some questions on the links between recurrence/transience of the VRJP and localization/delocalization of the random Schrödinger operator .

1. Introduction

This paper concerns the Vertex Reinforced Jump Process (VRJP) and the Edge Reinforced Random Walk (ERRW) and their relation with a random Schrödinger operator associated with a stationary 1-dependent random potential (i.e., the potential is independent at distance larger than or equal to 2).

The VRJP is a continuous time self-interacting process introduced in Reference 5, investigated on trees in Reference 2, Reference 3 and on general graphs in Reference 20, Reference 21. We first recall its definition. Let be an undirected graph with finite degree at each vertex. We write if , and is an edge of the graph. We always assume that the graph is connected and has no trivial loops (i.e., vertex such that ). Let be a set of positive conductances, , . The VRJP is the continuous time process on , starting at time at some vertex , which, conditionally on the past at time , if , jumps to a neighbour of at rate

where

In Reference 20 Sabot and Tarrès introduced the following time change of the VRJP

where is the following increasing function

We call this process the VRJP in exchangeable time scale and denote by its law starting from the vertex . When the graph is finite, it is proved in Theorem 2 of Reference 20 that the VRJP in exchangeable time scale is a mixture of Markov jump processes. More precisely, there exists a random field such that is a mixture of Markov jump processes with jump rates from to

The law of is explicit; cf. Reference 20, Theorem 2 and the forthcoming Theorem B. It appears to be a marginal of a supersymmetric -field which had been investigated previously by Disertori, Spencer, and Zirnbauer (cf. Reference 9, Reference 10, Reference 24). As a consequence of this representation and of Reference 9, Reference 10, it was proved in Reference 20 that when the graph has bounded degree, there exists a real such that if for all , then the VRJP is positively recurrent; more precisely, is a mixture of positive recurrent Markov jump processes. When the graph is the grid , with , there exists such that if for all , the VRJP is transient. Hence, it shows a phase transition between recurrence and transience in dimension . The question of the representation of the VRJP on infinite graphs as a mixture of Markov jump processes is nontrivial, especially in the transient case. It is possible to prove such a representation by a weak convergence argument, following Reference 16, but it gives little information on the mixing law. In this paper we prove such a representation involving the Green function and a generalized eigenfunction of a random Schrödinger operator.
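The jump mechanism of the VRJP is easy to simulate on a finite graph. The sketch below is illustrative and not part of the paper's apparatus; it assumes the standard rates, under which the process at a vertex x jumps to a neighbour y at rate W_xy(1 + local time of y). While the walk sits at x, only the local time of x grows, so the rates out of x are constant and the holding time is exponential. All names are placeholders.

```python
import random

def simulate_vrjp(W, i0, t_max, seed=0):
    """Simulate the VRJP on a finite graph up to time t_max.

    W: dict vertex -> dict neighbour -> conductance W_xy (symmetric).
    Assumed rates: from x, jump to y ~ x at rate W_xy * (1 + local time of y).
    Returns the sequence of visited vertices and the values 1 + local time.
    """
    rng = random.Random(seed)
    L = {v: 1.0 for v in W}            # L_v = 1 + local time accumulated at v
    x, t, path = i0, 0.0, [i0]
    while True:
        rates = {y: w * L[y] for y, w in W[x].items()}
        total = sum(rates.values())
        hold = rng.expovariate(total)  # rates are constant while sitting at x
        L[x] += min(hold, t_max - t)   # cap the increment at the time horizon
        t += hold
        if t >= t_max:
            break
        u, acc = rng.random() * total, 0.0
        for y, r in rates.items():     # pick y with probability rates[y]/total
            acc += r
            if u <= acc:
                x = y
                break
        path.append(x)
    return path, L
```

The visited sequence is always a nearest-neighbour path, and, because of the capped increment, the accumulated local times sum exactly to `t_max`.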

Let us give a flavour of the main results of the paper in the case of the VRJP on with constant. We construct a positive 1-dependent random potential (i.e., two subsets of the ’s are independent if their indices are at least at distance 2), with marginals given by the inverse of an Inverse Gaussian law with parameters . This field is a natural extension to infinite graphs of the field defined by Sabot, Tarrès, and Zeng in Reference 22. We consider the random Schrödinger operator

where is the usual discrete (nonpositive) Laplacian and is the multiplication operator defined by . Hence, it corresponds to the Anderson model with a random potential which is not i.i.d. but only stationary and 1-dependent. When the VRJP is transient, we prove that there exists a positive generalized eigenfunction of with eigenvalue 0, stationary and ergodic. Let be defined by

where is the Green function (which happens to be well defined in an appropriate sense) and is an extra random variable independent of the field with law . We prove the following representation for the VRJP: the VRJP in exchangeable time scale starting from the point is a mixture of Markov jump processes with jump rates from to

When the VRJP is recurrent, the same representation is valid with . In fact, the function is almost surely the limit of a martingale, the limit being positive when the VRJP is transient and 0 when the VRJP is recurrent. It is remarkable that when the VRJP is recurrent it can be represented as a mixture with -measurable jump rates, but when the VRJP is transient it involves an extra independent Gamma random variable. This representation extends to infinite graphs the representation given in Reference 22 for finite graphs. A new feature appears in the transient case, where the generalized eigenfunction is involved in the representation. We suspect that recurrence/transience of the VRJP is related to localization/delocalization of the random Schrödinger operator at the bottom of the spectrum.

The representation (Equation 1.2) has several consequences on the VRJP and the ERRW. The ERRW is a reinforced process introduced by Coppersmith and Diaconis in 1986 (see section 2.5 for a definition). The recurrence of the two-dimensional ERRW is a famous open question raised by Diaconis; see Reference 4, Reference 12, Reference 17, Reference 18 for early references. Important progress has been made recently in the understanding of this process. In particular, in Reference 20, an explicit relation between the ERRW and the VRJP was stated, thus somehow reducing the analysis of the ERRW to that of the VRJP. In Reference 1, Reference 20, it was proved by rather different methods that the ERRW on any graph with bounded degree at strong enough reinforcement is positive recurrent. In Reference 8, it was proved that the ERRW is transient on , , at weak reinforcement.

The representation (Equation 1.2) allows us to complete the picture both in dimension 2 and in the transient regime. More precisely, we prove a functional central limit theorem for the ERRW and for the discrete time process associated with the VRJP in dimension at weak reinforcement, using the estimates of Reference 8, Reference 10. Using the polynomial estimate provided by Merkl and Rolles in Reference 17, we are able to prove recurrence of the ERRW on for all initial constant weights, hence giving a full answer to the question raised by Diaconis.

2. Statements of the results

2.1. Notation

We denote by (resp. ) the set of nonnegative (resp. positive) reals.

Let be an undirected, locally finite, connected graph without trivial loops or multiple edges. For , write if is a neighbor of . We write for the graph distance in , and for two subsets of , we define . We suppose given, for each edge , a positive real , which is understood as the conductance of the edge . In this case we call a graph with conductances.

Convention.

We adopt the notation for the sum on all undirected edges , counting each edge only once.

When is a real vector indexed by the vertices and , we write for the restriction of to , i.e., . When is a real function on and , , we write for the restriction of to , i.e., .

It will be convenient to define the continuous time processes that appear in the text on the same canonical space. In the rest of this paper we will denote by the space of càdlàg functions from to . The law of the VRJP in an exchangeable time scale, as defined in Equation 1.1 and starting from , will be denoted by , which is a probability on . The VRJP will always be defined on the canonical space, and will denote the canonical process defined by for .

Remark 1.

We do not allow multiple edges or trivial loops since it does not bring more generality to the VRJP. Indeed, from its definition, it follows that the VRJP on a graph with multiple edges and trivial loops has the same law as the VRJP on the graph where trivial loops are removed and multiple edges are replaced by a single edge by summing the conductances of the multiple edges. Similarly, the law on random potentials that appears in the rest of this paper can always be reduced to graphs without multiple edges or trivial loops. Nevertheless, in section 5 it simplifies notation to allow trivial loops.

2.2. Representation of the VRJP on infinite graphs

Define the operator by

We define below a probability distribution on potentials on the graph. A potential on the graph will generically be denoted . With the potential , we associate the Schrödinger operator on

where represents the operator of multiplication by the potential (or equivalently the diagonal operator with diagonal terms ).

We denote by

where means that the restriction of to is positive definite. Obviously, since when the restriction of is the real . We endow with its Borel -field, denoted .

The following statement extends the random potential defined in Reference 22, Theorem 1, to infinite graphs.

Proposition 1.

Let be a graph with conductances as defined in section 2.1. There exists a unique probability distribution defined on , such that for any finite subset and any :

In particular, on the probability space , we have the following properties:

1-dependence: If are such that , then the random variables and are independent.

Reciprocal inverse Gaussian marginals: For , the random variable has an inverse Gaussian distribution with parameter where .
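The marginals of Proposition 1 can be probed numerically edge by edge. The snippet below is an illustration only: it samples a reciprocal inverse Gaussian variable via numpy's `wald` sampler, with placeholder parameters `(mu, lam)` since the displayed parameter of the proposition was lost in extraction. The classical identity E[1/X] = 1/mu + 1/lam for X ~ IG(mu, lam) gives a quick sanity check.

```python
import numpy as np

def sample_reciprocal_ig(mu, lam, n, seed=0):
    """Draw n samples of 1/X with X ~ InverseGaussian(mean=mu, shape=lam).

    Illustrates a 'reciprocal inverse Gaussian' marginal; the parameters
    (mu, lam) are placeholders, not the ones of Proposition 1.
    """
    rng = np.random.default_rng(seed)
    x = rng.wald(mu, lam, size=n)   # numpy's `wald` samples the IG law
    return 1.0 / x
```

For example, with mu = 1 and lam = 2 the Monte Carlo mean of the reciprocal should concentrate near 1/mu + 1/lam = 1.5.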

Remark 2.

On finite graphs, the density of is explicit; cf. Reference 22, Theorem 1, and Theorem A below.

In the rest of this paper, the probability space will be considered as the canonical space of random potentials on the graph. We write for the expectation with respect to . We will introduce several random variables on this probability space and adopt the following notation: when is a measurable function, we will write for the associated random variable and for its realization on the potential . In particular, we will write for the random Schrödinger operator defined above. By abuse of notation, we sometimes consider for or for as random variables (more precisely, the random variables are and ).

Definition 1.

Let be an increasing sequence of finite connected subsets of such that

For , we define as the sub--field generated by the random variable . For and , we define a random operator by

For and , we define a random function as the unique solution of the equation

By definition, the random variables and are -measurable.

The fact that there is a unique solution to the equation defining is elementary; see the proof in section 4.2.

Our main theorem is the following.

Theorem 1.
(i)

For all , the sequence of random variables is nondecreasing and converges a.s. to

Moreover, -almost surely, and the limit does not depend on the choice of the sequence of subsets .

(ii)

Under the probability , for all , is a positive -martingale. It converges a.s. to a random variable , such that a.s., and the limit does not depend on the choice of the increasing sequence . Moreover, the quadratic variation of the vectorial martingale is given a.s. by

In particular, is bounded in if and only if .

(iii)

For any real and , we define

For and , denote by the law of the Markov jump process which starts at and jumps from to at rate

Then the VRJP in an exchangeable time scale (defined in section 2.1), with conductances and starting from , is a mixture of these Markov jump processes and has law

(iv)

For -almost all , all and all , we have

the Markov process is transient if and only if for all

the Markov process is recurrent if and only if for all .

N.B.

Note that is well defined for -almost all and all by (i) and (ii).

Notation.

We denote by

the probability distribution which appears in Equation 2.4, under which is -distributed and independent of . In general, we simply write for and consider it as a random variable on the probability space .

Remark 3.

When the VRJP is recurrent, then , and the representation of the VRJP Equation 2.4 only involves the variable and not .

Remark 4.

The representation (Equation 2.3) extends to infinite graphs the representation provided in Reference 22, Theorem 2, for finite graphs. An interesting new feature appears in the transient regime, where the generalized eigenfunction and the extra gamma random variable enter the expression of . As it appears in the proof, the eigenfunction can be interpreted as the mixing field of a VRJP starting from infinity.

Denote by the first return time to by . The point Theorem 1(iv) is in fact a consequence of the following more precise assertion.

Proposition 2.

We have, for -almost all , for all and , ,

where . In particular, if and only if

Using Doob’s transform, the law of the process conditioned on the event or can be computed, and it takes a rather nice form, both under the law and under the law for -almost all . We provide these formulae in section 7.

A natural question that emerges from point Theorem 1(iv) is that of a 0-1 law for transience/recurrence. We provide an answer below in the case of vertex transitive graphs with conductances. We say that is vertex transitive if the group of automorphisms of that leaves invariant is transitive on vertices. In particular, this is the case for the cubic lattice with constant conductances . Denote by the group of automorphisms that leave invariant.

Proposition 3.

If is vertex transitive and is infinite, then under the distribution , the random variables , , are stationary and ergodic for the group of transformations . Moreover, the VRJP is either recurrent or transient, i.e.,

or

In the first case, for all , a.s.; in the second case, for all , a.s.

N.B.

The action of on is for .

2.3. Relation with random Schrödinger operators

Let us now relate Theorem 1 to the properties of the random Schrödinger operator associated with the random potential under the law , defined in Equation 2.1 and Proposition 1.

Theorem 2.

Under , the following hold:

(i)

The spectrum of is a.s. included in .

(ii)

The operator is the inverse of in the following sense: for all a.s.

(iii)

We have a.s. for all .

(iv)

In the case of the grid and when is constant, and are stationary ergodic for the spatial shift. Moreover, in the transient case, is a.s. a positive generalized eigenfunction with eigenvalue in the sense that and has at most polynomial growth. More precisely, for all and , a.s. there exists a random integer such that

2.4. Functional central limit theorem

We denote by the discrete time process that describes the successive jumps of . From Theorem 1(iii), under , is a mixture of Markov chains starting from and with conductances

under the probability distribution .

We prove below a functional central limit theorem for the discrete time VRJP on , , at weak reinforcement (i.e., for large enough).

Theorem 3.

Consider the cubic graph , , with constant conductances . Denote

There exists such that if , the discrete time VRJP satisfies a functional central limit theorem (i.e., under , for any real and , converges in law (for the Skorokhod topology) to a -dimensional Brownian motion with nondegenerate isotropic diffusion matrix , for some ).

2.5. Consequences for the ERRW

The ERRW is a famous discrete time process introduced in 1986 by Coppersmith and Diaconis Reference 4, Reference 12.

Endow the edges of the graph with some positive weights . Let be a random process that takes values in , and let be the filtration of its past. For any , , let

be the number of crossings of the (undirected) edge up to time plus the initial weight .

Then is called the ERRW with starting point and weights , if and, for all ,

We denote by the law of the ERRW starting from the initial vertex . We will assume that the ERRW is defined on the canonical space, i.e., that is the canonical process on .
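For concreteness, here is a minimal simulation of the ERRW as just defined: at each step the walk crosses an incident edge with probability proportional to that edge's current weight (past crossings plus initial weight), and the crossed edge's weight increases by 1. This is an illustrative sketch; all names are placeholders.

```python
import random

def simulate_errw(edges, a, i0, n_steps, seed=0):
    """Simulate the edge reinforced random walk of section 2.5.

    edges: list of undirected edges (x, y); a: dict edge -> initial weight.
    Returns the visited vertices and the final edge weights.
    """
    rng = random.Random(seed)
    weight = {frozenset(e): a[e] for e in edges}
    nbrs = {}
    for x, y in edges:
        nbrs.setdefault(x, []).append(y)
        nbrs.setdefault(y, []).append(x)
    x, path = i0, [i0]
    for _ in range(n_steps):
        ws = [weight[frozenset((x, y))] for y in nbrs[x]]
        y = rng.choices(nbrs[x], weights=ws)[0]
        weight[frozenset((x, y))] += 1.0   # reinforcement of the crossed edge
        x = y
        path.append(x)
    return path, weight
```

Since each step adds exactly 1 to one edge weight, the total weight after n steps equals the initial total plus n, a simple invariant to check.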

Important progress has been made in the last ten years in the understanding of this process; cf., e.g., Reference 1, Reference 8, Reference 17, Reference 20. In particular, it was proved in 2012 by Sabot and Tarrès in Reference 20 and by Angel, Crawford, and Kozma in Reference 1 that, on any graph with bounded degree at strong reinforcement (i.e., for for some fixed ), the ERRW is a mixture of positive recurrent Markov chains. It was proved by Disertori, Sabot, and Tarrès in Reference 8 that on , , the ERRW is transient at weak reinforcement; i.e., for for some fixed .

From Theorem 1 of Reference 20, we know that the ERRW has the law of a VRJP in independent conductances. More precisely, consider as independent random variables with gamma distribution with parameters . Consider the VRJP in conductances and its underlying discrete time process . Then the annealed law of (after expectation with respect to ) is that of the ERRW with initial weights . Hence, we can apply Theorem 1 at fixed and then integrate on . We thus consider the joint law on obtained from after randomization with respect to . More formally, let be the probability distribution on such that under if the random variables are independent with gamma distribution with parameters , then is the probability distribution on such that for any bounded measurable test function ,

In the rest of this paper, , will denote the corresponding marginal distributions, and is the marginal. (By definition, is supported on the set of such that .) From Theorem 1 we see that the ERRW starting from is a mixture of reversible Markov chains with conductances

where is defined in Theorem 1 and are distributed according to . More formally, if denotes the law of the Markov chain starting at and with conductances , then

An important point is that we keep the 1-dependence of the field , after taking expectation with respect to .
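The randomization of the conductances can already be seen on the very first step. With independent conductances W_e ~ Gamma(a_e, 1) on the edges incident to the starting point (the form of randomization described above, cf. Reference 20, Theorem 1), the VRJP's first jump crosses edge e with probability W_e / sum(W), and the classical Dirichlet aggregation identity gives E[W_e / sum(W)] = a_e / sum(a), the first-step law of the ERRW with weights a. The numerical values below are illustrative.

```python
import numpy as np

# Monte Carlo check of E[W_e / sum(W)] = a_e / sum(a) for independent
# W_e ~ Gamma(a_e, 1), here for a starting vertex with three incident edges.
rng = np.random.default_rng(0)
a = np.array([1.5, 0.7, 2.3])                  # illustrative initial weights a_e
W = rng.gamma(shape=a, size=(200_000, 3))      # Gamma(a_e, 1) conductances
first_step = W / W.sum(axis=1, keepdims=True)  # jump probabilities, sample-wise
annealed = first_step.mean(axis=0)             # annealed first-step law
```

Of course this only matches the one-step marginal; the full correspondence of Reference 20, Theorem 1 concerns the whole trajectory.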

Proposition 4.

Under , is -dependent: if are such that , then and are independent.

Proof.

Indeed, from Proposition 1, the Laplace transform of under only involves the conductances for or in . This implies that, if , the joint Laplace transform of and is still the product of Laplace transforms even after taking expectation with respect to the random variables , i.e., under .

This yields a counterpart of Proposition 3 for the ERRW.

Proposition 5.

Assume is vertex transitive with automorphism group , and infinite. Then under the distribution , the random variables , , , are stationary and ergodic for the group of transformations . Moreover, the ERRW is either recurrent or transient, i.e.,

or

In the first case, for all , a.s.; in the second case, for all , a.s.

N.B.

The action of on and is , for .

Remark 5.

In Reference 16 it was proved on infinite graphs that the ERRW is a mixture of Markov chains, obtained as a weak limit of the mixing law of the ERRW on finite approximating graphs. The difference in the representation we give in (Equation 2.9) is that the random variables , are obtained as almost sure limits and hence are measurable functions of the random variables . This yields stationarity and ergodicity, which are the key ingredients in the 0-1 law, and in the forthcoming Theorems 4 and 5.

Remark 6.

It seems that this 0-1 law is new, both for the VRJP and the ERRW. In Reference 16 it was proved that if the ERRW comes back with probability 1 to its starting point, then it visits all points infinitely often a.s., which is a weaker result. This was proved using the representation of the ERRW as a mixture of Markov chains of Reference 16. (A short proof of this last result can also be given; cf. Reference 23.)

We now give a counterpart of Theorem 3 for the ERRW. It is a consequence of Theorem 1 and of the delocalization result proved by Disertori, Sabot, and Tarrès in Reference 8.

Theorem 4.

Consider the cubic graph , , with constant weights . Denote

There exists such that if , the ERRW satisfies a functional central limit theorem (i.e., under , for any real and , converges in law (for the Skorokhod topology) to a -dimensional Brownian motion with nondegenerate isotropic diffusion matrix , for some ).

Finally, we can deduce recurrence of the ERRW in dimension 2 from Theorem 1, Proposition 5, and the estimates obtained by Merkl and Rolles in Reference 15, Reference 17.Footnote 1

1

We are grateful to Franz Merkl and Silke Rolles for a useful discussion on that subject.

Theorem 5.

The ERRW on with constant weights is a.s. recurrent, i.e.,

In Reference 15, Reference 17, by a Mermin–Wagner-type argument, Merkl and Rolles proved a polynomial decrease of the form

for some constants , , depending only on , and where is the conductance at the site for the mixing measure of the ERRW, uniformly for a sequence of finite approximating graphs. When , it does not give by itself enough information to prove recurrence. It was used in the case of a diluted two-dimensional graph to prove positive recurrence at strong reinforcement. The extra information given by the representation (Equation 2.9) and the stationarity of implies that the polynomial estimate (Equation 2.10) is incompatible with and hence is incompatible with transience. Detailed arguments are provided in section 8.

Remark 7.

We expect similarly that the two-dimensional VRJP with constant conductances is recurrent. This would be implied by an estimate of the type (Equation 2.10) for the mixing field of the VRJP, which is still not available. More precisely, we can see from the proof of Theorem 5 in section 8 that recurrence of the two-dimensional VRJP would be implied by Theorem 1, Proposition 3, and an estimate of the type

for and a positive function such that , where is the mixing field of the VRJP starting from 0 (cf. Theorem B) on finite boxes with wired boundary condition as in section 4.2. We learned from Kozma and Peled that they have a proof of such an estimate.

2.6. Open questions

The most important question certainly concerns the relation between the properties of the VRJP and the spectral properties of the random Schrödinger operator . For example, on with constant weights , is recurrence/transience of the VRJP related to the localized/delocalized regimes of ? A more precise question would be: does the transient regime of the VRJP coincide with the existence of extended states, at least at the bottom of the spectrum of ? It might at first seem inconsistent to expect extended states at the bottom of the spectrum since the Anderson model with i.i.d. potential is expected to be localized at the edges of the spectrum (a fact which is proved in several cases). But this localization is a consequence of Lifshitz tails, and there are good reasons to expect that Lifshitz tails fail for the potential , which is not i.i.d. but 1-dependent. Indeed, the bottom of the spectrum of is 0, and it does not coincide with the minimum of the support of the distribution of translated by the spectrum of , as is the case for an i.i.d. potential. In fact, on a finite set, the minimum of the spectrum is reached on the set , which is a set of codimension 1; hence it is “big”.

Another natural question concerns the uniform integrability of the martingale . Let us ask a more precise question: Is it true (at least for with constant weights) that transience of the VRJP implies that the martingale is bounded in ? It is quite natural to expect such a property from relation (Equation 5.2) since appears to be the quadratic variation of . This would have several consequences. First, it would imply that in dimension , the VRJP satisfies a functional central limit theorem as soon as the VRJP is transient, by the same argument as that of the proof of Theorem 3. It would also imply directly that the VRJP is recurrent as soon as the reversible Markov chain in conductances is recurrent, if the group of automorphisms of is transitive. Indeed, assume that the property is true and the VRJP is transient. By Theorem 1, the discrete time process would be represented as a mixture of reversible Markov chains with conductances . From Proposition 2 applied to , we have that

Hence, is equivalently a mixture of Markov chains with conductances

But is stationary and ergodic; if is square integrable, we would have

for some constant . Usual arguments imply that the Markov chain in conductance is recurrent if the Markov chain in conductances is recurrent (cf., e.g., Reference 14, Exercise 2.75). We arrive at a contradiction.

2.7. Organization of the paper

In section 3 we gather several results in the case of finite graphs; in particular, we recall the main results of Reference 22. In section 4 we define the important notion of restriction with wired boundary condition and the compatibility property. Section 5 is the key step in the paper, where the martingale property is proved. In section 6 we prove Theorem 1, Propositions 2 and 3, and Theorem 2. In section 7 we provide extra computations of -transforms. In section 8 we prove recurrence of the ERRW in dimension 2 for all initial constant weights. In section 9 we prove functional central limit theorems for the VRJP and the ERRW, Theorems 3 and 4.

3. The random potential on finite graphs

In this section we assume that is a finite graph and gather several results in this case. Recall that every undirected edge is labeled with a positive conductance . In the case of a finite graph, the Schrödinger operator defined in Equation 2.1 can be represented by the -matrix given by

and the set defined in Equation 2.2 is the set of potentials such that is positive definite.
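On a finite graph the operator is just a symmetric matrix, and membership in the set of Equation 2.2 can be tested by a Cholesky factorization. The sketch below assumes the convention of Reference 22 (diagonal entries 2*beta_i, off-diagonal entries -W_ij); since the display of Equation 3.1 was lost here, this convention is an assumption to be checked against the paper, and all names are illustrative.

```python
import numpy as np

def schrodinger_matrix(W, beta):
    """Matrix of the operator H_beta on a finite graph.

    Assumed convention (cf. Reference 22): (H)_{ii} = 2*beta_i and
    (H)_{ij} = -W_{ij} for i != j.
    W: symmetric conductance matrix with zero diagonal; beta: potential.
    """
    return 2.0 * np.diag(np.asarray(beta, dtype=float)) - W

def is_in_D(W, beta):
    """Test whether beta lies in the set where H_beta is positive definite."""
    try:
        np.linalg.cholesky(schrodinger_matrix(W, beta))  # succeeds iff H > 0
        return True
    except np.linalg.LinAlgError:
        return False
```

On the triangle with unit conductances, for instance, a constant potential beta = 2 gives a positive definite matrix, while beta = 0.5 does not.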

3.1. The probability distribution on finite graphs, and its relation to the VRJP

We recall Theorem 1 from Reference 22, which defines the probability distribution by its density on any finite graph.

Theorem A (Reference 22, Theorem 1, Definition 1, Proposition 1).

Let be a finite graph with conductances. The measure below is a probability on :

with , and where means that is positive definite.

The Laplace transform of the probability distribution is given, for all , by

Moreover, under we have the following properties:

1-dependence: If are such that , then the random variables and are independent.

Reciprocal inverse Gaussian marginals: For , the random variable has an inverse Gaussian distribution with parameter where .

If we apply formula Equation 3.2 to such that for a subset , we find the expression of Proposition 1. Hence, it implies Proposition 1 in the case of a finite graph.

The field is closely related to the VRJP, as shown in the next two theorems. In Reference 20 it is shown that the VRJP in an exchangeable time scale defined in section 2.1 is a mixture of Markov jump processes; more precisely,

Theorem B (Reference 20, Theorem 2).

Assume finite. The following measure is a probability distribution on the set ,

where and , where the sum is over , the set of spanning trees of the graph .

For , we denote by the law of the Markov jump process starting at vertex and with jump rates from to given by

The law of the VRJP in an exchangeable time scale starting at is a mixture of Markov jump processes, with mixing law given by

Remark 8.

By the matrix-tree theorem, is any diagonal minor of the matrix with coefficients
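The matrix-tree identity of Remark 8 (whose display was lost in extraction) is easy to verify numerically. The sketch below uses the classical weighted form of the theorem: any diagonal minor of the Laplacian with L_ii = sum_j W_ij and L_ij = -W_ij equals the sum over spanning trees of the product of edge weights.

```python
import numpy as np

def spanning_tree_sum(W):
    """Sum over spanning trees T of prod_{e in T} W_e, via the weighted
    matrix-tree theorem: it equals any diagonal minor of the Laplacian.
    W: symmetric conductance matrix with zero diagonal.
    """
    L = np.diag(W.sum(axis=1)) - W
    return float(np.linalg.det(L[1:, 1:]))   # delete row and column 0
```

For the triangle with edge weights 1, 2, 3, the three spanning trees contribute 1*2 + 1*3 + 2*3 = 11, which the minor reproduces.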

Remark 9.

The probability measure appeared previously to Reference 20 in a rather different context in the work of Disertori, Spencer, and Zirnbauer Reference 10. In particular, the fact that is a probability measure was proved there as a consequence of a Berezin identity applied to a supersymmetric extension of that measure.

On finite graphs, the random environment of the previous theorem can be represented thanks to the Green function of the random potential distributed according to . Let us first recall Proposition 1 of Reference 22.

Proposition A (Reference 22, Proposition 1).

Assume finite. For , we denote by

the Green function of the Schrödinger operator . For , , we define by

For , is the unique solution of the equation

In particular, the function is -measurable. Moreover, for all ,

As usual, we simply denote by and the associated random variables on the probability space . Let us now recall Reference 22, Theorem 3.

Theorem C (Reference 22, Theorem 3).

Assume finite. For all , under the probability ,

(i)

the random field has the distribution of Theorem B;

(ii)

has a gamma distribution with parameters ;

(iii)

is independent of , hence it is independent of the field .

Remark 10.

Here we only consider the VRJP with initial local time . In fact, the above correspondence between and the VRJP still holds for the process starting with any positive local times ; in that case, there is a corresponding density , defined in Reference 22, Definition 1 and Theorem 3. We choose here to normalize the initial local time to 1, since this is equivalent to the general case by a change of time and ; see Reference 22, Appendix B.

Combining Theorem B and Theorem C gives a representation of the VRJP in an exchangeable time scale starting from different points in terms of the probability on random potentials . We state this representation below.

Corollary 1.

Assume finite. For , define as the law of the Markov jump process starting from and with jump rates from to given by

Then the VRJP in exchangeable time scale is a mixture of the Markov jump processes

3.2. Representation as a sum on paths

We call a path in from to a finite sequence in such that and , for . The length of is defined by . We denote by the collection of paths in from to and by the collection of paths in from to such that . For a path and for , we set

For the trivial path , we define , , . (Note that these definitions make sense also in the case of infinite graphs.)

The following representation of the Green function as a sum on paths will be convenient.

Proposition 6.

Assume that is finite. For all , we have, with the notation of Theorem A,

Proof.

Write for the diagonal matrix with as diagonal coefficients, then . Since , by the Perron–Frobenius theorem, we have that , where is the spectral radius of . Hence, we can write the following convergent expansion,

which exactly corresponds to Equation 3.9.

For the expansion of , note first that . A path in can be cut at its first visit to , turning it into the concatenation of a path in and a path in , and this operation is bijective. It implies that

hence the result.
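The convergent expansion in the proof above can be checked numerically. Assuming, as before, the matrix convention H = D - W with D = 2*diag(beta) (cf. Reference 22; the paper's own displays were lost here), the Green function G = H^{-1} equals the Neumann series sum over n of (D^{-1} W)^n D^{-1}, whose (i, j) entry is exactly a sum over paths from i to j of (product of W along the path) / (product of 2*beta at the visited vertices).

```python
import numpy as np

# Numerical illustration of the path expansion of the Green function,
# under the assumed convention H = 2*diag(beta) - W.
rng = np.random.default_rng(1)
n = 5
W = rng.uniform(0.1, 1.0, size=(n, n))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
beta = W.sum(axis=1)              # makes H strictly diagonally dominant, H > 0
D = 2.0 * np.diag(beta)
H = D - W
G = np.linalg.inv(H)              # the Green function
Dinv = np.linalg.inv(D)
series, term = np.zeros_like(G), Dinv
for _ in range(200):              # truncate the path sum at length 200
    series += term                # adds the term of (D^{-1} W)^n D^{-1}
    term = Dinv @ W @ term
```

With this choice of beta the spectral radius of D^{-1} W is at most 1/2, so the truncation error after 200 terms is far below floating-point precision.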

3.3. A priori estimates on

The following proposition is borrowed from Reference 10, Lemma 3. For convenience, we give a shorter proof of that estimate based on spanning trees instead of fermionic variables, following the proof of the corresponding result for the ERRW; cf. Reference 8, Lemma 7.

Proposition 7.

Let be a finite graph with conductances. Fix a vertex . Let and let be distinct undirected edges such that for all . Then

where is the probability distribution defined in Theorem B.

Proof.

Recall that is defined by

with and where the sum is on spanning trees.

Let ; i.e., is equal to on the edges and is unchanged on the other edges. By assumption, we have on the edges, and for all spanning trees , since edges appear at most once,

which implies

From the expression of , we deduce that

It implies that

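Two ingredients of this proof can be checked by brute force on a small example: the normalizing constant is a weighted sum over spanning trees, and since every edge appears at most once in a spanning tree, scaling one conductance by a factor at least 1 multiplies the sum by at most that factor. The 4-vertex graph and weights below are our own illustrative choices; the cross-check against a cofactor of the weighted Laplacian is the classical matrix-tree theorem, used here only as a sanity check.

```python
# Brute-force check on a made-up 4-vertex graph: (a) the sum over spanning
# trees of the product of edge weights equals the cofactor of the weighted
# Laplacian (Kirchhoff's matrix-tree theorem); (b) each edge appears at
# most once per tree, so scaling one conductance by lam >= 1 increases the
# sum by at most a factor lam (the monotonicity step of the proof).
import math
from itertools import combinations

edges = {(0, 1): 1.0, (0, 2): 2.0, (1, 2): 0.5, (1, 3): 1.5, (2, 3): 1.0}
n = 4

def is_spanning_tree(subset):
    """n-1 edges with no cycle necessarily span all n vertices."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in subset:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False   # adding this edge would create a cycle
        parent[ra] = rb
    return True

def tree_sum(w):
    """Sum over spanning trees T of prod_{e in T} w_e."""
    return sum(math.prod(w[e] for e in T)
               for T in combinations(w, n - 1) if is_spanning_tree(T))

def laplacian_cofactor(w):
    """Determinant of the weighted Laplacian with row/column 0 removed."""
    L = [[0.0] * n for _ in range(n)]
    for (a, b), c in w.items():
        L[a][a] += c
        L[b][b] += c
        L[a][b] -= c
        L[b][a] -= c
    (a, b, c), (d, e, f), (g, h, i) = [row[1:] for row in L[1:]]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

lam = 3.0
w_scaled = dict(edges)
w_scaled[(0, 1)] = lam * edges[(0, 1)]   # increase one conductance

print(tree_sum(edges), laplacian_cofactor(edges))
print(tree_sum(w_scaled) <= lam * tree_sum(edges))   # True
```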
4. The wired boundary condition and Kolmogorov extension to infinite graphs

4.1. Restriction with wired boundary condition

Our objective is to extend the relations between the VRJP and the field to the case of infinite graphs. To this end, we need an appropriate boundary condition, which turns out to be the wired boundary condition.

Definition 2.

Let be a connected graph with finite degree at each site, and let be a strict finite subset of . We define the restriction of to with wired boundary condition as the graph , where is an extra point and

If is a set of positive conductances, we define as the set of restricted conductances by

Remark 11.

Intuitively, this restriction corresponds to identifying all points in with a single point and deleting the edges connecting points of . The new weights are obtained by summing the weights of the edges identified by this procedure.

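In code, the restriction of Definition 2 is a simple fold over the edge set. The sketch below uses our own ad hoc notation (conductances as a dict keyed by vertex pairs, and a distinguished extra point named `delta`): every vertex outside the kept set is identified with `delta`, edges lying entirely outside are deleted, and conductances of edges that become parallel are summed.

```python
# A sketch of the wired restriction of Definition 2 (notation ours):
# W is a dict {(x, y): conductance} over undirected edges, V1 the kept
# vertex set, delta the extra point identifying everything outside V1.

def wired_restriction(W, V1, delta="delta"):
    out = {}
    for (x, y), c in W.items():
        a = x if x in V1 else delta
        b = y if y in V1 else delta
        if a == delta and b == delta:
            continue                      # edge entirely outside V1: deleted
        key = tuple(sorted((a, b), key=str))
        out[key] = out.get(key, 0.0) + c  # sum conductances of parallel edges
    return out

# Example: keep V1 = {0, 1}. The edges (0, 2) and (0, 3) are identified
# into a single edge (0, delta) whose conductance is the sum 0.5 + 4.0,
# and the edge (2, 3), entirely outside V1, disappears.
W = {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 3.0, (0, 3): 4.0, (0, 2): 0.5}
print(wired_restriction(W, {0, 1}))
```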
The following lemma is fundamental and is the justification for the choice of this notion of restriction.

Lemma 1.

Let be a finite graph with conductances and let be the associated distribution on random potentials defined in Theorem A. Let be a strict subset of , and let be the restriction of to with wired boundary condition. Let be the distribution of the random potential associated with . We denote by and the marginal distributions on of and , respectively. Then

Remark 12.

Note that there is no such compatibility relation with the more usual notion of restriction of graph. The wired boundary condition is fundamental and in fact will be responsible for the extra gamma random variable that appears in the representation of the VRJP on the infinite graph.

Proof.

Taking such that in Theorem A, we get that

Applying Theorem A to the graph with such that , we get

By the definition of , these Laplace transforms are equal; hence the marginal distributions are equal.

4.2. Kolmogorov extension: Proof of Proposition 1 and Definition 1

Let be a connected infinite graph with finite degree at each site, with conductances . Recall that is an increasing sequence of finite strict subsets of that exhausts ; i.e.,

Let be the restriction of to with wired boundary condition, and let be the restricted conductances. By construction, if , then is the restriction of to with wired boundary condition. Lemma 1 implies that the sequence of marginal distributions is a compatible sequence of probabilities. By the Kolmogorov extension theorem, there exists a probability measure such that

for all integers . By Theorem A, is supported by the set of potentials such that is positive definite for all integers , hence by . It also implies the other properties of .

The solution of the equation defining in Definition 1 exists and is unique since it is equivalent to and

Since is positive definite for , it defines uniquely.

4.3. Coupling lemma: Definition of and relations with , , and

Consider the probability defined in Equation 2.5. It will be convenient to couple the measure and the measure in the following way.

Lemma 2.

For and , we define by

Then, and under , is distributed according to .

Let be the Schrödinger operator associated with , and potential . Let be its Green function. Then,

and, for all ,

where is the field defined in Proposition A for the graph and with the potential .

As usual, we often omit the subscript and write , , , and consider them as random variables on under .

Proof.

Let and . In this proof, denote by the vector defined by

Then, by definition of and , we have and

Since is a vector with positive coefficients, it follows from general results on symmetric -matrices (see Reference 19, Theorem 2.7, p. 141) that . Moreover, it implies that , hence that and .

Finally, by Theorem C, the law of is the same under and , and since is a bijection by Proposition A, it implies that under , has law .
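The M-matrix fact invoked in this proof (Reference 19, Theorem 2.7) can be checked numerically on toy data: if a symmetric matrix has nonpositive off-diagonal entries and maps some entrywise positive vector to an entrywise positive vector, then it is positive definite and its inverse is entrywise nonnegative. The matrix and vector below are arbitrary choices of ours satisfying the hypothesis.

```python
# Toy check of the symmetric M-matrix criterion used in the proof of
# Lemma 2: H has nonpositive off-diagonal entries and H v > 0 entrywise
# for the positive vector v = (1, 1, 1); hence H is positive definite and
# G = H^{-1} is entrywise nonnegative. H and v are illustrative values.

H = [[3.0, -1.0, -0.5],
     [-1.0, 4.0, -2.0],
     [-0.5, -2.0, 3.6]]
n = 3
v = [1.0, 1.0, 1.0]
Hv = [sum(H[i][j] * v[j] for j in range(n)) for i in range(n)]
assert all(x > 0 for x in Hv)             # hypothesis of the criterion

def cholesky(A):
    """Cholesky factorization; succeeds iff A is positive definite."""
    m = len(A)
    L = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(i + 1):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                assert s > 0, "not positive definite"
                L[i][i] = s ** 0.5
            else:
                L[i][j] = s / L[j][j]
    return L

def inverse(A):
    """Inverse by Gauss-Jordan elimination (small matrices only)."""
    m = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(m)]
         for i, row in enumerate(A)]
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(m):
            if r != c:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [row[m:] for row in M]

cholesky(H)                               # no assertion error: H is PD
G = inverse(H)
print(all(G[i][j] >= 0 for i in range(n) for j in range(n)))  # True
```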

Proposition 8.

With the definition of Proposition 2, for all , all , and all ,

Proof.

For simplicity, we omit the subscripts , in , , in the expression below. By Proposition 6 and Lemma 2, using , we find that

and

Therefore, if we denote the collection of paths on starting from , visiting at least once, and ending at , that is,

then, since and ,

where we used Lemma 2 in the last equality.

5. The martingale property

Recall that is the sub--field generated by the random variables , . The following proposition is the key property for the main theorem.

Proposition 9.

With the notations of Definition 1, for all , has finite moments. Moreover, we have, -a.s.,

and for all ,

Remark 13.

In Theorem B, by the change of variables , the new variables are in the space and the density becomes

We see from this expression that , and hence that

Applied to , , we get

which is a particular case of Equation 5.1.

The original proof of that property was rather technical (see the second arXiv version of the present paper). Some time after the first version of this paper was posted on arXiv, a simpler proof of the martingale property Equation 5.1 was given in Reference 7. Moreover, using some supersymmetric arguments, the following more general property was proved.

Lemma 3 (Reference 7).

Let be a nonnegative function on with bounded support. Then

We provide here a different proof of this assertion based on elementary computations on the measures on finite sets. It also provides a simpler proof of the original assertion, Proposition 9, by differentiating in .

5.1. Marginal and conditional laws of

In this subsection we suppose that is finite. We state some identities on marginal and conditional laws of the distribution , which will be instrumental in the proof of the martingale property in the next subsection.

Let us first remark that the law defined in Theorem A can be extended to the case where has nonzero diagonal coefficients. Indeed, if some diagonal coefficients of are positive, then changing from variables to variables , we get the law where is obtained from by replacing all diagonal entries by 0. While it is not very natural from the point of view of the VRJP to allow nonzero diagonal coefficients, it is convenient in this section to allow this possibility since it simplifies the statements about conditional laws.

Recall that for any function and any subset , we write for the restriction of to the subset . Similarly, if is a matrix and , , we write for its restriction to the block . We also write to denote integration on variables .

In the next lemma we give an extension of the family of probability distributions . This extension was proposed by Letac in the unpublished note Reference 13, discussing the integral defined in Reference 22. We give a proof of this lemma using Theorem A.

Lemma 4 (Letac, Reference 13).

Let be finite, and let be a symmetric matrix with nonnegative coefficients. Let be a vector with nonnegative coefficients. Then the following measure on

is a probability distribution, where in the scalar products and is to be understood as the vector . Its Laplace transform is, for any ,

where should be considered as the vector .

It appears in the following lemma that this extension describes all marginal laws of and that the larger family is stable under taking marginals and conditional distributions.

Lemma 5.

Assume that is finite, and let be a subset. Under ,

(i)

is distributed according to where

(ii)

conditionally on , is distributed according to , where and are the matrix and vector defined by

Remark 14.

Note that has nonzero diagonal coefficients.

N.B.

As we can observe, all the quantities with are relative to vectors or matrices on , while the quantities with are relative to vectors or matrices on .

Lemma 6.

Let be a finite connected graph endowed with conductances . Let be a vector with nonnegative coefficients. Let . For , define where , and define , and . For any , we have, a.s.,

where .

Proof of Lemmas 4 and 5.

Lemma 4 and the assertions (i) and (ii) of Lemma 5 are consequences of the same decomposition of the measure . This decomposition is partially inspired by computations in Reference 13. We write as a block matrix,

Now define the Schur complement

and

We have

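For the reader's convenience, the standard Schur complement identity behind this decomposition can be stated in generic block notation (ours, since the displayed equations are not reproduced here): writing the matrix in blocks with an invertible lower-right block, one has

```latex
% Generic Schur complement factorization (notation ours):
% H in blocks, S the Schur complement of the lower-right block D.
H \;=\;
\begin{pmatrix} A & B \\ {}^t\!B & D \end{pmatrix}
\;=\;
\begin{pmatrix} I & B D^{-1} \\ 0 & I \end{pmatrix}
\begin{pmatrix} S & 0 \\ 0 & D \end{pmatrix}
\begin{pmatrix} I & 0 \\ D^{-1}\,{}^t\!B & I \end{pmatrix},
\qquad S \;=\; A - B D^{-1}\,{}^t\!B .
% Consequences used in the decomposition of the density:
\det H \;=\; \det S \cdot \det D,
\qquad
\langle x, A x\rangle + 2\langle x, B y\rangle + \langle y, D y\rangle
\;=\;
\langle x, S x\rangle
+ \big\langle y + D^{-1}{}^t\!B\,x,\; D\,(y + D^{-1}{}^t\!B\,x)\big\rangle .
```

The splitting of the quadratic form is what separates the density into a factor depending only on the variables indexed by the subset and a conditional factor in the remaining variables.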
We remark that, with the notations of Lemma 5(ii), we have

By Equation 5.8, we have

On the other hand, by Equation 5.8 again, we have

therefore, since

we get

Combining Equation 5.9 and Equation 5.11, we have

By Equation 5.8 we also have

Combining Equation 5.12 and Equation 5.13, we have

We remark that the left-hand side is the density of , that the first term of the right-hand side corresponds to the density of and that, being fixed, the second term of the right-hand side is the density of . (Indeed, as remarked above, and , are -measurable.)

Proof of Lemma 4.

Take . Then . Integrating over on both sides of Equation 5.14, with fixed, gives