Finite Euler products and the Riemann Hypothesis

We show that if the Riemann Hypothesis is true, then in a region containing most of the right-half of the critical strip, the Riemann zeta-function is well approximated by short truncations of its Euler product. Conversely, if the approximation by products is good in this region, the zeta-function has at most finitely many zeros in it. We then construct a parameterized family of non-analytic functions with this same property. With the possible exception of a finite number of zeros off the critical line, every function in the family satisfies a Riemann Hypothesis. Moreover, when the parameter is not too large, they have about the same number of zeros as the zeta-function, their zeros are all simple, and they "repel". The structure of these functions makes the reason for the simplicity and repulsion of their zeros apparent and suggests a mechanism that might be responsible for the corresponding properties of the zeta-function's zeros. Computer evidence suggests that the zeros of functions in the family are remarkably close to those of the zeta-function (even for small values of the parameter), and we show that they indeed converge to them as the parameter increases. Furthermore, between zeros of the zeta-function, the moduli of functions in the family tend to twice the modulus of the zeta-function. Both assertions assume the Riemann Hypothesis. We end by discussing analogues for other L-functions and show how they give insight into the study of the distribution of zeros of linear combinations of L-functions.


Introduction
Why should the Riemann Hypothesis be true? If all the zeros of the zeta-function are simple, why? Why do the zeros seem to repel each other? Analytic number theorists believe that an eventual proof of the Riemann Hypothesis must use both the Euler product and functional equation of the zeta-function. For there are functions with similar functional equations but no Euler product, and functions with an Euler product but no functional equation, for which the Riemann Hypothesis is false. But why are these two ingredients essential?
This paper began as an attempt to gain insight into these questions. Section 2 begins with a brief discussion of the approximation of ζ(s) by truncations of its Dirichlet series. In Section 3 we turn to the approximation of ζ(s) by truncations of its Euler product. We show that if the Riemann Hypothesis is true, then short products approximate the zeta-function well in a region containing most of the right-half of the critical strip. Conversely, if the approximation by products is good in this region, the zeta-function has at most finitely many zeros in it. Section 4 is a slight departure from the main direction of the paper, but we include it in order to deduce some immediate consequences of the results of Section 3. In Section 5 we construct a parameterized family, {ζ_X(s)}, of functions related to the zeta-function with the same type of approximation property as the finite Euler products. That is, if the Riemann Hypothesis is true, then ζ_X(s) is a good approximation of ζ(s) in a region containing most of the right-half of the critical strip and, if ζ_X(s) is a good approximation of ζ(s) in this region, then ζ(s) can have at most finitely many zeros there. In Section 6 we show that, with the possible exception of a few low-lying zeros, a Riemann Hypothesis holds for each ζ_X(s). In Sections 7 and 8, respectively, we prove that on the Riemann Hypothesis, if the parameter X is not too large, then ζ_X(s) has about the same number of zeros as ζ(s) and that its zeros are all simple (again with the possible exception of a few low-lying ones). We also show unconditionally that when the parameter is much larger, ζ_X(s) still has asymptotically the same number of zeros as ζ(s) and that 100% of these (in the density sense) are simple. In the next section we study the relationship between the two functions on the critical line.
Assuming the Riemann Hypothesis, we show that the zeros of ζ_X(s) converge to the zeros of ζ(s) as X → ∞ and that between the zeros of the zeta-function |ζ_X(1/2 + it)| → 2|ζ(1/2 + it)|. In Section 10 we suggest possible causes for the simplicity and repulsion of the zeros of ζ(s) in light of the structure of ζ_X(s). In the last section we illustrate how our results generalize to other L-functions by defining functions L_X(s, χ) corresponding to the Dirichlet L-function L(s, χ). We then study the distribution of zeros of linear combinations of L_X(s, χ). This suggests a heuristic different from the usual one (of carrier waves) for understanding why linear combinations of the standard L-functions should have 100% of their zeros on the critical line. The appendix provides some useful approximations of the zeta-function by Dirichlet polynomials.
The functions ζ_X(s) are simpler than the Riemann zeta-function, yet they capture some of its most important structural features. It therefore makes sense to regard them as models of the zeta-function. The modeling is probably best when X is large, say a power of t. Unfortunately, our results are most satisfactory only for somewhat smaller X ranges, so it would be interesting if one could extend them.
We began by raising several deep questions. We do not offer definitive answers here, just ones that might be suggestive. For example, we shall show that most of the zeros of ζ_X(s) are simple, and we shall see a mechanism that causes many of them, if not most, to repel. In fact, we shall prove that all the zeros of ζ_X(s) (above a certain height) are simple and repel if X is not too large.
Since ζ_X(s) mimics ζ(s), this suggests that the zeros of the latter should share these properties.
And concerning the question of why both the Euler product and functional equation are necessary for the truth of the Riemann Hypothesis, one possible interpretation of our results is this: the Euler product prevents the zeta-function from having zeros off the line, the functional equation puts them on it.
I wish to thank Enrico Bombieri, Dimitri Gioev, Jon Keating, and Peter Sarnak for very helpful conversations and communications. I owe an especially great debt to Dimitri Gioev for our stimulating discussions over many months and for the extensive computer calculations he performed.

The approximation of ζ(s) by finite Dirichlet series
Throughout we write s = σ + it and τ = |t| + 2; ε denotes an arbitrarily small positive number which may not be the same at each occurrence.
The Riemann zeta-function is analytic in the entire complex plane, except for a simple pole at s = 1, and in the half-plane σ > 1 it is given by the absolutely convergent series ζ(s) = Σ_{n=1}^∞ n^{−s}. Estimating the tail of the series trivially, we obtain an approximation valid for σ > 1 and X ≥ 1. A crude form of the approximate functional equation (see Titchmarsh [25]) extends this into the critical strip; it holds uniformly for σ ≥ σ_0 > 0, provided that X ≥ Cτ/2π, where C is any constant greater than 1. The second term on the right-hand side reflects the simple pole of ζ(s) at s = 1 and, if we stay away from it, it can be ignored. For instance, setting X = t and assuming t ≥ 1, we find that (3) holds uniformly for σ ≥ σ_0 > 0. Thus, truncations of the Dirichlet series defining ζ(s) approximate it well, even in the critical strip. Now suppose that the Lindelöf Hypothesis is true, that is, that ζ(1/2 + it) ≪ τ^ε. Then the length of the series in (2) and (3) can be considerably reduced, as the following modification of Theorem 13.3 of Titchmarsh [25] shows. (See the Appendix.) Theorem 2.1. Let σ be bounded, σ ≥ 1/2, and |s − 1| > 1/10. Also let 1 ≤ X ≤ t^2. A necessary and sufficient condition for the truth of the Lindelöf Hypothesis is that ζ(s) = Σ_{n≤X} n^{−s} + O(X^{1/2−σ} τ^ε). It follows from this that if the Lindelöf Hypothesis is true and we stay away from the pole of the zeta-function at s = 1, then ζ(s) is well approximated by arbitrarily short truncations of its Dirichlet series in the half-plane σ > 1/2. Of course, we saw that this is unconditionally true in the half-plane σ > 1.
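As a purely numerical aside (not from the paper), the unconditional approximation in σ > 1 is easy to check; in this Python sketch the truncation length X = 1000 and the test point s = 2 are arbitrary choices:

```python
import math

def zeta_partial(s, X):
    """Truncated Dirichlet series sum_{n <= X} n^(-s) (s real here)."""
    return sum(n ** (-s) for n in range(1, X + 1))

# For sigma > 1 the tail of the series is O(X^(1 - sigma)), so a short
# truncation already approximates zeta(s) well; at s = 2 the true value
# is pi^2 / 6.
X = 1000
error = abs(math.pi ** 2 / 6 - zeta_partial(2.0, X))
print(error)  # roughly of size 1/X
```

The error is governed by the tail of the series, about X^{1−σ}/(σ − 1), which is why the truncation length can be taken so short once σ is bounded away from 1.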
On the other hand, short sums cannot approximate ζ(s) well in the strip 0 < σ ≤ 1/2. For suppose that such a sum and ζ(s) were within ε of each other, where ε > 0 is small. Then the two would satisfy the same mean value estimates. Comparing these when σ < 1/2 and again when σ = 1/2, we obtain a contradiction to (4). This argument is unconditional and shows that one cannot do better than (3) even if the Lindelöf or Riemann Hypothesis is true.
To summarize, ζ(s) is well approximated unconditionally by arbitrarily short truncations of its Dirichlet series in the region σ > 1, |s − 1| > 1/10. On the Lindelöf Hypothesis this remains true even in the right-half of the critical strip, 1/2 < σ ≤ 1. However, on and to the left of the critical line, the length of the truncation must be ≈ t. The situation is the same if we assume the Riemann Hypothesis instead of the Lindelöf Hypothesis, since the former implies the latter.

The approximation of ζ(s) by finite Euler products
The zeta-function also has the Euler product representation ζ(s) = Π_p (1 − p^{−s})^{−1} in the half-plane σ > 1, where the product is over all prime numbers. This converges absolutely, and it is straightforward to show (take logarithms) that (5) holds for σ > 1. Here we implicitly use the fact that ζ(s) does not vanish in σ > 1. As is often the case, it is more natural from an analytic point of view to work with weighted approximations, so we will use expressions of the type P_X(s), where Λ(n) is von Mangoldt's function and the weights v(n) will be specified later.
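Concretely, taking logarithms of the Euler product gives log ζ(s) = Σ_{n≥2} Λ(n)/(n^s log n) for σ > 1, so exponentiating a truncation of this sum recovers ζ(s) up to a small tail. The following Python sketch (not from the paper; it uses the trivial weights v(n) = 1 and the arbitrary cutoff X = 1000) checks this at s = 2:

```python
import math

def von_mangoldt(n):
    """Lambda(n) = log p if n is a power of the prime p, else 0."""
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return 0.0

def euler_product_truncated(s, X):
    """exp( sum_{2 <= n <= X} Lambda(n) / (n^s log n) ), the unweighted case."""
    log_zeta = sum(von_mangoldt(n) / (n ** s * math.log(n))
                   for n in range(2, X + 1))
    return math.exp(log_zeta)

# Compare with zeta(2) = pi^2 / 6; the discrepancy is the tail over
# prime powers beyond X.
print(abs(math.pi ** 2 / 6 - euler_product_truncated(2.0, 1000)))
```

Since the tail here is a sum over primes p > X of roughly 1/p², it is even smaller than the corresponding Dirichlet-series tail, which is one sense in which short products do well for σ > 1.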
We next ask whether it is possible to extend (5) (or a weighted form of it) into the critical strip in the same way that (2) extended (1). A recent result of Gonek, Hughes, and Keating [10] suggests an answer. It says that if X < t^{1−ε} and X is not too small, then ζ(s) factors in the region σ ≥ 0, |s − 1| > 1/10 as ζ(s) = P_X(s) Z_X(s), where Z_X(s) is a certain product over the zeros of ζ(s). Now, one can show that in the right-half of the critical strip Z_X(s) is close to 1 as long as s is not too near a zero of ζ(s). Hence, if the Riemann Hypothesis is true and s is not too close to the critical line, Z_X(s) will be close to 1.
(The closer σ is to 1/2, the larger one needs to take X.) Thus, under the Riemann Hypothesis, an analogue of (5) does hold in the right-half of the critical strip.
To prove these assertions we need an explicit version of (6) that differs slightly from the one given by Gonek, Hughes and Keating [10], and we derive this next.
Also, let E_2(z) denote the second exponential integral, and set F_2(z) accordingly. We then define Z_X(s) by (9), where the sum is over all the non-trivial zeros, ρ = β + iγ, of the zeta-function.
Our variant of (6) is the following. Theorem 3.1. With P_X(s) and Z_X(s) as above, we have (10). Proof. We begin with the explicit formula (see Titchmarsh [25], Theorem 14.20). The last term on the right is easily seen to be ≪ X^{−σ−2} τ^{−2} log X, so we obtain (11). Next we integrate (11) from ∞ to s_0 = σ_0 + it, where σ_0 + it is not a zero of the zeta-function.
We use the convention that if t is the ordinate of a zero ρ = β + iγ and 0 ≤ σ_0 < β, then (12) log ζ(σ_0 + iγ) = lim_{ε→0^+} log ζ(σ_0 + i(γ + ε)). We also see that a corresponding formula holds for the remaining terms; here we use the convention analogous to (12) if t is the ordinate of a zero. Replacing σ_0 by σ and exponentiating both sides, we obtain the stated result when s is not equal to a zero ρ. If it is, we may interpret the factor in (9) corresponding to ρ as lim_{ε→0^+} exp(F_2(iε log X)).
From the well-known formula E_2(z) = 1/z + log z + e_2(z), where |arg z| < π and e_2(z) is analytic in z, it follows that a similar expansion holds for F_2, where f_2(z) is also analytic. Therefore lim_{ε→0^+} exp(F_2(iε log X)) = 0. Since this agrees with the left-hand side of (10), the formula is valid in this case as well.
Before stating our next result we require some notation and a lemma. As usual we write S(t) = (1/π) arg ζ(1/2 + it), with the convention that if t is the ordinate of a zero, S(t) = lim_{ε→0^+} S(t + ε). For t ≥ 0 we let Φ(t) denote a positive increasing differentiable function satisfying two growth conditions and such that, for t sufficiently large, (14) holds. We call such a function admissible. Note that any function of the type f(t) = (log τ)^α (log log 3τ)^β with α positive satisfies (14). Furthermore, it is easily checked that if Φ satisfies (14), then a corresponding estimate holds, where the implied constant depends at most on a. It is known that Φ(t) = (1/6) log τ is admissible and, on the Lindelöf Hypothesis, that ε log τ is for any ε > 0. If the Riemann Hypothesis is true, then Φ(t) = (1/2) log τ/log log 2τ is admissible. (The constant 1/2 is a recent result due to Goldston and Gonek [8].) Balasubramanian and Ramachandra [5] (see also Titchmarsh [25], pp. 208–209 and p. 384) have shown that if Φ is admissible, then Φ(t) = Ω(√(log τ/log log 2τ)), and this is unconditional.
We can now state our lemma. Lemma 3.2. Assume the Riemann Hypothesis. Suppose that Φ(t) is admissible and that σ > 1/2 is bounded. Then (16) and (17) hold, the latter sum being restricted to zeros with |γ − t| > ∆. Remark. With more care we could show that the first sum equals (1/2) log τ + O(Φ(τ)/(σ − 1/2)), but we do not require this.
Proof. For the sake of convenience we write σ − 1/2 = a. Recall that N(t), the number of zeros of ζ(s) with ordinates in [0, t], satisfies (18) N(t) = (t/2π) log(t/2π) − t/2π + 7/8 + S(t) + O(1/τ). The left-hand side of (16) may be split according to the size of |γ − t|. Using (18), we see that the second sum is ≪ a log τ and, for each k, that the sum in parentheses is ≪ a.
The proof of (17) is similar.
Integrating (8) by parts, we see that the bound (24) holds for |z| ≥ 1. Since σ ≥ 1/2 + 1/log X and the zeros are of the form ρ = 1/2 + iγ, they all satisfy |s − ρ| log X ≥ 1. Thus, by (24) and Lemma 3.2, the sum in (22) is bounded accordingly. Also by (24), the remaining term is bounded. The first assertion of the theorem follows from this and (10).
The second assertion follows immediately from (19) if σ ≥ 1/2 + 1/log X, so we need only consider the case 1/2 ≤ σ < 1/2 + 1/log X. The terms in the sum in (22) for which |s − ρ| log X ≥ 1 contribute the same amount as before. However, now there may also be a finite number of terms for which |s − ρ| log X ≤ 1. Using (13) to estimate these, we find that if s is not a zero, they contribute Σ_{|s−ρ| log X ≤ 1} log(4(s − ρ) log X) + O(1).
Since |arg(s − ρ) log X| ≤ π/2, the imaginary part of this is controlled by the number of such zeros, which is bounded by (18). This is big-O of the bound in (25) because 1/2 ≤ σ < 1/2 + 1/log X. Thus, we obtain (21) provided that t is not the ordinate of a zero. If it is, the result follows from our convention that arg ζ(σ + it) = lim_{ε→0^+} arg ζ(σ + i(t + ε)). This completes the proof of the theorem.
We can now deduce an approximation of ζ(s) by Euler products.
Thus, on the Riemann Hypothesis short Euler products approximate ζ(s) as long as we are not too close to the critical line.
We can combine the two assertions of Theorem 3.4 and prove a partial converse as well.
Remark. The condition σ ≥ 1/2 + C log log 2τ/log X implies a lower bound for X that grows with t, namely X ≥ (log 2τ)^{C/(σ−1/2)}. The converse follows from the observation that if (28) holds, then there is a constant B > 0 such that the corresponding bound holds; this forces γ ≤ exp(B^{2/(C−1)}), and the result follows.
As in the case of approximations by short sums, one can also ask whether short products approximate ζ(s) well when 0 < σ ≤ 1/2. For sums we saw that the answer is no unless they are of length at least t. For products the answer is no, no matter how long they are. A quick way to see this is by counting zeros of ζ(s) and of P_X(s) in a rectangle containing the segment [1/2, 1/2 + iT]. The former has ∼ (T/2π) log T zeros, the latter none. This would be impossible if ζ(s) = P_X(s)(1 + o(1)) in the rectangle.
One can also argue as follows when σ is strictly less than 1/2. (A modification of the argument works for σ = 1/2 too.) Suppose that ζ(s) = P_X(s)(1 + o(1)) in the strip 0 < σ ≤ 1/2. Then log |ζ(s)| = log |P_X(s)| + o(1) and we have (29) for σ fixed and T → ∞. By the functional equation for the zeta-function, log |ζ(s)| may be written as a sum of three terms. Now the mean-squares of the three terms on the right-hand side are ∼ (1/2 − σ)^2 T log^2 T, ∼ c_0 T, and o(T), respectively. Thus, (30) holds. On the other hand, by the mean value theorem for Dirichlet polynomials, if X = o(T^{1/2}), the right-hand side of (29) is of smaller order, where c is a positive constant. Comparing this with (30), we see that (29) cannot hold if 0 ≤ σ < 1/2 and X is larger than a certain power of log T. Note also that for infinitely many t tending to infinity, P_X(s) can be quite large, namely of the size given in (31).

In this section we have seen that short truncations of its Euler product approximate ζ(s) well in the region σ > 1, |s − 1| > 1/10. We also showed that this remains true in the right-half of the critical strip if the Riemann Hypothesis is true and if we are not too near the critical line (and use a weighted Euler product). However, to the left of the critical line the Euler product is not a good approximation of ζ(s) regardless of how long it is.

Products, sums, and moments
Our purpose in this section is to deduce two consequences of the results of the previous section.
First we require a result whose proof is given in the Appendix.
Theorem 4.1. Assume the Riemann Hypothesis. Let σ ≥ 1/2 be bounded, |s − 1| > 1/10, and 2 ≤ X ≤ t^2. Then there is a positive constant C_1 such that (32) holds. For the same σ-range, the approximation of ζ(s) given by Theorem 3.3 is (33), where ε is arbitrarily small. Thus, equating respective sides of (32) and (33) and solving for P_X(s), we see that (34) holds. By the corollary to Theorem 4.1 (see the Appendix), the sum here is ≪ e^{C_1 Φ(t)}, so we obtain the stated bound. We have now proved the following, valid for any positive constant C_2 less than C_1.
Our second observation is that one can use these approximations to calculate the moments of a very long Euler product. Suppose one wished to compute the moments of |P_X(s)|. The standard method would be to write P_X(s)^k as a Dirichlet series and use a mean value theorem for such polynomials to compute the mean modulus squared. But this only works well when the product does not have many factors. For example, for a slightly different Euler product, Gonek, Hughes and Keating [10] have proved the following. Theorem. Let 1/2 < c < 1, ε > 0, and let k be any positive real number. Suppose that X and T satisfy suitable growth conditions; then the moments have the stated asymptotics. Note that here the number of factors in the Euler product is not even log^2 T. On the other hand, if we assume the Riemann Hypothesis, that 1/2 + C log log 2T/log X ≤ σ < 1 with C > 1, and that 2 ≤ X ≤ T^2, then by Theorem 3.4, ζ(s) = P_X(s)(1 + o(1)). Hence, the moments of the two functions agree up to a bounded factor. Now, it is a consequence of the Lindelöf Hypothesis (Titchmarsh [25], Theorem 13.2), and so also of the Riemann Hypothesis, that when 1/2 < σ < 1 is fixed, ∫_1^T |ζ(σ + it)|^{2k} dt ≪ T^{1+ε} for any fixed positive integer k. Thus, for such σ and k we have the corresponding bound for the moments of P_X(s). This gives an estimation of the moments of an extremely long Euler product deep into the critical strip.
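For reference, the moment bound cited from Titchmarsh can be written out explicitly; this is our transcription of the standard statement, not a display from the paper:

```latex
% Lindelof Hypothesis (hence RH) implies, for fixed 1/2 < sigma < 1 and
% any fixed positive integer k (Titchmarsh, Theorem 13.2):
\int_{1}^{T} \bigl|\zeta(\sigma + it)\bigr|^{2k}\,dt \;\ll_{k,\epsilon}\; T^{1+\epsilon},
% and therefore, wherever zeta(s) = P_X(s)(1 + o(1)) holds,
\int_{1}^{T} \bigl|P_X(\sigma + it)\bigr|^{2k}\,dt \;\ll_{k,\epsilon}\; T^{1+\epsilon}.
```

The point is that the second bound is obtained from the first by the pointwise approximation, with no Dirichlet-polynomial mean value theorem, so the length of the product plays no role.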

A function related to the zeta-function
In Section 2 we showed that short truncations of its Dirichlet series approximate ζ(s) in σ > 1 and that, if the Lindelöf Hypothesis is true, this also holds in σ > 1/2. The approximation cannot be good in the strip 0 < σ ≤ 1/2 unless the length of the sum is of order at least t; and this is so even if we assume the Lindelöf or Riemann Hypothesis. In Section 3 we showed that the situation is similar, up to a point, when we approximate ζ(s) by the weighted Euler product P_X(s): short products approximate ζ(s) well in the half-plane σ > 1 unconditionally, and in the strip 1/2 < σ ≤ 1 on the Riemann Hypothesis. However, the approximation cannot be close in 0 < σ < 1/2 no matter how many factors there are, for P_X(s) gets much larger than ζ(s) in this strip (see (31)).
We now reexamine the approximation of ζ(s) by sums when σ is close to 1/2. If we assume the Riemann Hypothesis, then (32) gives such an approximation. This is good for X a small power of t as long as σ is not too close to 1/2, but we know X has to be of order t on σ = 1/2. This means the approximation is off by about Σ_{X<n≤t} n^{−s}. The Hardy-Littlewood approximate functional equation [13] (or see Titchmarsh [25]) gives us another way to express this. It says that ζ(s) is approximated by two sums, where 0 ≤ σ ≤ 1, |s − 1| ≥ 1/10, and χ(s) is the factor in the functional equation ζ(s) = χ(s)ζ(1 − s). From this we see that the amount by which the sum Σ_{n≤X} n^{−s} is off from ζ(s) is about χ(s) Σ_{n≤t/2πX} n^{s−1}. If we let X = √(|t|/2π) and note that |χ(1/2 + it)| = 1, we find that on the critical line ζ(s) is essentially composed of two pieces of equal size. In the case of Euler products, Theorem 3.4 suggests that P_X(s) approximates ζ(s) well even closer to the critical line than a sum of length X does. For P_X(s) is a good approximation when σ ≥ 1/2 + C log log 2τ/log X, while the sum is only close (as far as we know) when σ ≥ 1/2 + 2C_1 Φ(t)/log X. In light of this and (34) it is tempting to guess that (35) holds for some unspecified X. However, this is not a good guess. For we have seen that P_X(1 − s) gets as large as exp(X^{σ−1/2}/log X) when σ > 1/2, whereas Σ_{n≤X} n^{s−1} is no larger than X^{σ−1/2} e^{C_1 Φ(t)} by Corollary 12.
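For the reader's convenience, the Hardy-Littlewood approximate functional equation referred to above has the following shape (our transcription of the standard statement, with 2πXY = |t| and 0 ≤ σ ≤ 1):

```latex
\zeta(s) \;=\; \sum_{n \le X} n^{-s} \;+\; \chi(s) \sum_{n \le Y} n^{s-1}
  \;+\; O\!\left(X^{-\sigma}\right) \;+\; O\!\left(|t|^{\frac12-\sigma}\, Y^{\sigma-1}\right),
\qquad
\chi(s) = 2^{s}\pi^{s-1}\sin\!\left(\tfrac{\pi s}{2}\right)\Gamma(1-s).
```

Taking X = Y = √(|t|/2π) balances the two sums, which is the choice that exhibits ζ(1/2 + it) as two pieces of equal size.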
The difficulty here is crossing the line σ = 1/2, where there is a qualitative change in the behavior of the zeta-function. A way around this is to use the fact that the functional equation gives us the zeta-function everywhere once we know it in σ ≥ 1/2. If we restrict our attention to this half-plane, a reasonable alternative to (35) is ζ_X(s) = P_X(s) + χ(s)\overline{P_X(1 − \bar{s})}. Note that on the critical line, ζ_X(s) and the right-hand side of (35) are identical. Also, P_X(s) approximates ζ(s) to the right of the critical line. Combining this observation with Theorem 3.5, we obtain Theorem 5.1. Assume the Riemann Hypothesis. Let 2 ≤ X ≤ t^2, |s − 1| ≥ 1/10, and 1/2 + C log log 2τ/log X ≤ σ. Then (36) holds. Conversely, if (36) holds for 2 ≤ X ≤ t^2 in the region stated, then ζ(s) has at most a finite number of zeros to the right of σ = 1/2 + C log log 2τ/log X.
Thus, even though ζ_X(s) is not analytic, it approximates ζ(s) well to the right of the critical line. It resembles the zeta-function closely in other ways too, as we shall see.

The Riemann Hypothesis for ζ_X(s)

For a closer study of ζ_X(s) we require several properties of the chi-function, which appears in the functional equation of the zeta-function. Chi has simple poles at s = 1, 3, 5, . . .
from the Γ-factor in the numerator. If we stay away from these, then in any half-strip −k < σ < k, t ≥ 0, χ(s) satisfies (37) by Stirling's approximation. When t < 0, χ(s) is given by the conjugate of this. We note for later use that the O-term is differentiable.
The converse is also almost true.
Remark. One can take C_0 < 6.3, but we do not require this.
Proof. Taking the logarithmic derivative of (37) by means of Cauchy's integral formula, we find that it is negative for all t sufficiently large (independently of σ_1), so the result follows. The proof of the converse is similar. From now on, C_0 will denote the constant in Lemma 6.1.
We now prove the following. Proof. Since ζ_X(s) = P_X(s) + χ(s)\overline{P_X(1 − \bar{s})} and P_X(s) never vanishes, the zeros of ζ_X(s) can only occur at points where |P_X(s)| = |χ(s)\overline{P_X(1 − \bar{s})}|, that is, where |χ(s)| = 1. The result now follows from Lemma 6.1.

The number of zeros of ζ_X(s)

In this section we estimate the number of zeros of ζ_X(s) up to height t on the critical line and show, among other things, that it has at least as many zeros (essentially) as ζ(s) does, namely ∼ (t/2π) log t. An exact expression is (39), and we shall use this later. Here the argument of χ is determined by starting with the value 0 at s = 2 and letting it vary continuously, first along the segment from 2 to 2 + it, and then horizontally from 2 + it to 1/2 + it. To investigate the zeros of ζ_X(s) we write

(40) ζ_X(s) = P_X(s) (1 + χ(s) \overline{P_X(1 − \bar{s})}/P_X(s)).
Before turning to our first estimate, we point out that, as with arg χ(1/2 + it), arg P_X(1/2 + it) is defined by continuous variation along the segments [2, 2 + it] and [2 + it, 1/2 + it], starting with the value 0 at s = 2. Also note from (41) that F_X(t) is infinitely differentiable for t > 0.
We now prove the following. Proof. There are at most finitely many zeros (the number may depend on X) with ordinates between 0 and C_0 (the constant in Lemma 6.1). We may therefore assume that t ≥ C_0. Now, by (37) and by (7), arg P_X(1/2 + it) = Im log P_X(1/2 + it). Thus, we may express F_X(t) in (41) as (42). Recall that ζ_X(1/2 + it) = 0 when t ≥ C_0 if and only if F_X(t) ≡ π (mod 2π). Since F_X(t) is continuous, this happens at least about (1/2π)F_X(t) times in [C_0, t]. The sum over prime powers in (42) obviously plays an important role in producing zeros of ζ_X(s).
This sum is just −arg P_X(1/2 + it), but it will be convenient to give it a simpler name. Thus, from now on we write f_X(t) for it. As is well known (see Selberg [24] or Titchmarsh [25]), −(1/π)f_X(t) is a good approximation in mean-square to S(t) = (1/π) arg ζ(1/2 + it) if X is a small power of t and if the Riemann Hypothesis holds. However, a closer analogue of S(t) is S_X(t). From (40) we see that S_X(t) splits into two terms. The second term on the right, which contains the jump discontinuities of S_X(t) as t passes through zeros of ζ_X(1/2 + it), has modulus ≤ 1/2. (Note that our convention is that the argument is π/2 when 1 + e^{−iF_X(t)} vanishes.) Thus, S_X(t) and −(1/π)f_X(t) differ by at most O(1).
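The sum f_X(t) is a finite trigonometric polynomial over prime powers and can be computed directly. A small Python sketch (not from the paper; it uses the trivial weights v(n) = 1, whereas the paper's actual weights are not reproduced here):

```python
import math

def von_mangoldt(n):
    """Lambda(n) = log p if n is a power of the prime p, else 0."""
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return 0.0

def f_X(t, X):
    """sum_{2 <= n <= X} Lambda(n) sin(t log n) / (n^(1/2) log n);
    -(1/pi) times this models S(t) in mean square for suitable X."""
    return sum(von_mangoldt(n) * math.sin(t * math.log(n))
               / (math.sqrt(n) * math.log(n))
               for n in range(2, X + 1))

# f_X(0) = 0, and |f_X(t)| never exceeds sum Lambda(n)/(sqrt(n) log n).
print(f_X(0.0, 100), f_X(14.13, 100))
```

The trivial bound on |f_X(t)| grows with X, which is one way to see why bounds of the shape Φ(t) for f_X(t) require genuine cancellation among the terms.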
The next theorem shows that when X is not too small, f_X(t) and S_X(t) obey the same bound as S(t), namely Φ(t).
From Theorems 7.1 and 7.2 we immediately obtain Theorem 7.3. Assume the Riemann Hypothesis is true. Then f_X(t) and S_X(t) are ≪ Φ(t) for 2 ≤ X ≤ t^2. Moreover, if exp(c log τ/Φ(t)) ≤ X ≤ t^2, where c is any positive constant, then a corresponding sharper estimate holds.
To obtain an upper bound for N_X(t) of the same order we require the following theorem.
Theorem 7.4. Assume the Riemann Hypothesis and that 2 ≤ X ≤ t^2. Then (44) holds. Proof. Taking the real part of (11) with σ = 1/2, we obtain an explicit expression, which we substitute into the left-hand side of (44) and rearrange. In the resulting sum over zeros, the terms with |t − γ| ≥ 1 contribute ≪ log τ/log X. Thus, writing C(v) = cos(v log X) − cos(2v log X), we are left with two integrals to estimate. To estimate the sum on the last line, first note the bounds (46) and (47). Using (47), we see that the integral with respect to dS is acceptably small, and the O-term in the other integral contributes an admissible amount by (46). To calculate the remaining integral we split it into two pieces; by (46) the second integral is O(1), and by the calculus of residues and the definition of C(v), the first can be evaluated exactly. Combining these results, we obtain (48), and it therefore follows from (45) that (44) holds. This completes the proof of the theorem.
The zeros of ζ_X(σ + it) with t ≥ C_0 arise as the solutions of F_X(t) ≡ π (mod 2π), and their number in [C_0, t] is at least (1/2π)F_X(t) + O_X(1), because this is the minimum number of times the curve y = F_X(t) crosses the horizontal lines y = π, 3π, 5π, . . .. However, there could be "extra" solutions if F_X(t) is not monotone increasing. Now, F′_X(t) is given by (49). By Theorem 7.4 there exists a positive constant C_3, say, such that if X ≤ exp(C_3 log τ/Φ(t)) and t is large enough, then (50) holds. This means F′_X(t) is positive, so F_X(t) ≡ π (mod 2π) has no extra solutions. We have therefore proved Theorem 7.5. Assume the Riemann Hypothesis. There is a constant C_3 > 0 such that if X < exp(C_3 log t/Φ(t)), then (51) holds. Less precisely, N_X(t) ∼ N(t). It would be interesting and useful to know whether (51) (perhaps with a larger O-term) also holds when X is a small fixed power of t. If that is the case, classical results about the statistics of the zeros of ζ(s), whose proofs depend on approximating S(t) by the trigonometric polynomial −(1/π)f_X(t), would hold for the zeros of ζ_X(s) as well. What we can show for larger X is the following unconditional result.
Theorem 7.6. There exists a positive constant C_4 such that if X ≤ t^{C_4}, then the following estimates hold. Proof. There are two ways solutions to F_X(t) ≡ π (mod 2π) may arise, and we shall refer to the zeros of ζ_X(s) corresponding to these two ways as zeros of the "first" and "second" kind.
The first way is by F_X(t) increasing or decreasing from one odd multiple of π, say (2k + 1)π, to the next larger or smaller odd multiple of π, without first crossing (2k + 1)π again. A moment's reflection reveals that the total number of distinct zeros in [C_0, t] arising this way is big-O of the total variation of F_X(t), namely (1/2π) ∫_{C_0}^{t} |F′_X(u)| du. By (49) and the triangle and Cauchy-Schwarz inequalities, this is bounded in terms of the mean square of a Dirichlet polynomial. By a standard mean value theorem for Dirichlet polynomials it is easy to show that if X ≪ t^{1/2}, the integral is ≪ t log^2 X. Thus, writing N_I(t) for the number of distinct zeros that occur in this way, we have N_I(t) ≪ t log X. We will see how to take multiplicities into account below.
The second way solutions to F_X(t) ≡ π (mod 2π) can occur is by F_X(t) increasing or decreasing from a solution (2k + 1)π and returning to this value before reaching the next larger or smaller odd multiple of π. Each time this happens, there must be at least one point in between where F′_X(t) vanishes. Thus, writing N_II(t) for the number of distinct zeros of ζ_X(s) arising this way, we see that N_II(t) is at most big-O of the number of times F′_X(t) vanishes on [C_0, t]. To estimate this number we define an analytic function G_X(s) with F′_X(t) = G_X(1/2 + it). Here we use the principal branch of the logarithm on the complex plane with the negative real axis removed, and we use a classical lemma bounding the number of zeros of an analytic function in the disc of radius (2/3)R centered at z_0 in terms of its maximum modulus on the disc of radius R and its modulus at z_0. To apply this to the segment [1/2 + it, 1/2 + 2it], say, we need a disc containing it, the maximum of |G_X(s)| on this disc, and a lower bound for |G_X| at the center of the disc. We handle the last problem first by selecting as center a point at which we know |G_X(s)| cannot be too small. The upper bound for N_II(t) will follow by repeating this process for each of the segments and adding the resulting estimates.
To show that one can find a satisfactory center, fix a δ with 0 < δ < 1/2 and let E(t) be the exceptional set of points at which |G_X| is small. The measure of E(t) is then small compared with t. It follows that there exists a constant c > 0 such that the interval [t, (1 + c)t] has greater length than the set E(t); hence it contains a point 1/2 + it_0 with t_0 not in E(t), and therefore with |G_X(1/2 + it_0)| > 2δ log(t/2π). We now let D_0(t) be the closed disc of radius t centered at 1/2 + it_0, and let M denote the maximum of |G_X(s)| on D_0(t). Clearly M can be bounded on D_0(t). Hence, by the lemma alluded to above, G_X(s) has ≪ log(X^{t+1/2}/δ log t) ≪ t log X zeros inside the smaller disc D_0((2/3)t) of radius (2/3)t, which covers [1/2 + it, 1/2 + 2it]. Adding estimates for the different intervals, we arrive at ≪ t log X distinct zeros of F′_X(t). The same bound therefore holds for the number of distinct zeros of the second kind.
Combining the two ways the solutions of F_X(t) ≡ π (mod 2π), or zeros of ζ_X(s), arise, we find that for X ≤ t^{C_4} there are ≪ t log X zeros in all. A zero 1/2 + iγ_X has multiplicity m if and only if the first m − 1 derivatives of ζ_X(1/2 + it) with respect to t vanish at γ_X, but the mth does not. It is easy to check that this is equivalent to F_X(γ_X) ≡ π (mod 2π), F′_X(γ_X) = · · · = F_X^{(m−1)}(γ_X) = 0, and F_X^{(m)}(γ_X) ≠ 0. Also note that our estimate for the number of zeros of the analytic function G_X(s) counts them according to their multiplicities, and that F′_X(t) = G_X(1/2 + it), with corresponding formulae for the higher derivatives. Suppose then that 1/2 + iγ_X is a zero of ζ_X(s) of the first kind and multiplicity m. Then it is counted once in N_I(t). Also, since the first m − 1 derivatives of F_X(t) vanish at γ_X, so do G_X(1/2 + it) and its first m − 2 derivatives. Thus, 1/2 + iγ_X is counted another m − 1 times in N_II(t), and therefore with the correct multiplicity in N_I(t) + N_II(t).
Next suppose that 1/2 + iγ_X is a zero of multiplicity m of the second kind. Then it is counted at least once in N_II(t) because F′_X(t) = G_X(1/2 + it) vanishes at a nearby point. Also, at γ_X itself we have F′_X(γ_X) = · · · = F_X^{(m−1)}(γ_X) = 0 and F_X^{(m)}(γ_X) ≠ 0. This means that G_X(s) and its first m − 2 derivatives are zero at 1/2 + iγ_X, so this point is counted m − 1 times by N_II(t). Thus, zeros of the second kind with multiplicity m are counted with weight at least m in N_II(t).
We now see that N_X(t) ≤ N_I(t) + N_II(t) ≪ t log X. Both assertions of the theorem now follow from this and the lower bound in (43).

The number of simple zeros of ζ_X(s)
We saw in the last section that a zero 1/2 + iγ_X of ζ_X(s) is simple if and only if F′_X(γ_X) ≠ 0. From (49) and (50) we see that F′_X(γ_X) > 0 if X is not too large, and therefore that 1/2 + iγ_X is a simple zero of ζ_X(s). Combining this with Theorem 7.5, we obtain Theorem 8.1. Assume the Riemann Hypothesis. There exists a constant C_3 > 0 such that if X < exp(C_3 log t/Φ(t)), then all the zeros of ζ_X(1/2 + it) with t ≥ C_0 are simple, and the number N_X^{(1)}(t) of simple zeros satisfies the analogue of (51). As in our results for N_X(t), the condition on X is almost certainly too restrictive. The following unconditional but less precise result is valid for larger X.
Proof. Let $N$ be the number of zeros of $\zeta_X(\frac12+iu)$ in $[t, 2t]$ and $N^*$ the number of these that are multiple. By Theorem 7.6, there is a constant $C_4$ such that if $X \le t^{C_4}$, then $N \ll t\log t$. We may therefore split the $N^*$ multiple zeros into $K \ll \log t$ sets $S_1, S_2, \ldots, S_K$, in each of which the points are at least $1$ apart. Let $S$ be one of these sets and let $\gamma_1, \gamma_2, \ldots, \gamma_R$ be its points. These must all satisfy $F_X'(\gamma_r) = 0$, and by a mean value theorem of Davenport (see Montgomery [20]) we have the corresponding mean value estimate. It is not difficult to show that the sum on the right is $\ll_k \log^{2k} X$, so, if $X \le t^{1/2k}$, the right-hand side is $\ll_k t\log^{2k+2} X$. (We also require $X \le t^{C_4}$, so we assume that $k \ge C_4/2$.) On the other hand, by (8) the left-hand side must be $\gg R\log^{2k} t$. Therefore $|S| = R \ll_k t\log^{2k+2} X/\log^{2k} t$. There are $K \ll \log t$ sets $S_k$, so the total possible number of multiple zeros is $N^* \ll_k t\log^{2k+2} X/\log^{2k-1} t$.
Taking $k$ large enough so that $1/(k+1) < \epsilon$, we obtain the result.

9. The relative sizes of $\zeta_X(s)$ and $\zeta(s)$ and the relation between their zeros

Although we have not proved that $\zeta_X(s)$ approximates $\zeta(s)$ pointwise when $\sigma$ is very close to $\frac12$, the similarity between the formulae for $N_X(t)$ and $N(t)$ suggests there might be a close relationship between the two functions even on the critical line. Indeed, comparing the graphs of $|\zeta_X(\frac12+it)|$ and $|\zeta(\frac12+it)|$ for a wide range of $X$ and $t$ (see Figures 1 and 2), one is struck by two things: (1) the zeros of $\zeta_X(\frac12+it)$ are quite close to those of $\zeta(\frac12+it)$, even for relatively small $X$; and (2) as $X$ increases, $|\zeta_X(\frac12+it)|$ seems to approach $2|\zeta(\frac12+it)|$.
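Both observations are easy to probe numerically. The sketch below is ours, not the paper's: it uses the simplified unweighted product $P_X(s) = \exp\big(\sum_{n\le X}\Lambda(n)/(n^s\log n)\big)$ with a sharp cutoff (the paper's $P_X$ uses smoothed coefficients $\Lambda_X(n)$), the factor $\chi(s) = 2^s\pi^{s-1}\sin(\pi s/2)\Gamma(1-s)$ from the functional equation, and mpmath for $\zeta$ and $\Gamma$.

```python
# Numerical sketch (assumptions: unweighted Lambda(n), sharp cutoff n <= X;
# the paper's P_X uses smoothed weights Lambda_X(n)).
from mpmath import mp, mpc, exp, log, sin, pi, gamma, zeta, conj, power, fabs

mp.dps = 25

def mangoldt(n):
    """von Mangoldt Lambda(n): log p if n = p^k, else 0."""
    for p in range(2, n + 1):
        if n % p == 0:
            m = n
            while m % p == 0:
                m //= p
            return log(p) if m == 1 else mp.mpf(0)
    return mp.mpf(0)

def P_X(s, X):
    """Finite Euler product exp(sum_{n<=X} Lambda(n) / (n^s log n))."""
    return exp(sum(mangoldt(n) / (power(n, s) * log(n)) for n in range(2, X + 1)))

def chi(s):
    """Factor in the functional equation zeta(s) = chi(s) zeta(1-s)."""
    return power(2, s) * pi**(s - 1) * sin(pi * s / 2) * gamma(1 - s)

def zeta_X(s, X):
    """zeta_X(s) = P_X(s) + chi(s) * conj(P_X(1 - conj(s)))."""
    return P_X(s, X) + chi(s) * conj(P_X(1 - conj(s), X))

if __name__ == "__main__":
    # Right of the strip, P_X is close to zeta (the prime-power tail is tiny).
    print(fabs(P_X(mpc(2, 0), 500) - zeta(2)))
    # On the critical line, compare |zeta_X| with 2|zeta| between low-lying zeros.
    for t in (17.5, 23.0, 27.5):
        s = mpc("0.5", t)
        print(t, fabs(zeta_X(s, 100)), 2 * fabs(zeta(s)))
```

The printed comparison of $|\zeta_X(\frac12+it)|$ with $2|\zeta(\frac12+it)|$ between the low-lying zeros illustrates observation (2); no claim is made here about the rate of convergence.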
An explanation for the second observation is that although $\zeta_X(s)$ approximates $\zeta(s)$ to the right of the critical line, so does $P_X(s)$. Therefore $\zeta_X(s) = P_X(s) + \chi(s)\overline{P_X(1-\bar{s})}$ might be a closer approximation to the function $F(s) = \zeta(s) + \chi(s)\overline{\zeta(1-\bar{s})}$ than to $\zeta(s)$. If this is the case, then on the critical line we have by the functional equation that $|F(\frac12+it)| = 2|\zeta(\frac12+it)|$. To establish both observations rigorously we need to introduce a slightly modified version of $\zeta_X(s)$. Let $P_X^*(s) = P_X(s)\exp\big(-F_2((s-1)\log X)\big)$ and define $\zeta_X^*(s) = P_X^*(s) + \chi(s)\overline{P_X^*(1-\bar{s})}$.
Note that by (26), the factor $\exp\big(-F_2((s-1)\log X)\big)$ is close to $1$ when $\sigma \ge \frac12$ and $|s-1| \ge \frac{1}{10}$. The difference between $P_X(s)$ and $P_X^*(s)$, and so also between $\zeta_X(s)$ and $\zeta_X^*(s)$, is therefore small when $2 \le X \le t^2$, as we have been assuming till now. We need to take $X$ much larger, though, in what follows. Similarly, we replace $F_X(t)$ by $F_X^*(t)$; by (41) and (26) these two functions are also close when $X \le t^2$. The zeros of $\zeta_X^*(\frac12+it)$ are the solutions of $F_X^*(t) \equiv \pi \pmod{2\pi}$, and we will show that (1) and (2) above hold provided we use $\zeta_X^*(\frac12+it)$ in place of $\zeta_X(\frac12+it)$. Assume the Riemann Hypothesis is true. Taking the argument of both sides of (10) and recalling that $S(t) = (1/\pi)\arg\zeta(\frac12+it)$, we obtain (53), where $\gamma$ runs through the ordinates of the zeros of $\zeta(s)$. We use this to replace the quantity in parentheses in (52) and obtain (54). Now, by (39), we may estimate the resulting expression. We use this first to show that the zeros of $\zeta_X^*(s)$ cluster around the zeros of $\zeta(s)$ as $X \to \infty$. Let $\gamma$ and $\gamma'$ denote ordinates of distinct consecutive zeros of $\zeta(s)$, and set $\Delta = |\gamma - \gamma'|$. Also, fix an $\epsilon$ with $0 < \epsilon < 1/4$ and let $I = [\gamma + \epsilon\Delta, \gamma' - \epsilon\Delta]$. Then by (24) and Lemma 3,
uniformly for $t \in I$. It now follows from (54) that given any $\delta > 0$, there exists an $X_0 = X_0(\gamma, \epsilon, \delta)$ such that if $X \ge X_0$, then the corresponding estimate holds uniformly for $t \in I$, where $\|x\|$ denotes the distance from $x$ to the nearest integer. Since $N(t)$ is an integer when $t \in I$, this means that if $\delta < \frac12$, then $\frac12 + it$ is not a zero of $\zeta_X^*(s)$. Thus $I$ is free of zeros of $\zeta_X^*(s)$ when $X$ is sufficiently large. Now we show that $|\zeta_X^*(\frac12+it)|$ tends to $2|\zeta(\frac12+it)|$ on $I$. By (9) and (52) we may write $\zeta_X^*(\frac12+it)$ in a convenient form. Also, by Theorem 3.1 and the definitions of $P_X^*$ and $Z_X$, we have two further estimates. From the first of these we see that if (56) holds with $\delta$ sufficiently small, then the required estimate holds uniformly for $t \in I$. From the second and (55) we see that the same is true on $I$ if $X$ is large enough. Thus, $|\zeta_X^*(\frac12+it)| \to 2|\zeta(\frac12+it)|$ as $X \to \infty$ uniformly for $t \in I$. Combining our results, we now have

Theorem 9.1. Assume the Riemann Hypothesis. Let $\gamma$ and $\gamma'$ denote ordinates of distinct consecutive zeros of the Riemann zeta-function, and let $I$ denote a closed subinterval of $(\gamma, \gamma')$. Then for all sufficiently large $X$, $\zeta_X^*(\frac12+it)$ has no zeros with $t \in I$. Moreover, $|\zeta_X^*(\frac12+it)| \to 2|\zeta(\frac12+it)|$ as $X \to \infty$ uniformly for $t \in I$.
I hope to give a more complete analysis of the approximations above in a subsequent article.
E. Bombieri has pointed out to me that (53) is closely related to an explicit formula of Guinand ([11], [12]). There is no sum over zeros in Guinand's formula because he takes a limit. Also, his $\Lambda(n)$ are unweighted.
However, this is only a minor difference.
It is remarkable that the zeros of $\zeta_X^*(s)$ and $\zeta_X(s)$ are close to those of $\zeta(s)$ (Figures 1 and 2) even when $X$ is small. Formula (54) offers a possible explanation for this. Suppose that $t = \gamma$, the ordinate of a zero of $\zeta(s)$ with multiplicity $m$. Then $N(\gamma)$ is an integer, so (54) reduces to a sum over zeros. Now, a more precise version of (13) is that if $y$ is real, then $F_2(iy)$ has an expansion with real coefficients $a_k$, the argument being $\pi/2$ when $y = 0$ (the limit as $y \to 0^+$). $\operatorname{Im} F_2(iy)$ is an odd function (for $y \ne 0$). Furthermore, for larger $y$ we have $\operatorname{Im} F_2(iy) = \frac{\sin y}{y^2}\big(1 + O(1/|y|)\big)$ by (23). Thus, the $m$ terms in the sum with $\gamma' = \gamma$ contribute $m\pi/2$, and the terms with $|\gamma - \gamma'|\log X$ large are decreasing and oscillating. It might also be the case that the small and intermediate range terms cancel to a large degree, because $\operatorname{Im} F_2(iy)$ is odd and we expect the $\gamma'$ to be somewhat randomly distributed. If this is so, then $(1/2\pi)F_X^*(\gamma)$ will be close to $m/2 \pmod 1$. Thus, if $m$ is odd (it is believed that $m$ always equals $1$), it would not be surprising to find a zero of $\zeta_X^*(\frac12+it)$ nearby.
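This closeness is easy to visualize. On the critical line $\chi(\frac12+it) = e^{-2i\theta(t)}$, so in the simplified unweighted model $\zeta_X(\frac12+it) = 2e^{-i\theta(t)}\operatorname{Re}\big(e^{i\theta(t)}P_X(\frac12+it)\big)$, and its zeros on the line are the sign changes of the real function $Z_X(t) = 2\operatorname{Re}\big(e^{i\theta(t)}P_X(\frac12+it)\big)$. The pure-Python sketch below (our construction, with $\theta$ replaced by its leading asymptotic) locates these sign changes for the very small value $X = 20$ and prints the nearest ordinate of an actual zeta zero for comparison.

```python
# Pure-Python sketch (assumptions: unweighted P_X with sharp cutoff; theta(t)
# is the Riemann-Siegel theta function, here via its leading asymptotic).
import cmath
import math

def mangoldt(n):
    """von Mangoldt Lambda(n): log p if n = p^k, else 0."""
    for p in range(2, n + 1):
        if n % p == 0:
            m = n
            while m % p == 0:
                m //= p
            return math.log(p) if m == 1 else 0.0
    return 0.0

def log_P_X(t, X):
    """log P_X(1/2 + it) = sum_{n<=X} Lambda(n) / (n^(1/2+it) log n)."""
    s = complex(0.5, t)
    return sum(mangoldt(n) / (cmath.exp(s * math.log(n)) * math.log(n))
               for n in range(2, X + 1))

def theta(t):
    """Riemann-Siegel theta, leading asymptotic."""
    return t / 2 * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8

def Z_X(t, X):
    """Real function whose sign changes are the zeros of the model on the line."""
    return 2 * (cmath.exp(1j * theta(t) + log_P_X(t, X))).real

def sign_changes(X, a, b, step):
    """Midpoints of grid cells on [a, b] where Z_X changes sign."""
    out = []
    prev = Z_X(a, X)
    for k in range(1, int((b - a) / step) + 1):
        t = a + k * step
        cur = Z_X(t, X)
        if prev * cur < 0:
            out.append(t - step / 2)
        prev = cur
    return out

if __name__ == "__main__":
    zeros_zeta = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062, 37.586178]
    for c in sign_changes(20, 10.0, 40.0, 0.02):
        print(round(c, 3), "nearest zeta zero:",
              min(zeros_zeta, key=lambda z: abs(z - c)))
```

Compare the printed crossings with the listed ordinates; the hard-coded zeros are the first six ordinates of $\zeta(s)$, and the comparison is illustrative only.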
While writing this paper, I learned from a lecture by J. P. Keating that he and E. B. Bogomolny had worked with a function similar to $\zeta_{t/2\pi}$ restricted to the critical line as a heuristic tool to calculate the pair correlation function of the zeros of $\zeta(s)$ (see, for example, Bogomolny and Keating [4] and Bogomolny [1]). In fact, Professor Keating [18] had first considered such a function in the early 90s and observed that its zeros are quite close to those of the zeta-function. He and his graduate student, Steve Banham, also heuristically investigated how close the zeros of $\zeta_X(\frac12+it)$ and $\zeta(s)$ are as a function of $X$.

10. Why are the zeros of $\zeta_X(s)$ simple and why do they repel?
The construction, properties, and graphs of the functions $\zeta_X(s)$ suggest that they model the behavior of $\zeta(s)$, particularly with regard to the position of zeros. Therefore, explanations of why the zeros of $\zeta_X(s)$ are simple and repel each other could shed light on why the zeros of $\zeta(s)$ have these same properties.
Theorem 8.1 shows that if the Riemann Hypothesis holds, then the zeros of $\zeta_X(\frac12+it)$ with $t \ge C_0$ are simple provided that $X \le \exp(C_3\log t/\Phi(t))$ for some constant $C_3 > 0$. Furthermore, Theorem 8.2 shows unconditionally that even for $X$ as large as $\exp(o(\log^{1-\epsilon} t))$, $100\%$ of the zeros are simple. The structure of $\zeta_X(\frac12+it)$ suggests why. The zeros of $\zeta_X(\frac12+it)$ with $t \ge C_0$ are the solutions of the congruence $F_X(t) \equiv \pi \pmod{2\pi}$. In other words, they are the $t$-coordinates of the points where the curve $y = F_X(t)$ crosses the equally spaced horizontal lines $y = (2k+1)\pi$. If such a $t$ is to be the ordinate of a multiple zero of $\zeta_X(s)$, it must also be a solution of the equation $F_X'(t) = 0$. We saw that this cannot happen for $X \le \exp(C_3\log t/\Phi(t))$, and that it cannot happen often if $\log X = o(\log\tau)$. But clearly, even for $X$ a power of $t$, this should happen rarely, if ever.
What about repulsion? By (49) and Theorem 7.4, $F_X'(t) \ll \Phi(t)\log X + \log t$ when $2 \le X \le t^2$. As in Section 7, we divide the zeros into two kinds. Zeros of the first kind arise when $y = F_X(t)$ increases or decreases from $y = (2k+1)\pi$ to the next larger or smaller odd multiple of $\pi$ without first re-crossing $y = (2k+1)\pi$; all other zeros are zeros of the second kind. Suppose that $\gamma_X$ and $\gamma_X'$ are ordinates of consecutive zeros of $\zeta_X(\frac12+it)$, and that $\frac12 + i\gamma_X$ is a zero of the first kind. Then $|F_X(\gamma_X') - F_X(\gamma_X)| = 2\pi$, while by the bound above $|F_X(\gamma_X') - F_X(\gamma_X)| \ll (\Phi(\gamma_X)\log X + \log\gamma_X)(\gamma_X' - \gamma_X)$. Thus, $\gamma_X' - \gamma_X \gg (\Phi(\gamma_X)\log X + \log\gamma_X)^{-1}$. Recall that $(\log\gamma_X/\log\log\gamma_X)^{1/2} \ll \Phi(\gamma_X) \ll \log\gamma_X$. Thus, if $X \le \gamma_X^2$, then $\gamma_X' - \gamma_X \gg (\log\gamma_X)^{-a}$ for some $a \in [1, \frac32]$. Note that if $X \le \exp(C_3\log t/\Phi(t))$ with $C_3$ as in Theorem 8.1, then $F_X'(t) > 0$ and all zeros are of the first kind. Furthermore, by the proof of Theorem 7.6, $\sim (t/2\pi)\log(t/2\pi)$ of the zeros are of the first kind when $\log X = o(\log\tau)$.
If $\frac12 + i\gamma_X$ is a zero of the second kind, then $F_X(\gamma_X') - F_X(\gamma_X) = 0$ and the argument above does not work. It may be, however, that this does not happen often; that is, most zeros may be of the first kind.
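The transversality mechanism can be made concrete numerically. In the simplified unweighted model, write the total phase $\varphi(t) = \theta(t) + \operatorname{Im}\log P_X(\frac12+it)$, so that zeros on the line are the solutions of $\varphi(t) \equiv \pi/2 \pmod{\pi}$. Since $\big|\frac{d}{dt}\operatorname{Im}\log P_X\big| \le \sum_{n\le X}\Lambda(n)n^{-1/2}$ while $\theta'(t) \sim \frac12\log\frac{t}{2\pi}$, the phase is strictly increasing once $t$ is large compared with $X$: every level crossing is then transversal (simple zeros) and consecutive crossings are at least $\pi/\max\varphi'$ apart (repulsion). A sketch under these assumptions:

```python
# Sketch of the monotone-phase mechanism (unweighted P_X model; theta via its
# leading asymptotic).  For X = 10, sum Lambda(n)/sqrt(n) is about 3.54, while
# theta'(t) ~ 0.5 log(t/2pi) is about 5.99 near t = 10^6, so phi must increase.
import cmath
import math

def mangoldt(n):
    """von Mangoldt Lambda(n): log p if n = p^k, else 0."""
    for p in range(2, n + 1):
        if n % p == 0:
            m = n
            while m % p == 0:
                m //= p
            return math.log(p) if m == 1 else 0.0
    return 0.0

def phi(t, X):
    """Total phase theta(t) + Im log P_X(1/2 + it)."""
    s = complex(0.5, t)
    im = sum(mangoldt(n) / (cmath.exp(s * math.log(n)) * math.log(n))
             for n in range(2, X + 1)).imag
    return t / 2 * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8 + im

def min_slope(X, a, b, step):
    """Smallest difference quotient of phi on a grid over [a, b]."""
    ts = [a + k * step for k in range(int((b - a) / step) + 1)]
    vals = [phi(t, X) for t in ts]
    return min((v2 - v1) / step for v1, v2 in zip(vals, vals[1:]))

if __name__ == "__main__":
    # Near t = 10^6 the phase is strictly increasing for X = 10, so every
    # crossing of a level pi/2 + k*pi is transversal, and consecutive zeros
    # of the model are at least pi / (max phi') apart.
    print(min_slope(10, 1.0e6, 1.0e6 + 2, 0.01))
```

The design point is that the slope stays in roughly $[\theta' - 3.54,\ \theta' + 3.54]$, which is bounded away from $0$ here; this is the quantitative content of "all zeros are of the first kind" for small $X$.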
The heuristic argument at the end of the last section, which suggests that the sum over $\gamma$ is small away from ordinates of zeta zeros, when applied to (53) with $X \le t^2$, indicates how $F_X(t)$ should behave between ordinates. Of course, it is not clear how large $X$ should be relative to $t$. However, graphs of $f_X(t)$ indicate that they are close to the graph of $S(t)$ when $X$ is moderately large: there are small oscillations along the downward slopes of $S(t)$, and then a flatter, not necessarily vertical, rise near the jumps of $S(t)$ (Figure 3). For $F_X(t)/2\pi$, which approximates $N(t) - 1$, this means that the oscillations tend to be along the flat part of the "steps" and not at the rise (Figure 4). However, zeros of $\zeta_X(\frac12+it)$ correspond to solutions of $F_X(t)/2\pi \equiv \frac12 \pmod 1$, and these will be abscissae of points that are about half-way up the rise of $F_X(t)$. This suggests that zeros of the second kind are unlikely.
Our arguments have assumed that $X \le t^2$, but we do not know whether this is appropriate for imitating the zeta-function in this context. If not, we could repeat the arguments with $F_X^*(t)$. This would introduce the term $-\operatorname{Im} F_2\big((-\frac12+it)\log X\big)$, which can be as large as $X/(t^2\log^2 X)$. Applied to the argument for gaps between zeros of the first kind with ordinates around $t$, and assuming $X$ is a power of $t$ with exponent greater than $2$, this leads to $\gamma_X' - \gamma_X \gg 1/\gamma_X^b$ for some positive $b$ in place of (57). The repulsion between zeros of the zeta-function obtained by extrapolating from Montgomery's pair correlation conjecture predicts a lower bound of a similar shape.

11. Other $L$-functions and sums of $L$-functions

We now define the functions $P_X(s, \chi)$ and $L_X(s, \chi)$ by analogy with $P_X(s)$ and $\zeta_X(s)$. Observe that $\overline{P_X(\bar{s}, \chi)} = P_X(s, \bar{\chi})$. Clearly, theorems corresponding to those we have proved for $\zeta_X(s)$ hold for $L_X(s, \chi)$. In particular, one can show that any zeros off the critical line have imaginary part at most $C_0$. Further, if the Riemann Hypothesis holds for $L(s, \chi)$, then $N_X(t, \chi)$, the number of zeros of $L_X(\frac12+iu, \chi)$ with $0 < u \le t$, satisfies the analogous estimate when $2 \le X \le t^2$, and equality holds if $X$ is much smaller. (We assume $q$ is fixed.) Also, unconditionally we have $N_X(t, \chi) = (1+o(1))(t/2\pi)\log(qt/2\pi)$ provided $\log X = o(\log\tau)$.
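For a concrete instance of $P_X(s,\chi)$, take the nonprincipal character $\chi$ mod $4$. To the right of the strip the finite product converges quickly to $L(s,\chi)$; at $s = 2$, for example, it should approach $L(2,\chi) = G \approx 0.91596559$ (Catalan's constant). A sketch with the unweighted coefficients (our simplification, as before):

```python
# P_X(s, chi) = exp(sum_{n<=X} Lambda(n) chi(n) / (n^s log n)) for the
# nonprincipal character mod 4 (unweighted-Lambda simplification).
import math

def mangoldt(n):
    """von Mangoldt Lambda(n): log p if n = p^k, else 0."""
    for p in range(2, n + 1):
        if n % p == 0:
            m = n
            while m % p == 0:
                m //= p
            return math.log(p) if m == 1 else 0.0
    return 0.0

def chi4(n):
    """Nonprincipal Dirichlet character mod 4."""
    return {0: 0, 1: 1, 2: 0, 3: -1}[n % 4]

def P_X_chi(sigma, X):
    """Finite Euler product for L(s, chi4) at real s = sigma."""
    total = sum(mangoldt(n) * chi4(n) / (n ** sigma * math.log(n))
                for n in range(2, X + 1))
    return math.exp(total)

if __name__ == "__main__":
    CATALAN = 0.9159655941772190  # L(2, chi4)
    for X in (10, 100, 1000):
        print(X, P_X_chi(2.0, X), abs(P_X_chi(2.0, X) - CATALAN))
```

The error is governed by the tail of prime powers above $X$, so it shrinks as $X$ grows; this is the $L$-function analogue of $P_X$ approximating $\zeta$ to the right of the strip.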
A number of authors ([2], [3], [14], [23]) have proved theorems on the distribution of zeros of linear combinations of $L$-functions. The idea leading to these results was first suggested by H. Montgomery and is roughly as follows. Consider the case of two distinct Dirichlet $L$-functions $L(s, \chi_1)$ and $L(s, \chi_2)$ to the same modulus $q$ and having the same functional equation. One can show that $f_1(t) = \log|L(\frac12+it, \chi_1)|/\sqrt{\pi\log\log t}$ and its analogue $f_2(t)$ for $L(s, \chi_2)$ behave like independent random variables. Writing $P_X(s) = \sum_j b_j e^{i\alpha_j} P_X(s, \chi_j)$, the corresponding linear combination of the functions $L_X(s, \chi_j)$ takes the form $P_X(s) + \Psi(s)\overline{P_X(1-\bar{s})}$, where $\Psi$ is the common factor from the functional equation. On the critical line $|\Psi(\frac12+it)| = 1$, so this vanishes either when (1) $P_X(\frac12+it) = 0$, or when (2) $P_X(\frac12+it) \ne 0$ and the two terms exactly cancel.
For the moment let us pass over the first case and count the number of points at which the second case happens but the first does not. By (59), $\arg\Psi(\frac12+it)$ is essentially $-t\log\frac{tq}{2\pi}$ up to lower-order terms and a real constant $c_0$. Thus, (2) happens at least about $\frac{t}{2\pi}\log\frac{tq}{2\pi}$ times on $[0, t]$, up to an error controlled by the variation of $\arg P_X(\frac12+it)$. Here we define $\arg P_X(\frac12+it)$ by continuous variation from some point $\sigma_0 > 1$ on the real axis up to $\sigma_0 + it$ and then over to $\frac12 + it$, with our usual convention if $P_X$ vanishes at a point of this path. Since $0 \le \Lambda_X(n) \le \Lambda(n)$, we see that for $\sigma > 1$ the Dirichlet series coefficients $a(n)$ of $P_X(s)$ satisfy $0 \le a(n) \le 1$ and $a(1) = 1$. Consequently, $\operatorname{Re}(e^{-i\omega}P_X(s))$ is bounded below by an expression that is positive if $\sigma > 1 + 1/c_1$. Thus, if $\sigma_0$ meets this condition, $\operatorname{Re}(e^{-i\omega}P_X(\sigma_0+it)) > 0$ for all $t$, and $\arg P_X(\sigma_0+it)$ varies by at most $\pi$ on $[\sigma_0, \sigma_0+it]$. It follows that $|\arg P_X(\frac12+it)|$ is less than or equal to the change in argument of $P_X(s)$ on the segment $[\frac12+it, \sigma_0+it]$ plus $\pi$. By a well-known lemma in Section 9.4 of Titchmarsh [25], this change in argument can be bounded. The resulting bound is very crude, but it suffices here. By (11), we then obtain a lower bound for the number of points at which (2) occurs, and the leading term is larger than the $O$-term if $X < t^{1-2\epsilon}$. To leading order this is also the lower bound for the number of zeros of each $L_X(s, \chi)$. With more work we could show unconditionally that when $\log X/\log\tau = o(1)$, the number of zeros arising from case (2) is in fact $(1+o(1))(t/2\pi)\log(t/2\pi)$.
An analysis of the contribution of zeros from case (1) is rather elaborate and we will not attempt it here. One expects relatively few zeros to arise in this way, though, because it is unlikely that the curve z = P X ( 1 2 + it) will pass through the origin. As with our previous results, the difficulty we have is not to prove that there are lots of zeros on the line, but that there are not too many, and, just as before, we have only limited success with this.
The main point I wished to illustrate here is that one can see immediately from the structure of $L_X(s, \chi)$ why one might expect $100\%$ of the zeros of linear combinations of such functions to lie on the critical line. It therefore suggests a reason why this should be true for linear combinations of actual $L$-functions, and this reason is different from the usual one.

Appendix
Theorem. A necessary and sufficient condition for the truth of the Lindelöf Hypothesis is that (60) hold for the stated range of $X$ and $s$. Moreover, if the Riemann Hypothesis is true, then there exists a positive constant $C_1$ such that a sharper estimate holds for $X$ and $s$ as above. Here $\Phi(t)$ is an admissible function in the sense of Section 3.

Proof. The proof of a statement similar to the first assertion may be found in Titchmarsh [25] (Theorem 13.3). Moreover, the more difficult implication (that the Lindelöf Hypothesis implies (60)) is proved by an easy modification of the proof of the second assertion, to which we now turn.