
Classical Values of Zeta, as Simple as Possible But Not Simpler

Olga Holtz

Communicated by Notices Associate Editor Emilie Purvine

1. History in a Nutshell

The notorious Basel problem, posed by Pietro Mengoli in 1650, asked for the precise evaluation of the infinite sum

$$\sum_{n=1}^{\infty} \frac{1}{n^2} \;=\; 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots,$$

which a modern reader would instantly recognize as the value $\zeta(2)$ of the celebrated Riemann Zeta function at the point $s = 2$. This problem was solved by Leonhard Euler by 1734, with his solution read at the Saint Petersburg Academy of Sciences on December 5, 1734.

Euler’s investigation of this and related problems began in 1731 and continued at least till 1749. Euler’s tour de force led to his solution or, in fact, multiple solutions to the Basel problem, followed by his successful evaluation of $\zeta$ at all positive even integers and at all negative integers, his infinite product representation for $\zeta$, and his lesser-known formulation of its functional equation, albeit only with a partial proof. The fascinating history of Euler’s work on this subject is laid out in [2].

About a century later, in his groundbreaking memoir [16] (for an English translation, see [5]), Bernhard Riemann re-introduced $\zeta$ as a function of a complex variable $s$, established a number of its integral representations, found and justified its analytic continuation beyond the half-plane $\Re s > 1$, proved its functional equation, and formulated the Riemann hypothesis. Riemann’s memoir must have been the most influential nine pages of mathematics ever written.

2. Teaching Orthodoxy and Heterodoxy

These days, undergraduate students are introduced to the values of the $\zeta$-function at positive even integers and all nonpositive integers (which I will call ‘classical values’ here) roughly as follows. First, the value $\zeta(2) = \frac{\pi^2}{6}$ is derived, usually using a partial fraction decomposition of the cotangent function, which, in turn, is justified via the so-called Herglotz trick [1] or by using residues if the students are familiar with complex analysis. The same method also yields the values of $\zeta$ at all positive even integers.

The $\zeta$-function of a complex variable $s$ is then defined for $\Re s > 1$ using one of its integral forms and continued outside that half-plane. This continuation is usually performed using the Jacobi theta function and its functional equation, which, in turn, requires a proof of Poisson summation, a mysterious formula with a demanding proof for undergraduates. Once a symmetric integral form of $\zeta$ is proved, its functional equation follows. Then the values of $\zeta$ at negative integers are obtained using the functional equation. If the students are not familiar with complex analysis, the key facts of this derivation are merely stated.

This derivation of all ‘classical’ values of $\zeta$ is lengthy, and, to my mind, not particularly edifying.

The goal of this note is to present two simple ways to obtain all classical values of $\zeta$. Both arguments will proceed in the order I wish I had learned as an undergraduate myself: from evaluating $\zeta$ at all nonpositive integers, to its functional equation, to its evaluation at all even positive integers. I will argue below that this ‘counterintuitive’ order is in fact very ‘natural’. I will also endeavour to strip each of its steps of all complexity.

The first way is described in Section 3 following Riemann’s approach in his memoir [16]. This approach requires only basic complex analysis, no Poisson summation formula, and, in fact, no Jacobi theta function. The second way, presented in Section 4, streamlines Euler’s approach in his series of works [6]–[12]. This approach does not even require complex analysis, only results and techniques available to Euler in 1730–1750.

Interestingly, apart from exactly one sentence about the values of $\zeta$ being zero at negative even integers, Riemann himself did not even address the classical values of $\zeta$ in his memoir – or in his Nachlass, according to Siegel [4, 17]. Nevertheless, I have no doubt that the approach spelled out in Section 3 below was clear to Riemann. I am equally certain that the simplified version of Euler’s arguments presented in Section 4 would have been clear to Euler.

Thus, I claim no results below as new. However, their arrangement must be new.

The first impetus for this note was my dismay at my own arduous answer to the question “How to evaluate $\zeta$ at all classical points?” once asked by a curious undergraduate. The second impetus was my disturbing realization, while answering this question, that I did not really know (!) why the Bernoulli numbers arise at the integer values of $\zeta$. This is my attempt to redeem myself.

One more thing before we begin. The reader might wonder how much Riemann actually knew about Euler’s work on ‘his’ $\zeta$-function. Riemann’s expert knowledge of Latin and French (among other languages) and his remarkable ability to absorb mathematical knowledge by quick reading are well documented [15]. This, together with an assurance of André Weil given to Raymond Ayoub as reported in [2], makes me very certain that Riemann was intimately familiar with Euler’s research on the $\zeta$-function, most likely including Euler’s results on its functional equation. I should add that André Weil also made an intriguing conjecture about a possible influence of Eisenstein’s unpublished proof of the functional equation for the Dirichlet $L$-function modulo 4 on Riemann’s epic paper [16], along with a plausible explanation how Eisenstein’s copy of Gauss’s Disquisitiones with that proof written as an annotation reached the mathematical library in Giessen (see [18, 19]).

3. Riemann’s Way

3.1. Heart of the matter

Riemann’s starting point is the following observation based on a change of variables:

$$\int_0^{\infty} e^{-nx}\, x^{s-1}\, dx \;=\; \frac{1}{n^s} \int_0^{\infty} e^{-y}\, y^{s-1}\, dy \;=\; \frac{\Gamma(s)}{n^s},$$

with the last equality arising from the integral definition of the Gamma function due to Euler.
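This identity is easy to check numerically; the following sketch (the sample values $s = 3$, $n = 2$, the step size, and the truncation point are my choices, not from the article) compares a crude Riemann sum with $\Gamma(s)/n^s$:

```python
import math

# Numerical sanity check of  ∫_0^∞ e^{-nx} x^{s-1} dx = Γ(s)/n^s
# for sample values s = 3, n = 2.  A left-endpoint Riemann sum suffices:
# the integrand decays fast, so truncating at x = 50 is harmless.
s, n = 3, 2
h = 1e-4
integral = h * sum(math.exp(-n * h * i) * (h * i) ** (s - 1)
                   for i in range(1, int(50 / h)))
exact = math.gamma(s) / n ** s   # Γ(3)/2^3 = 2/8 = 0.25
print(integral, exact)
```

The agreement to several digits illustrates why summing these integrals over $n$ is a natural move.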

Riemann sums such integrals from $n = 1$ to $n = \infty$, obtaining

$$\Gamma(s)\,\zeta(s) \;=\; \int_0^{\infty} \frac{x^{s-1}}{e^x - 1}\, dx.$$

In a striking turn, Riemann then switches the sign of $x$ in the numerator, replacing $x^{s-1}$ by $(-x)^{s-1}$, and the contour of integration from the half line $[0, \infty)$ (or $(0, \infty)$) to the so-called Hankel contour, which I will denote by $H$. Incidentally, Riemann could have flipped that contour around the origin instead, sending it to infinity leftwards, but he proceeds as above, obtaining

$$\int_H \frac{(-x)^{s-1}}{e^x - 1}\, dx \;=\; \left(e^{i\pi(s-1)} - e^{-i\pi(s-1)}\right) \int_0^{\infty} \frac{x^{s-1}}{e^x - 1}\, dx \;=\; -2i \sin(\pi s)\,\Gamma(s)\,\zeta(s). \tag{1}$$
Figure 1.

The Hankel contour $H$.


This integration presupposes that the (usually) multi-valued function $(-x)^{s-1} = e^{(s-1)\log(-x)}$ is classically defined, so that the logarithm of $(-x)$ is real when $x$ is negative. The point of this clever trick is to ‘extract’ the value of $\zeta(s)$ for any complex value $s$ from the integral (1):

$$\zeta(s) \;=\; -\,\frac{1}{2i \sin(\pi s)\,\Gamma(s)} \int_H \frac{(-x)^{s-1}}{e^x - 1}\, dx. \tag{2}$$
In fact, just these few lines of Riemann’s memoir provide a key to the $\zeta$-values at all nonpositive integers. Indeed, if $s = -n$ is a nonpositive integer, the function $(-x)^{s-1}$ is single-valued and meromorphic with the only pole at the origin. So the contour $H$ can be deformed into a loop around the origin without changing the value of the integral (1). That value can be now obtained, by Cauchy’s residue theorem, from the coefficient of $\frac{1}{x}$ in the Laurent series of the integrand, i.e., from the coefficient of $x^{n+1}$ in the Taylor-Maclaurin series for $\frac{x}{e^x - 1}$. (Remember that $n$ is a nonnegative integer.) And the latter series was already computed by Euler (of course!), who showed that

$$\frac{x}{e^x - 1} \;=\; \sum_{m=0}^{\infty} B_m\, \frac{x^m}{m!},$$

where $(B_m)_{m \ge 0}$ denotes the sequence of Bernoulli numbers.
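For concreteness, these Bernoulli numbers satisfy the recurrence $\sum_{j=0}^{k}\binom{k+1}{j} B_j = 0$ for $k \ge 1$, obtained by multiplying the series by $e^x - 1$; here is a minimal sketch in exact arithmetic (function name mine):

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    """First m+1 Bernoulli numbers, convention B_1 = -1/2 (from x/(e^x - 1))."""
    B = [Fraction(1)]
    for k in range(1, m + 1):
        # recurrence: sum_{j=0}^{k} C(k+1, j) B_j = 0 for k >= 1
        B.append(-sum(Fraction(comb(k + 1, j)) * B[j] for j in range(k))
                 / (k + 1))
    return B

print(bernoulli(6))  # 1, -1/2, 1/6, 0, -1/30, 0, 1/42
```

Note that all Bernoulli numbers with odd index $\ge 3$ vanish, a fact used repeatedly below.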

A remaining technicality is to make sense of the value $\sin(\pi s)\,\Gamma(s)$, which appears in the right-hand side of (2). Since the Gamma function has poles at all nonpositive integers, this is an indeterminate of the sort $0 \cdot \infty$, but it can be resolved simply, using the defining multiplicative property $\Gamma(s+1) = s\,\Gamma(s)$ of the Gamma function $n+1$ times:

$$\lim_{s \to -n} \sin(\pi s)\,\Gamma(s) \;=\; \lim_{s \to -n} \frac{\sin(\pi s)}{s(s+1)\cdots(s+n)}\;\Gamma(s+n+1) \;=\; \frac{\pi}{n!}.$$

Recalling that the integrand of (1) contains the sign factor $(-1)^{s-1} = (-1)^{n+1}$, we now get

$$\zeta(-n) \;=\; -\,\frac{n!}{2\pi i}\,(-1)^{n+1} \oint \frac{x^{-n-1}}{e^x - 1}\, dx \;=\; (-1)^n\, n!\; \frac{B_{n+1}}{(n+1)!},$$

and hence

$$\zeta(-n) \;=\; (-1)^n\, \frac{B_{n+1}}{n+1}. \tag{3}$$

This formula holds for all nonpositive integers $-n$; since the Bernoulli numbers with odd index $\ge 3$ vanish, it reads $\zeta(-n) = -\frac{B_{n+1}}{n+1}$ for $n \ge 1$, while $\zeta(0) = B_1 = -\frac{1}{2}$.

Riemann in [16] immediately proceeds to note that the integration in (1) can be turned ‘inside out’ whenever $\Re s < 0$. This amounts to augmenting the Hankel contour with a big circle $C_R$ of radius $R$, whose contribution to the integral will be zero as $R \to \infty$. Riemann does not elaborate on this point, which can be seen by multiplying the arc length $2\pi R$ by a bound of order $R^{\Re s - 1}$ on the integrand, which we get if the contour stays away from the poles on the imaginary axis, say, by taking $R$ to be an odd multiple of $\pi$. The resulting bound, of order $R^{\Re s}$, tends to $0$.

In that way, the contour $H \cup C_R$ can be thought of as carving a slice out of the complex plane, and Cauchy’s residue theorem can be used not at the origin but at all remaining poles of the integrand, i.e., the values $x = 2\pi i k$, $k \in \mathbb{Z} \setminus \{0\}$. The resulting contribution from each pole is then $\Gamma(1-s)\,(-2\pi i k)^{s-1}$, so the total is

$$\zeta(s) \;=\; \Gamma(1-s) \sum_{k=1}^{\infty} \left[ (-2\pi i k)^{s-1} + (2\pi i k)^{s-1} \right] \;=\; \Gamma(1-s)\,(2\pi)^{s-1}\, 2 \sin\frac{\pi s}{2}\;\zeta(1-s),$$

which simplifies to

$$\zeta(s) \;=\; 2^s\, \pi^{s-1} \sin\frac{\pi s}{2}\;\Gamma(1-s)\,\zeta(1-s). \tag{4}$$

This is one of the simplest forms of the famous functional equation for the $\zeta$-function.
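A quick numerical check (a sketch of my own; the sample point is chosen for convenience): one standard equivalent form of the functional equation is $\zeta(1-s) = 2\,(2\pi)^{-s}\cos(\frac{\pi s}{2})\,\Gamma(s)\,\zeta(s)$, and taking $s = 2$ should reproduce $\zeta(-1) = -\frac{1}{12}$:

```python
import math

def zeta(s, terms=100000):
    """Direct summation of the defining series; valid for s > 1."""
    return sum(k ** -s for k in range(1, terms + 1))

# Functional equation rewritten with s -> 1-s:
#   zeta(1-s) = 2 (2*pi)^(-s) cos(pi*s/2) Gamma(s) zeta(s).
# At s = 2 the right-hand side should give zeta(-1) = -1/12.
s = 2
rhs = 2 * (2 * math.pi) ** (-s) * math.cos(math.pi * s / 2) \
      * math.gamma(s) * zeta(s)
print(rhs)  # ≈ -0.0833... = -1/12
```

The check only needs the convergent series on the right, which is what makes the functional equation such an effective continuation device.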

Now apply (4) to evaluate $\zeta$ at positive even integers. We have

$$\zeta(2n) \;=\; 2^{2n}\, \pi^{2n-1} \sin(\pi n)\,\Gamma(1-2n)\,\zeta(1-2n),$$

and so must only make sense of the term $\sin\!\left(\frac{\pi s}{2}\right)\Gamma(1-s)$ at $s = 2n$, which we already know how to do, since the limiting value of that product is computed just like the limiting value of the product $\sin(\pi s)\,\Gamma(s)$ above, modulo the sign: $\lim_{s \to 2n} \sin\!\left(\frac{\pi s}{2}\right)\Gamma(1-s) = \frac{(-1)^n \pi}{2\,(2n-1)!}$. Thus, we get

$$\zeta(2n) \;=\; (-1)^n\, \frac{(2\pi)^{2n}}{2\,(2n-1)!}\;\zeta(1-2n) \;=\; (-1)^{n+1}\, \frac{(2\pi)^{2n}\, B_{2n}}{2\,(2n)!}.$$
That’s it! But the reflection formula won’t yield the values of $\zeta$ at positive odd integers. [Why?]
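The closed form $\zeta(2n) = (-1)^{n+1} \frac{(2\pi)^{2n} B_{2n}}{2\,(2n)!}$ can be spot-checked against direct summation; in this sketch of mine the first few Bernoulli numbers are hard-coded:

```python
import math

# Spot-check  zeta(2n) = (-1)^(n+1) (2*pi)^(2n) B_{2n} / (2 (2n)!)
# against direct summation of the series.
B = {2: 1/6, 4: -1/30, 6: 1/42}        # Bernoulli numbers B_{2n}
results = {}
for n in (1, 2, 3):
    closed = ((-1) ** (n + 1) * (2 * math.pi) ** (2 * n) * B[2 * n]
              / (2 * math.factorial(2 * n)))
    direct = sum(k ** (-2 * n) for k in range(1, 20000))
    results[2 * n] = (closed, direct)
    print(2 * n, closed, direct)   # zeta(2)=pi^2/6, zeta(4)=pi^4/90, zeta(6)=pi^6/945
```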

4. Euler’s Way

Next, let us imagine how Euler might have implemented the same program: 1) evaluate $\zeta$ at the nonpositive integers, 2) establish its functional equation, and 3) use it to evaluate $\zeta$ at the even positive integers. While indulging this fantasy, let us agree to use only results and techniques known to Euler around the time he worked on the Basel problem and its generalizations.

4.1. Heart of the matter

Euler was a virtuoso of generating functions. So let us start with the generating function for all nonpositive integer values of $\zeta$. If that task appears daunting, gentle reader, fear not!

Let $S_n(x)$ denote the finite sum $1^n + 2^n + \cdots + x^n$, and consider the exponential generating function for the sequence $(S_n(x))_{n \ge 0}$. That means

$$\sum_{n=0}^{\infty} S_n(x)\, \frac{t^n}{n!} \;=\; \sum_{k=1}^{x} e^{kt} \;=\; \frac{e^{(x+1)t} - e^t}{e^t - 1}.$$
If $x$ tends to $\infty$ (taking $t < 0$, so that $e^{(x+1)t} \to 0$), the sum $S_n(x)$ must be replaced by the value of the $\zeta$-function at $-n$ and the generating function by its limit $\frac{-e^t}{e^t - 1}$. An ‘unpleasant’ term $-\frac{1}{t}$ appears in the Laurent expansion of this function at the origin, which we will, for now, unceremoniously discard. We get

$$\sum_{n=0}^{\infty} \zeta(-n)\, \frac{t^n}{n!} \;=\; \frac{1}{t} - \frac{e^t}{e^t - 1} \;=\; \sum_{n=0}^{\infty} (-1)^n\, \frac{B_{n+1}}{(n+1)!}\; t^n.$$
Comparing coefficients, we get $\zeta(-n) = (-1)^n\, \frac{B_{n+1}}{n+1}$, which is formula (3).
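The coefficient extraction can be mechanized with exact rational power-series arithmetic; the following sketch (truncation order and variable names mine) recovers $\zeta(0), \zeta(-1), \zeta(-2), \ldots$ from the expansion of $\frac{1}{t} - \frac{e^t}{e^t - 1}$:

```python
from fractions import Fraction
from math import factorial

N = 7
# Taylor coefficients of (e^t - 1)/t = sum_{m>=0} t^m/(m+1)!
a = [Fraction(1, factorial(m + 1)) for m in range(N + 2)]
# Reciprocal power series b, so that sum b_m t^m = t/(e^t - 1), b_m = B_m/m!
b = [Fraction(1)]
for m in range(1, N + 2):
    b.append(-sum(a[j] * b[m - j] for j in range(1, m + 1)))
# g(t) = 1/t - e^t/(e^t - 1) = -1 - sum_{m>=1} b_m t^{m-1}
g = [Fraction(-1) - b[1]] + [-b[m + 1] for m in range(1, N + 1)]
# Reading off zeta(-n) = n! * [t^n] g(t):
zeta_neg = [factorial(n) * g[n] for n in range(N + 1)]
print(zeta_neg)  # -1/2, -1/12, 0, 1/120, 0, -1/252, 0, 1/240
```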

Now consider the odd version of the above generating function:

$$\sum_{n \text{ odd}} \zeta(-n)\, \frac{t^n}{n!} \;=\; \frac{1}{t} - \frac{1}{2}\cdot\frac{e^t + 1}{e^t - 1}.$$
On the other hand,

$$\frac{1}{t} - \frac{1}{2}\cdot\frac{e^t + 1}{e^t - 1} \;=\; \frac{1}{t} - \frac{1}{2} \coth\frac{t}{2},$$

and the latter function has another, magical, representation, due to Euler (who else?) [13]:

$$\pi \cot(\pi z) \;=\; \frac{1}{z} + \sum_{k=1}^{\infty} \frac{2z}{z^2 - k^2}. \tag{5}$$
Replacing $z$ by $\frac{t}{2\pi i}$ in (5), we now get

$$\frac{1}{t} - \frac{1}{2} \coth\frac{t}{2} \;=\; -\sum_{k=1}^{\infty} \frac{2t}{t^2 + 4\pi^2 k^2} \;=\; \sum_{n=1}^{\infty} (-1)^n\, \frac{2\,\zeta(2n)}{(2\pi)^{2n}}\; t^{2n-1}.$$
This yields a simple version of the functional equation (4), which is enough for our purposes:

$$\zeta(1-2n) \;=\; (-1)^n\, \frac{2\,(2n-1)!}{(2\pi)^{2n}}\; \zeta(2n), \qquad n = 1, 2, \ldots,$$

which finally implies

$$\zeta(2n) \;=\; (-1)^{n+1}\, \frac{(2\pi)^{2n}\, B_{2n}}{2\,(2n)!}$$

for all natural numbers $n$, Q.E.D.

4.2. More history

A few comments are in order to defend my contention that the preceding argument could have been obtained by Euler some time between 1730 and 1750. At that time, Euler was well familiar with the generating function $\frac{t}{e^t - 1}$ for the Bernoulli sequence and could certainly perform all generating function manipulations shown above up to and including the cotangent formula (5).

The timing of the latter formula is a bit tricky, since it was officially published by Euler only in 1748 in [13], which is however still within the time period considered. On the other hand, the cotangent function is simply the logarithmic derivative of the sine function, and Euler mentioned the infinite product for the sine function already in his first solution to the Basel problem in 1734, so it would not be a stretch to say that the formula (5) was known to Euler already in 1734.

As to the functional equation for the $\zeta$-function, Euler arrived at it by 1749. By that time, he long knew the positive even integer values of $\zeta$ and also understood how to evaluate $\zeta$ at negative integers. By comparing the values of $\zeta$ at the positive and negative integers, he obtained, a posteriori, the functional equation for all integer values of $s$, which he then naturally conjectured to hold true for all real values. Euler was also able to verify its validity by taking the limit at $s = 1$ (where $\zeta$ has a pole), and (numerically) check it at the half-integer points. Euler’s partial proof of the functional equation for the $\zeta$-function was published in [12] in 1749, 110 years before Riemann’s memoir [16] on the subject. Remarkably, both Euler’s and Riemann’s memoirs were submitted to the Berlin Academy of Sciences!

Detailed discussions of Euler’s 1749 paper were written by G.H. Hardy in [14] and R. Ayoub in [2], with a shorter discussion by A. Weil in [19]. Peculiarly, Hardy does not actually comment on the way Euler obtained the negative integer values of $\zeta$ in [12], but Ayoub does.

A key observation that led Euler to his understanding of the values of $\zeta$ at negative integers is the famous Euler–Maclaurin formula. It was initially claimed by Euler already in 1732/33 [8] and proved by him in 1736 in [6]. His derivation of it is strikingly modern. Let’s look at it right away.

Given any function $f$ and a constant $a$, Euler defines its ‘summatory’ function

$$\sigma f(x) \;:=\; f(a) + f(a+1) + \cdots + f(x),$$

then rewrites this relation as $\sigma f(x) - \sigma f(x-1) = f(x)$ and expands $\sigma f(x-1)$ into a Taylor series at $x$ to obtain:

$$f(x) \;=\; \sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k!}\, (\sigma f)^{(k)}(x).$$
Using the modern symbol $D$ for the differentiation operator $\frac{d}{dx}$, Euler (and we) can rewrite it as

$$f \;=\; \left(1 - e^{-D}\right) \sigma f,$$

whose inversion yields

$$\sigma f \;=\; \frac{1}{1 - e^{-D}}\, f \;=\; \frac{1}{D}\cdot\frac{D}{1 - e^{-D}}\, f,$$

which is exactly the generating function of the Bernoulli sequence (in the operator $D$)! This produces the Euler–Maclaurin formula:

$$\sigma f(x) \;=\; \int^x f(t)\, dt \;+\; \frac{1}{2}\, f(x) \;+\; \sum_{k=2}^{\infty} \frac{B_k}{k!}\, f^{(k-1)}(x) \;+\; C.$$
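For a polynomial $f$ the Euler–Maclaurin series terminates and gives an exact closed form; here is a small illustration of my own for $f(x) = x^3$, with the constant fixed by matching at $x = 1$:

```python
# Euler–Maclaurin for f(x) = x^3:  sigma f(x) = 1^3 + 2^3 + ... + x^3.
# The series terminates after the f''' term since f is a cubic.
def sigma_f(x, C=1/120):
    return (x**4 / 4                    # ∫ f
            + x**3 / 2                  # f/2
            + (1/6) / 2 * 3 * x**2      # (B_2/2!) f'
            + (-1/30) / 24 * 6          # (B_4/4!) f'''
            + C)                        # constant fixed by sigma_f(1) = 1

for x in (1, 4, 10):
    print(x, sigma_f(x), sum(k**3 for k in range(1, x + 1)))
```

The output reproduces the classical identity $1^3 + \cdots + x^3 = \left(\frac{x(x+1)}{2}\right)^2$ exactly.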
4.3. Less mystery

But, but, but…you must be asking yourself, isn’t the ‘derivation’ in Section 4.1 not quite kosher? In fact, wildly nonrigorous? Let’s take a closer look.

The first problematic moment, which the perceptive reader no doubt noticed right away, is the transition between the generating function that encodes all finite sums $S_n(x)$ and the generating function that is supposed to encode the infinite sums $1^n + 2^n + 3^n + \cdots$, $n \ge 0$. And, before we can address this transition, we are faced with the fact that each such sum in fact diverges as $x \to \infty$! So in what sense can we possibly think of each sum $S_n(x)$ as approaching $\zeta(-n)$? And how, for that matter, did Euler approach those values of zeta?

After all, Euler’s contemporaries were wary or downright scared of divergent series. Even much later, at the time of Hardy’s writing his book Divergent Series [14], that title alone would scandalize other mathematicians, so Hardy could be thought of as ‘trolling’ his colleagues. However, Euler was not easily disturbed by divergence. Many of Euler’s derivations used divergent series with great flair, arriving at correct results. So in what sense did Euler think of ‘summing’ quantities like $1^n + 2^n + 3^n + \cdots$ for $n \ge 0$? And what does it have to do with the Euler–Maclaurin summation?

Unsurprisingly, Euler found a brilliant if slightly roundabout way to access the values of $\zeta$ at nonpositive integers. Of course, he was perfectly aware that each sum $1^n + 2^n + \cdots + x^n$ tends to $\infty$ when $x$ does. Euler’s inventive solution was to switch to their alternating counterparts $1^n - 2^n + 3^n - \cdots$ instead. The skeptical reader might now ask: why? or rather, what for?

To understand this, let us indulge in a bit of wishful thinking. Imagine that both series $1^n + 2^n + 3^n + \cdots$ and $1^n - 2^n + 3^n - \cdots$ were convergent. Then we would have

$$\left(1^n + 2^n + 3^n + \cdots\right) - \left(1^n - 2^n + 3^n - \cdots\right) \;=\; 2\left(2^n + 4^n + 6^n + \cdots\right) \;=\; 2^{n+1}\left(1^n + 2^n + 3^n + \cdots\right),$$

relating the value of the alternating series with the value of the nonalternating one. But what of it? Surely, nothing would be gained by this move, as both series diverge. Yes and no.

Each alternating series still diverges in the usual sense, since even the necessary condition for convergence, that the general term must tend to zero as $k \to \infty$, is not met. However, as Euler knew full well [12], these series do converge in the sense of Abel, in today’s parlance. This means that each generating function

$$f_n(x) \;:=\; \sum_{k=1}^{\infty} (-1)^{k-1}\, k^n\, x^k$$

has a finite limit as $x$ approaches $1$ from below along the real axis. The latter is not hard: indeed, if $n = 0$, the resulting generating function is merely the geometric series

$$f_0(x) \;=\; x - x^2 + x^3 - \cdots \;=\; \frac{x}{1+x},$$

and the other functions can be obtained from it by repeatedly using the differential operator $x\frac{d}{dx}$:
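Abel summation is easy to watch numerically; in this sketch (sample point $x = 0.999$ and names are mine) the alternating series for $n = 1$ approaches $\frac{1}{4}$, and dividing by $1 - 2^{n+1} = -3$, as in the wishful-thinking identity above, recovers $\zeta(-1) = -\frac{1}{12}$:

```python
# Abel summation in action: sum (-1)^(k-1) k^n x^k converges for |x| < 1,
# and its limit as x -> 1- is the Abel sum of 1^n - 2^n + 3^n - ...
def abel_sum(n, x, terms=100000):
    return sum((-1) ** (k - 1) * k ** n * x ** k for k in range(1, terms + 1))

x = 0.999
alt = abel_sum(1, x)        # closed form x/(1+x)^2, tends to 1/4
print(alt)                  # ≈ 0.25
print(alt / (1 - 2 ** 2))   # ≈ -1/12, recovering zeta(-1)
```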