# Circles and Squares ... and Primes

Bill Casselman

University of British Columbia, Vancouver, Canada

Email Bill Casselman

### Introduction

A **prime number** is a positive integer $p$ that is divisible only by $\pm p$ and $\pm 1$. Now multiplication and addition interact rather strangely, in almost unfathomable ways. As a consequence, the distribution of prime numbers among the integers is somewhat random, and mathematicians have tried to find patterns in this distribution for a very long time. The starting point is a fact that the Greeks of classical times already knew: there are an infinite number of primes. Euclid (c. 300 BCE!) includes a proof of this (it is Proposition 20) in Book IX of the **Elements**.

Even now, there remain many puzzles concerning primes about whose solutions we seem to know next to nothing.

Surprisingly, even though prime numbers are very discrete objects, some of the major tools in investigating them have been certain continuous functions, and in particular the zeta function $\zeta(s)$ first introduced around 1750 by the Swiss mathematician Leonhard Euler. It is probably the most famous function in all of mathematics, at least among mathematicians, but it remains largely a mystery to the rest of the world.

The zeta function is just one of several similar functions, all of which have some application to problems in number theory. The point is that finding interesting patterns in the small-scale distribution of prime numbers is very difficult, perhaps impossible, but that patterns do appear when one looks at large collections of them, just as irregularities in any structure disappear when perceived from a great distance.

In this column I am going to try to give you some idea of how this works by looking at a few relatively simple examples. The full theory of $\zeta(s)$ involves some extremely difficult mathematics, but it is possible to give some idea of what is going on just through elementary calculus.

I'll look first at some problems about the distribution of the usual prime numbers $2$, $3$, $5$, $7$ ... , but in the second part of this column I'll look briefly at a less familiar problem concerning prime "numbers" among the **Gaussian integers**.

### Dirichlet's theorem

Except for $2$, all prime numbers are odd. The first few among these are $$ 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, \dots $$ If we divide these by $4$, we get remainders $$ 3, 1, 3, 3, 1, 1, 3, 3, 1, 3, 1, 1, 3, 3, 1, 3, 1, 3, 3, 1, 3, 3, 1, 1, \dots $$ Now if $p$ is an odd prime, its remainder upon division by $4$ has to be either $1$ or $3$, and from the data it certainly looks at least plausible that there are roughly as many primes leaving remainder $1$ as those leaving remainder $3$, at least in any finite stretch. A bit more calculation will provide more evidence. Let $\pi_{1}(n)$ be the number of primes $\le n$ with remainder $1$, $\pi_{3}(n)$ those with remainder $3$. The following graph shows the values of these functions for $n$ in the range $[0,1000]$. The primes leaving remainder $3$ show a slight edge, but the ratio of $\pi_{3}/\pi_{1}$ is not significantly different from $1$.
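These counts are easy to reproduce; here is a minimal Python sketch (the helper `is_prime` is my own trial-division check, fine at this scale):

```python
def is_prime(n):
    # trial division; fine for the small ranges used here
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# counts of primes <= 1000 leaving remainder 1, resp. 3, on division by 4
pi1 = sum(1 for p in range(3, 1001) if is_prime(p) and p % 4 == 1)
pi3 = sum(1 for p in range(3, 1001) if is_prime(p) and p % 4 == 3)
```

The two counts stay close together, with the remainder-$3$ primes keeping a slight lead on this range.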

It is in fact true, and provable, that the ratio $\pi_{3}(n)/\pi_{1}(n)$ is asymptotically equal to $1$. I'll not prove that, but I shall at least explain why both $\pi_{1}(n)$ and $\pi_{3}(n)$ have infinite limit as $n \rightarrow \infty$, or in other words that there are an infinite number of primes $p$ that leave remainder $1$, and an infinite number that leave remainder $3$.

**Euler's zeta function**

The reasoning starts with Euler. He defined the function $$ \zeta(s) = 1 + {1 \over 2^{s} } + { 1 \over 3^{s} } + { 1 \over 4^{s} } + \cdots $$

**Convergence**
Just as with all infinite series, one has to begin by asking, *for what values of $s$ does this series make sense?* The terms are all positive, so we only need to know if the sum is bounded. The answer comes from the integral test often encountered in first year calculus courses. We can visualize the series by plotting $1/x^{s}$ for some sample values of $s$, in the range $x=1$ to $\infty$. Here, for example, is what we get for $s=2$:

Euler's sum is the area inside the boxes. Since $1/x^{s}$ is a decreasing function of $x$ for any $s > 0$, we can see that $$ \int_{1}^{n+1} { dx \over x^{s} } < 1 + {1 \over 2^{s} } + { 1 \over 3^{s} } + { 1 \over 4^{s} } + \cdots + {1 \over n^{s} } < 1 + \int_{1}^{n} { dx \over x^{s} } $$ We also have the integration formula $$ \int_{1}^{n} { dx \over x^{s} } = \cases { { 1 \over s-1 } - { n^{1-s} \over s - 1 } & $s > 1$ \cr \phantom{\Big|\kern-2pt } \log n & $s=1$ \cr } $$ for all $s \ge 1$. We deduce a number of things. First of all, $$ 1 + {1 \over 2^{s} } + { 1 \over 3^{s} } + { 1 \over 4^{s} } + \cdots + {1 \over n^{s} } < 1 + { 1 \over s - 1 }$$ as long as $s > 1$, so that the series converges. Second, letting $n \rightarrow \infty$ in the first inequality, we see that $$ { 1 \over s - 1 } < \zeta(s) < 1 + { 1 \over s-1 } \, . $$ Third, for $s=1$ the sum does not converge, since it is sandwiched between two quantities that become infinite. We can also see this because $$ 1 + {1 \over 2 } + { 1 \over 3 } + { 1 \over 4 } + \cdots + { 1 \over n } $$ is approximately $\log n$. But the sandwich also tells us a bit more precisely what happens as $s \rightarrow 1$: the sum $\zeta(s)$ grows off to $\infty$, but only about as fast as $1/(s-1)$.
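The sandwich above can be checked numerically. A short Python sketch, using nothing beyond the integral bound on the tail of the series:

```python
import math

def partial_zeta(s, N):
    # partial sum 1 + 1/2^s + ... + 1/N^s of Euler's series
    return sum(1.0 / n**s for n in range(1, N + 1))

s = 2.0
lo = partial_zeta(s, 100000)           # underestimates zeta(s)
hi = lo + 100000**(1 - s) / (s - 1)    # integral-test bound on the omitted tail
# zeta(2) = pi^2/6 is trapped between lo and hi, and both lie
# strictly between 1/(s-1) = 1 and 1 + 1/(s-1) = 2
```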

**Euler's product formula**

Every positive integer $n$ can be factored uniquely into a product of powers of prime numbers. Euler observed that we can therefore write $\zeta(s)$ as an infinite product over prime numbers $p$: $$ \zeta(s) = 1 + 2^{-s} + 3^{-s} + (2 \kern-1pt \cdot \kern-1pt 2)^{-s} + 5^{-s} + (2 \kern-1pt \cdot \kern-1pt 3)^{-s} + 7^{-s} + (2 \kern-1pt \cdot \kern-1pt 2 \kern-1pt \cdot \kern-1pt 2)^{-s} + \cdots = \prod_{p} \>\> \Big( 1 + p^{-s} + p^{-2s} + \cdots \; \Big) = \prod_{p} \> \left( { 1 \over 1 - p^{-s} } \right ) $$ And from this, together with the remarks about convergence, he found a new proof of the theorem due to Euclid that there are an infinite number of primes. To see this, take the log of the product expression for $\zeta(s)$. Since $$ -\log (1 - x) = \int_{0}^{x} { dt \over 1 - t } = \int_{0}^{x} (1 + t + t^{2} + \cdots ) \, dt = x + {x^{2} \over 2 } +{ x^{3} \over 3 } + { x^{4} \over 4 } + \cdots $$ we deduce that $$ \log \zeta(s) = - \sum_{p} \, \log (1 - p^{-s}) = \sum_{p} \> p^{-s} + \sum_{p} \> \left( { p^{-2s} \over 2 } + { p^{-3s} \over 3 } + \cdots \; \right ) $$ But the sum over $p$ of the second terms converges for $s > 1/2$. Letting $s$ tend to $1$, using what we know about the limit of $\zeta(s)$ as $s \rightarrow 1$, we deduce that the sum $$ \sum_{p} \> { 1 \over p } $$ is infinite.
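One can watch the product formula at work numerically. The sketch below compares a truncated Euler product at $s = 2$ against $\zeta(2) = \pi^{2}/6$ (Euler's famous evaluation, used here only as a check; `is_prime` and `euler_product` are my own helper names):

```python
import math

def is_prime(n):
    # trial division, adequate for this small range
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def euler_product(s, bound):
    # product of 1/(1 - p^-s) over primes p <= bound
    prod = 1.0
    for p in range(2, bound + 1):
        if is_prime(p):
            prod *= 1.0 / (1.0 - p**(-s))
    return prod

# each omitted prime factor exceeds 1, so the truncation
# approaches zeta(2) = pi^2/6 from below
approx = euler_product(2.0, 1000)
```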

In a moment we'll also need the modified $\zeta$ function $$ \zeta_{*}(s) = 1 + { 1 \over 3^{s} } + { 1 \over 5^{s} } + \cdots = \prod_{p \ne 2} \> \left( { 1 \over 1 - p^{-s} } \right ) $$ which also becomes infinite at $s=1$.

**Dirichlet's L-function**

It was Dirichlet (c. 1830) who applied the same reasoning to deduce that there are an infinite number of primes in any arithmetic progression not automatically ruled out (such as the even integers). For example:

**Theorem (Dirichlet).** *There are an infinite number of primes in both of the arithmetic progressions* $$ 1, 5, 9, 13, 17, 21, 25, 29, 33, \dots \quad \hbox{ and } \quad 3, 7, 11, 15, 19, 23, 27, 31, 35, \dots $$

Dirichlet's proof is a variant of Euler's. He defines $$ L(s) = 1 - { 1 \over 3^{s} } + { 1 \over 5^{s} } - { 1 \over 7^{s} } + \cdots = \sum_{n=1}^{\infty} { \chi(n) \over n^{s} } $$ with $$ \chi(n) = \cases { \phantom{-}1 & if $n \equiv 1 \; (4)$ \cr -1 & if $n \equiv 3 \; (4)$ \cr \phantom{-}0 & otherwise. \cr } $$ It happens that $\chi(mn) = \chi(m)\chi(n)$, so that this also may be expressed as an Euler product $$ L(s) = \prod_{p} \> \left( { 1 \over 1 - \chi(p) \, p^{-s} } \right ) $$ where now $\chi(p) = 1$ if $p \equiv 1 \, {\rm mod} \, 4$, $-1$ if $p \equiv 3 \, {\rm mod} \, 4$. What is different about the function $L(s)$ is that the series $$ 1 - { 1 \over 3^{s}} + { 1 \over 5^{s} } - { 1 \over 7^{s} } + \dots $$ does converge (conditionally) at $s=1$, since it is a sum of terms that decrease in magnitude and whose signs alternate. In fact, a familiar integral formula $$ \int_{0}^{x} { dt \over 1 + t^{2} } = \arctan(x) $$ discovered by Leibniz (c. 1674) implies upon setting $x=1$ that $$ {\pi \over 4 } = 1 - { 1 \over 3 } +{ 1 \over 5 } - { 1 \over 7} + \cdots $$ If we continue to follow Euler, we find that for $s > 1/2$ we have $$ \log L(s) = \sum_{p} \, { \chi(p) \over p^{s} } + \hbox{ something absolutely convergent } \, . $$ Taking the sum of this with $$ \log \zeta_{*}(s) = \sum_{p \ne 2} \, { 1 \over p^{s} } + \hbox{ something absolutely convergent } $$ we see that $$ {\sum}_{\> p \equiv 1 \, (4) } \;\; { 1 \over p } $$ is infinite, and if we take the difference we see that $$ {\sum}_{\> p \equiv 3 \, (4)} \;\; { 1 \over p } $$ is infinite. Here I follow notation introduced in the early nineteenth century by Gauss, and write $p \equiv q \, (4)$ to mean that $p$ and $q$ leave the same remainder if we divide them by $4$.
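The conditional convergence is easy to see in a computation: for an alternating series with decreasing terms, consecutive partial sums bracket the limit $\pi/4$. A minimal sketch:

```python
import math

def L_partial(N):
    # partial sum 1 - 1/3 + 1/5 - 1/7 + ... with N terms
    return sum((-1) ** k / (2 * k + 1) for k in range(N))

# the limit pi/4 lies between any two consecutive partial sums,
# and the error after N terms is less than the next term 1/(2N+1)
a, b = L_partial(1000), L_partial(1001)
```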

As Dirichlet himself remarked, one has to be a bit careful about the way in which conditionally convergent series are manipulated, but it is not hard to be suitably careful here. (Dirichlet was one of the very first mathematicians to be so careful about convergence, and it is not unreasonable to claim he was one of the founders of modern mathematical style, along with Galois and Cauchy.)

#### Other arithmetic sequences

The example above is a little too simple to give you an idea of what happens in general, so I'll briefly look at a more complicated one. Consider the arithmetic sequence of integers leaving a remainder of $1$ when divided by $8$. Again, it turns out that it contains an infinite number of prime numbers. The reasoning is almost exactly the same, except that we have to look at more $L$ functions. The following table defines functions on the positive integers: $$ \matrix { n \> {\rm mod} \> 8 & 0 & 1 & 2 & \phantom{-}3 & 4 & \phantom{-}5 & 6 & \phantom{-}7 \cr \chi_{0}(n) & 0 & 1 & 0 & \phantom{-}1 & 0 & \phantom{-}1 & 0 & \phantom{-}1 \cr \chi_{1}(n) & 0 & 1 & 0 & -1 & 0 & -1 & 0 & \phantom{-}1 \cr \chi_{2}(n) & 0 & 1 & 0 & -1 & 0 & \phantom{-}1 & 0 & -1 \cr \chi_{3}(n) & 0 & 1 & 0 & \phantom{-}1 & 0 & -1 & 0 & -1 \cr } $$ Note that $$ \sum_{k} \chi_{k}(n) = \cases { 4 & if $n \equiv 1 \; (8)$ \cr 0 & otherwise. } $$ Define now $L$ functions for each of these: $$ L(s, \chi) = \sum_{n} { \chi(n) \over n^{s} } $$ The function $L(s, \chi_{0})$ is the same as what I called $\zeta_{*}(s)$ earlier. It turns out that all of these have Euler products and that all of the $L(1, \chi_{k})$ for $k \ne 0$ are finite and non-zero. We can deduce as before that there are an infinite number of primes $p \equiv 1 \; (8)$.
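Both facts used here, that each $\chi_{k}$ is multiplicative and that the four characters together pick out $n \equiv 1 \; (8)$, can be checked mechanically from the table. A small Python sketch (the list `chi` simply copies the table rows):

```python
# character values indexed by n mod 8, copied from the table above
chi = [
    [0, 1, 0,  1, 0,  1, 0,  1],   # chi_0
    [0, 1, 0, -1, 0, -1, 0,  1],   # chi_1
    [0, 1, 0, -1, 0,  1, 0, -1],   # chi_2
    [0, 1, 0,  1, 0, -1, 0, -1],   # chi_3
]

# chi_k(mn) = chi_k(m) * chi_k(n) for every k and all m, n
multiplicative = all(
    chi[k][(m * n) % 8] == chi[k][m % 8] * chi[k][n % 8]
    for k in range(4) for m in range(16) for n in range(16)
)

def char_sum(n):
    # equals 4 when n = 1 (mod 8) and 0 otherwise
    return sum(chi[k][n % 8] for k in range(4))
```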


### Gaussian integers

Dirichlet's Theorem is well known, and it is easy to find a very large number of places where its most general form, valid for a much larger class of arithmetic sequences, is explained. But I shall not say more about it. Instead, I want to discuss a different kind of prime number, and a different kind of distribution.

**Gaussian arithmetic**

The **Gaussian integers** are the complex numbers of the form $a + bi$ with $a$, $b$ both integers (here $i = \sqrt{-1}$). There are many new phenomena that appear when working with these. Most appealing is the prospect of interpreting them geometrically. They may be considered as a lattice of points in the complex number plane.

The addition and multiplication of Gaussian integers may be interpreted geometrically. Any complex number may be thought of as a vector in the plane, and adding complex numbers is just like adding vectors. The magnitude of $x + iy$ is $|x+iy| = \sqrt{x^2 + y^2}$. If $z = x + iy\,$ is any complex number, we may write it in polar coordinates as $z = r e^{i \theta} = r (\cos \theta + i \sin \theta)$ with $r = |z|$. But then if $a + i b = r e^{i\alpha}$ and $c + id = \rho e^{i\beta}$ $$ (a + ib)(c + id) = r \rho \cdot e^{i(\alpha + \beta)} $$ That is to say, *the magnitude of the product is the product of magnitudes*, and *the angle of the product is the sum of angles*. For example, multiplication by $i$ amounts to rotation anti-clockwise by $\pi/2$ radians (or $90^{\circ}$), since $|i|=1$ and its angle is $\pi/2$.
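These two facts are easy to verify numerically with Python's built-in complex numbers (a quick sanity check, not part of the argument):

```python
import cmath
import math

z = 3 + 4j   # magnitude 5
w = 1 + 1j   # magnitude sqrt(2), angle pi/4

# the magnitude of a product is the product of magnitudes
assert math.isclose(abs(z * w), abs(z) * abs(w))
# the angle of a product is the sum of angles (no wrap-around for this choice)
assert math.isclose(cmath.phase(z * w), cmath.phase(z) + cmath.phase(w))
# multiplying by i four times is a full turn: 1 -> i -> -1 -> -i -> 1
assert (1j) ** 4 == 1
```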

I'll need some useful notation later on. I'll write ${\Bbb Z}$ for the usual integers $0, \pm 1, \pm 2, \dots\ $, and ${\Bbb Z}[i]$ for the Gaussian integers.

Everyone is familiar with the division algorithm for ${\Bbb Z}$. It says that *if $n$ and $m \ne 0$ are two integers, then one can find integers $q$ and $r$ such that $$ n = qm + r \quad \hbox{ with } \quad 0 \le r < |m| \, . $$*

This might not be exactly what you expect, since it says that $-13 = (-4)4 + 3\,$ rather than $-13 = (-3)4 - 1$. It has the great virtue that the remainder is always non-negative. (Programming languages differ here: Python's built-in integer division follows this convention whenever the divisor is positive, while C and Java truncate the quotient toward zero, so their remainder can be negative.)
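For example, Python's `divmod` realizes exactly the convention above when the divisor is positive:

```python
# Python implements floor division: the remainder takes the sign of the
# divisor, hence is non-negative for a positive divisor
q, r = divmod(-13, 4)
assert (q, r) == (-4, 3)
assert q * 4 + r == -13
```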

There is also a division algorithm for the Gaussian integers ${\Bbb Z}[i]$, although it is a bit weaker than the one for ${\Bbb Z}$.

**Theorem.** *If $n$ and $m \ne 0$ are two Gaussian integers, there exist Gaussian integers $q$ and $r$ such that $$ n = qm + r \quad \hbox{ with } \quad |r| \le (\sqrt{2}/2) \cdot |m| \, . $$*

To see why this is true, first of all let $n/m$ be the quotient as complex numbers. I recall that $$ { a + ib \over c + id } = { (a+ib)(c-id) \over c^2 + d^2 } \, . $$ Then, as the following picture shows, we can write $n/m = q + x$ where $q$ is a Gaussian integer and $|x| \le (\sqrt{2}/2)$. Multiply this equation by $m$.

It is not hard to write this recipe out explicitly, although it does require a little care. In any case, one may now apply the usual Euclidean algorithm to calculate greatest common divisors. Write $n \hbox{ mod } m$ for the remainder in the division algorithm for ${\Bbb Z}[i]$. To find the gcd of two Gaussian integers:

```
gcd(n, m):
    if m == 0:
        return n
    r = n mod m
    while r != 0:
        n = m
        m = r
        r = n mod m
    return m
```
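The same algorithm runs as real code if we represent Gaussian integers by Python `complex` values with integer parts. This is a minimal sketch; the helper names `gmod` and `ggcd` are mine, and the floating-point division is reliable only for small inputs:

```python
def gmod(n, m):
    # Gaussian remainder: round n/m to the nearest Gaussian integer q,
    # so that |n - q*m| <= (sqrt(2)/2) * |m|
    q = n / m
    qi = complex(round(q.real), round(q.imag))
    return n - qi * m

def ggcd(n, m):
    # Euclidean algorithm in Z[i], as in the pseudocode above
    while m != 0:
        n, m = m, gmod(n, m)
    return n

# since 1 + 3i = (1 + i)(2 + i), the gcd of 1+3i and 1+i is 1+i up to a unit
g = ggcd(1 + 3j, 1 + 1j)
```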

**Gaussian primes**

By definition, a **Gaussian prime** is a Gaussian integer $p$ that may be divided only by the Gaussian integers of the form $u$ or $u\cdot p$, where $u$ is a power of $i$.

As a consequence of the division algorithm, a version of unique factorization is valid for ${\Bbb Z}[i]$. One thing that is new here is that to make factorization unique, one needs to decide what a positive complex number is. What I'll specify is that it must be in the positive quadrant: $x + iy$ will be positive if $x > 0$ and $y \ge 0$.

**Theorem**. *Every non-zero Gaussian integer may be written uniquely as $u \cdot n$ with $u = i^{k}$ and $n$ equal to a product of positive prime Gaussian integers.*

Let's look at some examples of Gaussian primes. First, some remarks. For any Gaussian integer $n$ let $\Vert n \Vert = |n|^{2}$. If $n = a + ib$ then $\Vert n \Vert = a^2 + b^2$, so $\Vert n \Vert$ will always be a non-negative integer in ${\Bbb Z}$. This implies immediately one simple criterion: if $\alpha$ is a Gaussian integer with $\Vert \alpha \Vert$ equal to a prime in ${\Bbb Z}$, then $\alpha$ is a prime Gaussian integer. For example, $1+i$ is a prime Gaussian integer, since $\Vert 1 + i \Vert = 2$. One can also see directly from the lattice picture that if $\Vert \alpha \Vert = 1$, then $\alpha = i^{k}$.

The same reasoning works for $5$. We can see that $5 = 4 + 1 = (2 + i)(2 -i)$, so it is not prime, but its factors $2 \pm i$ are prime. This works in fact for any prime $p \equiv 1 \; (4)$ in ${\Bbb Z}$, since in this case we can always find $a$ and $b$ such that $a^2 +b^2 = (a+bi)(a - bi) = p $.

What about $3$? If $3 = a \cdot b$, then $9 = \Vert a \Vert \, \Vert b \Vert$, so there are three cases: $\Vert a \Vert = 3$, $\Vert b \Vert = 3$; $\Vert a \Vert = 1$, $\Vert b \Vert = 9$; $\Vert a \Vert = 9$, $\Vert b \Vert = 1$. In the last two cases, one of $a$ or $b$ is a power of $i$, and the other must just be a unit times $3$. In the first case, if $a = m + in$ then $3 = m^2 + n^2$, and we must have $|m|$, $|n| \le 1$, so that $m^2 + n^2 \le 2$, a contradiction. Therefore $3$ is a prime Gaussian integer. The same reasoning shows that any prime $p \equiv 3 \; (4)$ remains a prime in ${\Bbb Z}[i]$.

**Theorem**. *The positive prime Gaussian integers are of the form (a) $1 + i$; (b) a prime integer $p \equiv 3 \; (4)$; (c) $m + in$ or $n + im$ with $m^2 + n^2 = p$ for $p \equiv 1 \; (4)$.*

It is not hard to make a list of these from a list of the usual primes. The only hard part is to find $m$, $n$ such that $m^2 + n^2 = p \,$ when $p \equiv 1 \; (4)$. In this case, first find $k$ such that $k^{2} \equiv -1 \; (p)$, and then let $m + in$ be the gcd of $k + i$ and $p$. Here is a picture of some small Gaussian primes (not just the positive ones):
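The recipe for case (c) can be carried out mechanically. Here is a sketch in Python; `gmod`, `ggcd` and `two_squares` are my own names, and the floating-point Gaussian arithmetic is accurate only for small $p$:

```python
def gmod(n, m):
    # Gaussian remainder: round n/m to the nearest Gaussian integer
    q = n / m
    qi = complex(round(q.real), round(q.imag))
    return n - qi * m

def ggcd(n, m):
    # Euclidean algorithm in Z[i]
    while m != 0:
        n, m = m, gmod(n, m)
    return n

def two_squares(p):
    # for a prime p = 1 (mod 4): find k with k^2 = -1 (mod p), then
    # gcd(k + i, p) in Z[i] is m + in with m^2 + n^2 = p
    k = next(k for k in range(2, p) if (k * k) % p == p - 1)
    g = ggcd(complex(k, 1), complex(p, 0))
    return abs(round(g.real)), abs(round(g.imag))
```

For example, `two_squares(13)` recovers the factorization $13 = 3^2 + 2^2$.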

**The angular distribution of Gaussian primes**

And here is a picture of a lot of Gaussian primes:

The potential chaos in the distribution of Gaussian primes is very evident, even more evident than graphics can demonstrate for the primes in ${\Bbb Z}$. One analogue of Dirichlet's Theorem, at least, seems very natural:

**Theorem**. *In any angular sector there exist an infinite number of Gaussian primes.*

In fact, the Gaussian primes are uniformly distributed in angle.

Although Gaussian integers and primes were invented (discovered?) by Gauss around 1820, this theorem seems to have been first found by the German mathematician Erich Hecke (c. 1920). The proof of the theorem follows along the same lines that we saw in the discussion of Dirichlet's Theorem. First define the following analogue of Euler's function: $$ \zeta_{i}(s) = \sum_{z > 0} { 1 \over \Vert z \Vert^{s} } = \prod_{p > 0} \> \left( { 1 \over 1 - \Vert p \Vert^{-s} } \right ) \, . $$ A two-dimensional integral comparison will show that $$ \zeta_{i}(s) \sim { \pi/4 \over s - 1} $$ as $s \rightarrow 1$.
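The leading behaviour comes from counting lattice points: the number of positive $z$ with $\Vert z \Vert \le N$ is approximately the area of a quarter disk of radius $\sqrt{N}$, namely $(\pi/4) N$. A quick count confirms this:

```python
import math

def count_positive(N):
    # count Gaussian integers a + bi with a > 0, b >= 0 and a^2 + b^2 <= N
    total = 0
    for a in range(1, math.isqrt(N) + 1):
        total += math.isqrt(N - a * a) + 1   # b = 0, 1, ..., isqrt(N - a^2)
    return total

N = 10**6
ratio = count_positive(N) / (math.pi * N / 4)   # tends to 1 as N grows
```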

What replaces Dirichlet's $L(s)$? We need an infinite family of them. For each $n$ in ${\Bbb Z}$ and $z$ in ${\Bbb Z}[i]$ define $$ \chi_{n}(z) = \left({ z \over |z| }\right)^{n} $$ and then $$ L(s, n) = \sum_{z > 0} { \chi_{n}(z) \over \Vert z \Vert^{s} } \, . $$ If $n \equiv 0 \; (4) $ then $\chi_{n}(u\cdot z) = \chi_{n}(z) \,$ for any power $u$ of $i$, and one can show that $L(s,n)$ has an Euler product $$ L(s, n) = \prod_{p > 0} \> { 1 \over 1 - \chi_{n}(p)/\Vert p \Vert^{s} } \, . $$ The point here is that the product of two positive numbers in ${\Bbb Z}[i]$ might not be positive, so some invariance under multiplication by units is needed. In any case, it is not so easy to show, but it is true, that for $n \ne 0$ the function $L(s,n)$ remains finite at $s=1$ and is not $0$. This means that $\log L(s,n)$ remains finite at $s=1$.

Now for Dirichlet's Theorem we just added (and subtracted) $\log \zeta_{*}(s)$ and $\log L(s)$ to deduce that there are an infinite number of primes $\equiv 1$ and $\equiv 3$. Here, we need an infinite sum! What sum? Suppose $S$ to be an angular sector in the positive quadrant, say between angles $A$ and $B$. One can find a non-negative function $f$ with period $2 \pi$ that vanishes outside of angles in the interval $[A, B]$, and then express it in terms of a Fourier series $$ f(\theta) = \sum_{n \in {\Bbb Z}} f_n \, e^{i n \theta} \, . $$ One concludes by looking at the corresponding sum $$ \sum_{n} f_{n} \, \log L(s, n) \, , $$ which will become infinite as $s \rightarrow 1$. We saw something similar in the earlier examples of Dirichlet's Theorem. This tells us that there are an infinite number of Gaussian primes $\pi = |\pi| e^{i \theta}$ such that $f(\theta) \ne 0$, hence in the given angular sector.

### Final remarks

The functions $\zeta(s)$, $L(s)$, etc. are collectively known as $L$-functions. There are lots of others that are somewhat similar. Their most important property is the expression as an Euler product, which guarantees that one can deduce properties of prime numbers of some kind or another from analytic properties of the functions.

Riemann (1859) pointed out that $\zeta(s)$ can be defined for $s$ a complex number, and that extremely strong estimates of the distribution of prime numbers can be derived from conjectural properties of its complex zeros. This conjecture has become the most famous unproven conjecture in all of mathematics.
