Among the most fascinating applications of mathematics in the modern world are those to the science of imaging in medicine and biological research. Some that I myself have recently experienced first hand are the use of the Doppler effect and ultrasound to visualize the motion of fluids inside the body, and that of optical coherence tomography (OCT) to view sections of the retina from the exterior of the eye. The visual effects are spectacular as well as informative, and as close to wizardry as I hope to come. In a different category is the use of similar techniques to track the brain as it performs various tasks. Among the tasks so far examined are those involved in at least elementary arithmetical computations. One paper on the subject (by Dehaene et al.) opens with this quote from Hadamard's book:
Will it ever happen that mathematicians will know enough of that subject of the physiology of the brain and that neurophysiologists know enough of mathematical discovery for efficient cooperation to be possible?
but so far the promise has not been fulfilled. It is nonetheless curious that in this business, at least potentially, mathematicians are both producers and consumers.
The original example, and the ancestor of many of these technologies, is what is now called computed tomography, for which Allan Cormack, a physicist whose research became increasingly mathematical as time went on, laid down the theoretical foundations around 1960. He shared the 1979 Nobel Prize in Medicine for his work in this field.
In fact the basic idea of tomography had been discovered for purely theoretical reasons in 1917 by the Austrian mathematician Johann Radon, and it had been rediscovered several times since by others, but Cormack did not learn this until well after his own independent discovery. The problem he solved is this: Suppose we know all the line integrals through a body of varying density. Can we reconstruct the body itself? The answer, perhaps surprisingly, is that we can, and furthermore we can do so constructively. In practical terms, we know that a single X-ray picture can give only limited information because certain things are obscured by other, heavier things. We might take more X-ray pictures in the hope that we can somehow see behind the obscuring objects, but it is not at all obvious that by taking a lot - really, a lot - of X-ray pictures we can in effect even see into objects, which is what Radon's theorem tells us, at least in principle. Making Radon's theorem into a practical tool was not a trivial matter.
There are several different kinds of tomography. The simplest one, which is what I'll look at here, is one in which beams are shot through an object in parallel series, from several angles. As always with the transmission of electromagnetic waves, a ray is continually attenuated by a certain factor as it passes through a substance. The strength I of the ray at any point at distance s from the source satisfies the differential equation

dI/ds = -F(s) I(s)

where F(s) is the attenuation factor, which means that in the plane of the detector the intensity I is the initial intensity times the exponential of the line integral of -F(s). Radon's problem is therefore this: Given all the line integrals of a function f(x,y), can we determine f itself?
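As a quick sanity check on this attenuation model, here is a short Python sketch (the function name is my own) that steps the equation dI/ds = -F(s) I(s) forward numerically; for constant F = 1 over a unit length the result should agree with the exact answer exp(-1):

```python
import math

def attenuate(F, s0, s1, I0=1.0, n=10000):
    # integrate dI/ds = -F(s) I(s) with simple Euler steps;
    # the exact solution is I0 * exp(-(integral of F from s0 to s1))
    ds = (s1 - s0)/n
    I = I0
    for k in range(n):
        s = s0 + k*ds
        I += ds*(-F(s)*I)
    return I

# constant attenuation F = 1 over a unit length: exact answer exp(-1)
print(attenuate(lambda s: 1.0, 0.0, 1.0), math.exp(-1.0))
```

This is exactly why the detector readings amount to line integrals: taking the logarithm of the measured intensity recovers the integral of F along the ray.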
Explicitly, lines L in the plane can be parametrized by pairs (A, h) where A is an angle and h the oriented height of the line above the origin, as in this picture:
The parametrization of points on the line is also easy to see, as shown in the figure:
Since a line has two possible orientations, the line with parameters (A, h) is the same as that with (A + π, -h). For a body with attenuation factor f(x,y) the 'X-ray' projection therefore gives us a function g(L), and an associated function g(A, h) with g(A, h) = g(A + π, -h), equal to the line integral of f along L. The explicit formula for the projection of f is

g(A, h) = ∫ f(h cos A - s sin A, h sin A + s cos A) ds

with s running over the whole real line.
If the function f is radially symmetric around the origin, say f(x, y) = F(r), its projection doesn't depend on the projection angle A. In that case g(A, h) = g(h) and we have

g(h) = 2 ∫ F(r) r / √(r² - h²) dr ,

with r running from |h| to ∞.
For example, the projection of a uniform unit disk gives this picture:
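A minimal numerical sketch of the projection, in Python with names of my own choosing: it integrates f along the line with parameters (A, h) using the parametrization above, and for the unit disk it should reproduce 2·√(1 - h²), independently of the angle A:

```python
import numpy as np

def disk(x, y, R=1.0):
    # characteristic function of the disk of radius R
    return np.where(x*x + y*y <= R*R, 1.0, 0.0)

def projection(f, A, h, s_max=2.0, n=4000):
    # numerical line integral of f along the line with parameters (A, h),
    # points parametrized as (h cos A - s sin A, h sin A + s cos A)
    s = np.linspace(-s_max, s_max, n)
    ds = s[1] - s[0]
    x = h*np.cos(A) - s*np.sin(A)
    y = h*np.sin(A) + s*np.cos(A)
    return float(np.sum(f(x, y))*ds)

# for the unit disk the projection should be 2*sqrt(1 - h^2),
# no matter what angle A is chosen
print(projection(disk, 0.3, 0.5), 2.0*np.sqrt(1.0 - 0.25))
```

Since x² + y² = h² + s² along the line, the point lies in the disk exactly when s² ≤ R² - h², which is where the chord length 2·√(R² - h²) comes from.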
In any case, Radon's problem is to find a formula for f in terms of g. In the radially symmetric case, this reduces directly to solving Abel's integral equation, and this connection played an important role in the original discovery.
Radon's inversion formula
Radon's statement of how to recover f from g is this:
Suppose we are given the function g(L). For a point P = (x,y) and r > 0 let F_P(r) be the average of the values of g over all lines that are tangent to the circle of radius r centred at P. In particular F_P(0) is the average over all lines that pass through P, which is called the back-projection of g. Then

f(P) = -(1/π) ∫ dF_P(r) / r ,

the integral running from r = 0 to ∞.
The formula is invariant under translations, so it suffices to prove it for P equal to the origin. In this case he reduces it to solving Abel's integral equation, at least in favourable cases. But he goes on to give a second, more enlightening derivation. Modern proofs of the formula use the Fourier transform, and none of them seems to me quite as natural as Radon's original.
There is a problem even in making sense of Radon's formula, because of the singularity 1/r near 0. To understand better what's going on, the modern version of the statement, if not of the proof, is better in both theory and practice. Starting with g(L) and a point P = (x,y): for each angle A let R = R(A) = x cos A + y sin A, the h-coordinate of the line at angle A passing through P; define the auxiliary function

H(A, R) = (1/π) ∫ (∂g/∂h)(A, h) / (R - h) dh

and then continue:

f(P) = (1/2π) ∫ H(A, R(A)) dA ,

the outer integral running over A from 0 to π.
The equivalence of this with Radon's formula is essentially a matter of interchanging the order of integration. We still have the singularity, but the integral can now be defined as a principal value: if g is smooth enough, the positive and negative terms in the integral on either side of the singularity cancel out. This idea manifests itself even more concretely, since under the assumption that functions are of compact support we can integrate by parts and write very generally

p.v. ∫ (∂g/∂h)(A, h) / (R - h) dh = ∫ (∂²g/∂h²)(A, h) ln|R - h| dh .

This makes perfect sense, since ln|x| is integrable near 0. It even turns out to be practical.
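That integration by parts can be checked numerically. Here is a small Python verification using the bump function φ(h) = (1 - h²)³ as a stand-in for a smooth projection of compact support (the setup and names are mine); the principal value is computed by subtracting the value at the singularity, whose contribution is known in closed form:

```python
import numpy as np

# check numerically that, for smooth phi of compact support,
#   p.v. INT phi'(h)/(R - h) dh  =  INT phi''(h) ln|R - h| dh
# using phi(h) = (1 - h^2)^3 on [-1, 1]

R = 0.3
N = 200000
dh = 2.0/N
h = -1.0 + (np.arange(N) + 0.5)*dh      # midpoints, never exactly R

phi1 = -6.0*h*(1.0 - h*h)**2            # phi'
phi2 = (1.0 - h*h)*(30.0*h*h - 6.0)     # phi''

# principal value: subtract phi'(R); p.v. of INT 1/(R - h) dh over [-1, 1]
# is ln((1 + R)/(1 - R)) in closed form
phi1R = -6.0*R*(1.0 - R*R)**2
pv = np.sum((phi1 - phi1R)/(R - h))*dh + phi1R*np.log((1.0 + R)/(1.0 - R))

# logarithmic side: an ordinary (absolutely convergent) integral
logside = np.sum(phi2*np.log(np.abs(R - h)))*dh
print(pv, logside)
```

The two numbers agree to within the discretization error, which is the practical point: the right-hand side needs no principal value at all.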
Mathematically, the process looks fairly simple, but making the mathematics practical is not all that simple. First of all, the projection can be obtained for only a finite number of values of A and h. This causes intrinsic problems, since in reconstructing values at points (x,y) we must have access to values of h that were not sampled. The standard method used in the field, called Filtered Back Projection (FBP), uses linear interpolation for this purpose. If we are working with grids of linear size N, the process takes time O(N³), which is reasonably good but not spectacular. It is better than might be expected, however, because due to noise very large N are probably not feasible. Recently, practical methods of time O(N² log N) have been proposed (for example in the paper by Brandt et al.). The F in FBP is not conceptually simple, and I was therefore pleased to see that a more direct method of inverse computation has recently been proposed, first in the paper by la Rivière and again in a paper by Fokas, Iserles, and Marinakis, who incorporate it into a more complicated kind of tomography. It is not fast (at least not yet), being of order O(N⁴) in the straightforward implementation. But since it is direct, one might hope that its errors are easier to understand.
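To make the interpolation and the O(N³) count concrete, here is a rough Python sketch (names my own) of the back-projection step alone, applied to the exactly known projections of a disk. The filtering step of FBP is omitted, so this illustrates only the interpolation, not a full reconstruction:

```python
import numpy as np

def sinogram_disk(n_ang=180, n_h=257, R=1.0, H=1.5):
    # analytic projections of the disk of radius R:
    # g(A, h) = 2*sqrt(R^2 - h^2) for |h| < R, independent of A
    A = np.linspace(0.0, np.pi, n_ang, endpoint=False)
    h = np.linspace(-H, H, n_h)
    g = np.zeros((n_ang, n_h))
    inside = np.abs(h) < R
    g[:, inside] = 2.0*np.sqrt(R*R - h[inside]**2)
    return A, h, g

def back_project(A, h, g, x, y):
    # unfiltered back-projection at the point (x, y): average over angles
    # of g(A, R) with R = x cos A + y sin A, linearly interpolating in h
    # (the same interpolation FBP uses); doing this for every pixel of an
    # N x N grid, with about N angles, is where the O(N^3) cost comes from
    val = 0.0
    for i, a in enumerate(A):
        val += np.interp(x*np.cos(a) + y*np.sin(a), h, g[i])
    return val/len(A)

A, h, g = sinogram_disk()
print(back_project(A, h, g, 0.0, 0.0))   # every line through the centre has chord 2
print(back_project(A, h, g, 1.2, 1.2))   # outside the disk: much smaller
```

Without the filter the back-projection is a blurred version of the object, which is precisely why the F in FBP is needed.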
In this direct calculation, for each angle Ai one calculates the cubic spline in h of the function g(Ai, h), and then evaluates the principal values directly for these splines. Incidentally, the papers I mention above claim to use a formula for calculating these principal value integrals that is substantially more complicated than the logarithmic one given earlier.
What can go wrong ...
This procedure is conceptually simple, but does it work? Programming the direct method turned out to be simpler than programming the other methods. Of course I ran some simple tests to check my work. Below, I take a solid disk of radius 3/4, i.e. mathematically the characteristic function of that disk, first calculate the projection, then the inverse projection, and then see what there is to see. Here are the reconstructed images I got with N = 32, 64, and 128:
Something peculiar is going on in the backgrounds. Now, this is not quite a straightforward record of the data I computed. I anticipated a few negative values in the reconstructed function, which of course have no physical meaning, so in drawing the picture I assigned them - somewhat arbitrarily - to be 0. This is of course the color of the background. Since the background doesn't seem to be very uniform, it looked like a good idea to investigate the negative values more carefully. First I did a frequency analysis of all the pixel values I calculated. Here is what the graph looks like for N=128:
What we get appears to be a great deal of variability clustered around 0, which I hadn't expected at all, as well as a reasonably accurate representation of the values around 1. The values clustered around 0 appear to be spread in lines, so it seems that we are looking at an artefact of the discretization of theoretically continuous data. I don't know a thorough explanation of what's going on, but you can see with a little thought that some problem is likely. The inverse transform, like the original function, has to be of compact support, but in the computation the sweep through angles A involves accessing non-zero values. There has to be some kind of high-powered cancellation going on, which suggests rapid spatial oscillations. The discretization doesn't handle this well.
There are several standard models for testing out ways to analyze X-ray data. One is an artificial model of a head with some tumours in it, first proposed in a classic paper by Logan and Shepp. The model itself is a collection of ellipses in an elliptical skull of finite and variable thickness:
The light blue areas represent slices of tumors, the oblong regions the ventricles, the rest gray matter. Here is what my program does with this for N = 32, 64, 128:
For these images, as for many medical problems, the main problem is to separate the regions whose densities are spread closely around that of water. It was therefore quite reasonable to manipulate the data automatically to enhance the distinctions between these densities. In the literature there is much discussion about the thin artificial band just inside the skull. Aside from this band, the results seem satisfactory, especially considering how unsophisticated this has been.
The literature is vast. The topic is one involving life and death, and among the other rewards for clever work are fame and fortune. The following are items I have found interesting and useful. One of the themes followed here is not only that of mathematics as a producer of images, but of mathematics as a process in the brain that can be tracked with imaging, in this case with emission tomography and functional magnetic resonance imaging (fMRI).
NOTE: Those who can access JSTOR can find some of the papers mentioned above there. For those with access, the American Mathematical Society's MathSciNet can be used to get additional bibliographic information and reviews of some of these materials. Some of the items above can be accessed via the ACM Portal, which also provides bibliographic services.