# Fractional Derivatives: Fourier, Elephants, Memory Effects, Viscoelastic Materials, and Anomalous Diffusions

Pablo Raúl Stinga

Communicated by Notices Associate Editor Daniela De Silva

In the late 17th century, Isaac Newton and Gottfried Wilhelm Leibniz created calculus. The concept of the instantaneous rate of change of a quantity with respect to another one, that is, the derivative of a function, made a profound impact on science and on society at large. Differential and integral equations are among the most efficient tools in the mathematical description of many (but not all) natural phenomena. Needless to say, calculus is at the core of many of the most advanced technological accomplishments of humankind.

Let us consider a function of a single variable that we will denote by $u = u(t)$, where $t$ represents time. Leibniz introduced the notation

$$\frac{d^n u}{dt^n}$$

for derivatives of integer order $n \ge 0$ of $u$ with respect to $t$, with the understanding that the derivative of order $n = 0$ is just the identity operator, $\frac{d^0 u}{dt^0} = u$.

The beginning of fractional calculus can be traced back to the origins of classical calculus itself. On September 30, 1695, Leibniz sent a letter from Hanover to Marquis de l’Hospital in Paris Lei62. Since Leibniz “had some extra space left to write” (see Lei62, p. 301), he shared with l’Hospital some remarks on the analogy between integer powers of derivatives $d^n$ and integrals $\int^n$. Leibniz introduced the symbol $d^{1/2}$ to denote a derivative of fractional order $1/2$. He did not give a formal definition but believed that one could express a derivative of fractional order with an infinite series. He wrote “There is the appearance that one day we will come to some very useful consequences of these paradoxes,” leaving his seminal thoughts for l’Hospital to think about. It seems that l’Hospital did not pursue the matter any further. Leibniz also shared his thoughts with Wallis in 1697, using again the notation $d^{1/2}$.

We may say that fractional calculus was conceived as a question of extension of meaning: given an object defined for integers, what is its extension to non-integers? In fact, we do not need to restrict to fractions only. We could also ask about the meaning of the derivative of orders $\sqrt{2}$ or $\pi$ of a function. In this sense, we use the words fractional derivative to refer to derivatives of non-integer order.

A few other initial ideas were introduced by Euler (1738) and Lacroix (1820) for fractional derivatives of power functions and by Laplace (1812) for functions representable by a specific integral. In the 1820s, Joseph Fourier was the first one to give a definition that worked for any function (in Fourier’s view, “there is no function, or part of a function, which cannot be expressed by a trigonometric series,” see Fou88, Par. 418, p. 555). The first known application of fractional calculus appeared shortly afterwards, when Abel found an integral of fractional order in his solution to the tautochrone problem. Further contributions by many others flourished during the 19th and 20th centuries. The monumental work SKM93 contains a quite detailed historical overview, while a brief history of fractional calculus can be read in Ros75; see also Ros77.

Real-world phenomena exhibiting long-range interactions, memory effects, anomalous diffusions, and avalanche-like behaviors are very well described by fractional derivative models. Furthermore, due to their mathematical features, fractional derivatives are becoming central in image processing. Nowadays, fractional calculus is one of the most vibrant areas in mathematics that continues to expand. In fact, much is still to be understood about fractional derivatives from the theoretical and computational points of view.

In this article, we will go back to the widely overlooked historical definition of fractional derivative given by Fourier. We will then show, with modern ideas, how his definition is in fact the so-called Marchaud–Weyl fractional derivative. After describing some of the latest analytical advances in the theory of fractional calculus, we will present three natural applications to processes with memory effects. The first one is population growth. For the second one, we will go back to the ideas of Boltzmann, combine them with engineering experiments, and come up with a model for viscoelastic materials driven by fractional derivatives. The last application is a fractional model based on the anomalous diffusion behavior observed in many natural systems.

The literature on the theory and applications of fractional calculus is massive. A Google Scholar search of articles containing the words “fractional derivative” gives 120,000 results, with 52,500 of them published in the past 20 years. It is not within the scope of this article to be exhaustive by any means. Therefore, we will only refer to those works that are directly related to the presentation, leaving many interesting references out.

## Fourier’s Fractional Derivative

In 1822, Joseph Fourier was finally allowed to publish his 1807 original research in the form of a comprehensive monograph entitled “Théorie Analytique de la Chaleur” Fou88. In this work, he introduced the heat equation $\partial_t v = \partial_{xx} v$. To solve it, he created the technique that is nowadays taught to every undergraduate student in mathematics, physics, engineering, and computer science: the method of separation of variables. Fourier studied the development of an arbitrary function in trigonometric (cosines and sines) series and integrals, tools that are now known as Fourier series and Fourier transform, respectively. Towards the end of his monograph, Fourier looks at representations of real single-variable functions $u = u(t)$ as

$$u(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} u(s)\cos(\xi(t-s))\,ds\,d\xi.$$
(Here we have modified Fourier’s notation for the variables to make them consistent with the rest of this article.) The expression above presents diverse analytical applications that Fourier reveals. The derivative of integer order $n \ge 1$ of $u$ with respect to $t$ can be computed as

$$\frac{d^n u}{dt^n}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} u(s)\,\frac{d^n}{dt^n}\cos(\xi(t-s))\,ds\,d\xi.$$
Observe that the derivative operator acts only on the cosine on the right-hand side. Similarly, one can represent the integral of $u$ with respect to $t$ with a formula involving the integration in $t$ of the cosine function. In Fou88, Par. 422, Fourier points out that, since $\frac{d^n}{dt^n}\cos(\xi(t-s)) = \xi^n \cos\big(\xi(t-s) + \frac{n\pi}{2}\big)$, we have

$$\frac{d^n u}{dt^n}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} u(s)\,\xi^n \cos\big(\xi(t-s) + \tfrac{n\pi}{2}\big)\,ds\,d\xi.$$
Immediately after this equation, he defines fractional derivatives and integrals: “The number $n$, that enters in the second member, will be regarded as any positive or negative quantity. We shall not insist on these applications to the general analysis,” see Fou88, p. 562. Perhaps because of this last comment, some have regarded Fourier’s definition as belonging to the prehistory of fractional calculus, see SKM93, p. xxvii. However, Fourier’s definitions of fractional derivatives and integrals are the first ones given for a general function, not just for power functions as Euler and Lacroix did.

The integral representation of $u$ given above is nothing but the Fourier transform inversion formula. Indeed, by applying the trigonometric identity for the cosine of a difference of two angles and Euler’s formula $e^{i\theta} = \cos\theta + i\sin\theta$, Fourier’s formula reads

$$u(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\xi t}\,\widehat{u}(\xi)\,d\xi \tag{1}$$
where the Fourier transform of $u$ is given by

$$\widehat{u}(\xi) = \int_{-\infty}^{\infty} e^{-i\xi s}\,u(s)\,ds.$$
Obviously, Fourier had a clear understanding of the complex form of his transform (see, for instance, Fou88, Par. 420), but he worked mostly with the cosine formulation. Now, Fourier established that

$$\widehat{\frac{d^n u}{dt^n}}(\xi) = (i\xi)^n\,\widehat{u}(\xi). \tag{2}$$
In other words, differentiation becomes an algebraic operation: derivatives of integer order $n$ are just multiplication of the Fourier transform by the homogeneous complex monomial $(i\xi)^n$. Following Fourier, the number $n$ can now be regarded as any positive or negative quantity. Thus, Fourier’s fractional derivative of $u$ of order $\alpha > 0$ is defined by

$$\widehat{\frac{d^\alpha u}{dt^\alpha}}(\xi) = (i\xi)^\alpha\,\widehat{u}(\xi),$$
which is the same as saying that

$$\frac{d^\alpha u}{dt^\alpha}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\xi t}\,(i\xi)^\alpha\,\widehat{u}(\xi)\,d\xi.$$
Similarly, the fractional integral of $u$ of order $\alpha > 0$ is

$$\frac{d^{-\alpha} u}{dt^{-\alpha}}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\xi t}\,(i\xi)^{-\alpha}\,\widehat{u}(\xi)\,d\xi.$$
It is obvious from the definitions above that the composition formula $\frac{d^\alpha}{dt^\alpha}\frac{d^\beta u}{dt^\beta} = \frac{d^{\alpha+\beta} u}{dt^{\alpha+\beta}}$, the Fundamental Theorem of Fractional Calculus $\frac{d^\alpha}{dt^\alpha}\big(\frac{d^{-\alpha} u}{dt^{-\alpha}}\big) = u$, and the consistency limits $\lim_{\alpha\to 0^+}\frac{d^\alpha u}{dt^\alpha} = u$, $\lim_{\alpha\to 1^-}\frac{d^\alpha u}{dt^\alpha} = \frac{du}{dt}$ and, in general, $\lim_{\alpha\to n}\frac{d^\alpha u}{dt^\alpha} = \frac{d^n u}{dt^n}$, all hold.
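
Fourier's definition can be sanity-checked numerically. The following sketch (our own, not from the article) applies the multiplier $(i\xi)^\alpha$, with the principal branch of the complex power, to the two Fourier modes of $u(t) = \cos(\omega t)$; the result should be the phase-shift law $\omega^\alpha \cos(\omega t + \alpha\pi/2)$ implied by Fourier's cosine formula.

```python
import cmath
import math

def fourier_frac_deriv_cos(omega, alpha, t):
    """Fourier's fractional derivative of u(t) = cos(omega*t).
    On the Fourier side, cos splits into the modes e^{+i*omega*t}/2 and
    e^{-i*omega*t}/2; each mode is multiplied by (i*xi)^alpha, computed
    here with the principal branch of the complex power."""
    plus = (1j * omega) ** alpha * cmath.exp(1j * omega * t)
    minus = (-1j * omega) ** alpha * cmath.exp(-1j * omega * t)
    return ((plus + minus) / 2).real

# Phase-shift law implied by Fourier's cosine formula:
# d^alpha/dt^alpha cos(omega*t) = omega^alpha * cos(omega*t + alpha*pi/2)
omega, alpha, t = 2.0, 0.5, 1.3
predicted = omega**alpha * math.cos(omega * t + alpha * math.pi / 2)
computed = fourier_frac_deriv_cos(omega, alpha, t)
```

For $\alpha = 1$ the same computation returns $-\omega\sin(\omega t)$ and for $\alpha = 0$ it returns $\cos(\omega t)$, in line with the consistency limits.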

Although this seems to work just fine as a good definition of fractional differentiation, we are now faced with many questions. For which functions $u$ is $\frac{d^\alpha u}{dt^\alpha}$ well-defined? How do we perform the inverse Fourier transform to actually compute $\frac{d^\alpha u}{dt^\alpha}$ for specific functions $u$? Does $\frac{d^\alpha u}{dt^\alpha}$ look like a limit of an incremental quotient, similar to the classical derivative? For which classes of $u$ does the Fundamental Theorem of Fractional Calculus hold? In which sense are the consistency limits valid? But, even before all of that: what is the meaning of the fractional power $(i\xi)^\alpha$ of the purely imaginary complex number $i\xi$? For example, $(i)^{1/2}$ can be either $e^{i\pi/4}$ or $e^{i5\pi/4}$, so which root should we choose? We can begin to answer these questions by introducing a powerful tool: the method of semigroups.

## The Method of Semigroups for Fractional Derivatives

The definitions of fractional derivative and fractional integral given by Fourier can be realized as the positive and negative fractional powers of the derivative operator, respectively. Indeed, we do this just by extension of meaning: on the Fourier side, the derivative $\frac{d}{dt}$ is multiplication by $i\xi$, the integer power $\frac{d^n}{dt^n}$ of the derivative operator corresponds to $(i\xi)^n$, so the fractional power $\big(\frac{d}{dt}\big)^\alpha$ of the derivative is given as multiplication by $(i\xi)^\alpha$.

The method of semigroups is a very general tool to precisely define, characterize, analyze, and use fractional powers of linear operators in concrete problems. In fact, it can be applied to operators such as the Laplacian Sti19, the heat operator ST17, the Laplace–Beltrami operator on a Riemannian manifold, the discrete Laplacian, the discrete derivative, the wave operator, and many others. It has also been used in the numerical and computational implementation of fractional operators with finite difference and finite element methods. The theory was first established in ST10 for operators on Hilbert spaces, and then extended in GMS13 to operators on Banach spaces. We will not describe the whole theory here nor list all of its applications (to see how it works in the case of the fractional Laplacian and to find more references, we refer to Sti19). Instead, we will show how the methodology can unpack Fourier’s definition of fractional derivatives and start answering some of the questions left open at the end of the previous section. The following discussion is based on BMRST16, where the reader can find all the proofs.

From now on, we will just focus on the case $0 < \alpha < 1$. If we want to analyze higher order fractional derivatives, say, of order $2.5$ or $\pi$ then, as we can write $2.5 = 2 + 0.5$ and $\pi = 3 + (\pi - 3)$, we only need to study the operators $\frac{d^2}{dt^2}\frac{d^{0.5}}{dt^{0.5}}$ and $\frac{d^3}{dt^3}\frac{d^{\pi-3}}{dt^{\pi-3}}$.
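
As an illustration of this splitting (a sketch of ours, using the phase-shift rule for the cosine that follows from Fourier's definition), the derivative of order $2.5$ of $\cos t$ can be computed either directly or as two classical derivatives applied to the half derivative:

```python
import math

def frac_deriv_cos(alpha, t):
    """Phase-shift rule for u(t) = cos(t) coming from Fourier's definition:
    d^alpha/dt^alpha cos(t) = cos(t + alpha*pi/2)."""
    return math.cos(t + alpha * math.pi / 2)

t = 0.8
order_2_5 = frac_deriv_cos(2.5, t)
# Split 2.5 = 2 + 0.5: the half derivative of cos is cos(t + pi/4),
# and two classical derivatives of cos(t + pi/4) give -cos(t + pi/4).
two_then_half = -frac_deriv_cos(0.5, t)
```

The two computations agree because shifting the phase by $2.5 \cdot \pi/2$ is the same as shifting it by $\pi/4$ and then by $\pi$.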

Let us next recall three important properties of the Fourier transform. We have already seen two of them: the Fourier inversion formula in (1) and the relation between the Fourier transform and derivatives of integer order $n$ in (2). The third one is that the Fourier transform of a translation of $u$ corresponds to a modulation of $\widehat{u}$ by a complex exponential. Indeed, if the translation operator $T_s$, $s \in \mathbb{R}$, acts on $u$ as $T_s u(t) = u(t-s)$, then $\widehat{T_s u}(\xi) = e^{-i\xi s}\,\widehat{u}(\xi)$. Clearly, we have $T_0 = \mathrm{Id}$ and $T_s T_r = T_{s+r}$. These properties imply that the family of operators $\{T_s\}_{s \ge 0}$ is a semigroup, known as the semigroup of left translations. We call it “left” because, for $s > 0$, $T_s u(t) = u(t-s)$ looks at the values of $u$ to the left of $t$. The restriction to $s \ge 0$ will be apparent soon.

The key to the semigroup method for fractional derivatives lies in two important integral identities involving the Gamma function $\Gamma$. By using the Cauchy integral theorem and the unique continuation theorem of complex analysis, it is proved in BMRST16, Corollary 2.2 that, for any $0 < \alpha < 1$ and $\xi \in \mathbb{R}$,

$$(i\xi)^\alpha = \frac{1}{\Gamma(-\alpha)}\int_0^\infty \big(e^{-i\xi s} - 1\big)\,\frac{ds}{s^{1+\alpha}}.$$
In particular, this formula implies that $(i\xi)^\alpha$ on the left-hand side is chosen from the principal branch of the multi-valued complex function $z^\alpha$, and this answers the question of which power we should select. If we multiply both sides of this identity by $\widehat{u}(\xi)$, the left-hand side becomes $\widehat{\frac{d^\alpha u}{dt^\alpha}}(\xi)$, while in the integrand on the right-hand side we get $e^{-i\xi s}\widehat{u}(\xi) - \widehat{u}(\xi)$, which is the Fourier transform of $T_s u - u$ (note that $\widehat{T_s u}(\xi) = e^{-i\xi s}\widehat{u}(\xi)$). Hence, after inverting back from the Fourier side and making a simple change of variables, we find the pointwise formula

$$\frac{d^\alpha u}{dt^\alpha}(t) = \frac{1}{\Gamma(-\alpha)}\int_0^\infty \big(u(t-s) - u(t)\big)\,\frac{ds}{s^{1+\alpha}}. \tag{3}$$
One impressive aspect of the method of semigroups becomes evident: we have found the pointwise formula for $\frac{d^\alpha u}{dt^\alpha}$ without directly computing the inverse Fourier transform of $(i\xi)^\alpha\,\widehat{u}(\xi)$ (which should be performed in the sense of tempered distributions because $(i\xi)^\alpha$ is not a bounded Fourier multiplier).

The fractional derivative (3) involves a sort of fractional incremental quotient in which $u(t)$ is compared with $u(t-s)$ through the interaction kernel $s^{-1-\alpha}$. The kernel becomes singular when $s = 0$. This gives an idea that $u$ must have some regularity at $t$ in order to have a well-defined fractional derivative.
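
Formula (3) can be tested numerically. The sketch below is our own (the quadrature after the substitution $s = e^x$ is just one convenient choice, not the article's): it discretizes the integral for $u(t) = e^{\lambda t}$ with $\lambda > 0$, for which Fourier's multiplier definition predicts the value $\lambda^\alpha e^{\lambda t}$.

```python
import math

def marchaud_exp(lam, alpha, t, lo=-60.0, hi=60.0, n=12000):
    """Trapezoidal quadrature of the pointwise formula
    (1/Gamma(-alpha)) * int_0^inf (u(t-s) - u(t)) / s^(1+alpha) ds
    for u(r) = exp(lam*r), after the substitution s = e^x, which makes
    the integrand decay exponentially in both directions."""
    h = (hi - lo) / n
    total = 0.0
    for k in range(n + 1):
        x = lo + k * h
        s = math.exp(x)
        weight = 0.5 if k in (0, n) else 1.0
        total += weight * (math.exp(lam * (t - s)) - math.exp(lam * t)) * math.exp(-alpha * x)
    return total * h / math.gamma(-alpha)

# Fourier's definition predicts lam^alpha * e^(lam*t)
approx = marchaud_exp(0.7, 0.5, 1.0)
exact = 0.7**0.5 * math.exp(0.7)
```

Note that Python's `math.gamma` accepts the negative argument $-\alpha$, so the constant $1/\Gamma(-\alpha)$ (which is negative for $0 < \alpha < 1$) needs no special handling.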

Now, to compute $\frac{d^\alpha u}{dt^\alpha}(t)$ we need to know the values of $u(t-s)$ for all $s > 0$. In other words, $\frac{d^\alpha}{dt^\alpha}$ is a nonlocal operator. Nonlocal also means that if $u$ has compact support then, in general, $\frac{d^\alpha u}{dt^\alpha}$ has noncompact support. This is not the case for the computation of the classical derivative $\frac{du}{dt}(t)$, for which it is enough to know $u$ for values infinitesimally close to $t$. Furthermore, the support of $\frac{du}{dt}$ is always contained in the support of $u$. Because of this, we say that classical differential operators are local operators.

Another aspect of (3) is that $\frac{d^\alpha}{dt^\alpha}$ is one-sided in the sense that we need the values $u(t-s)$ for $s > 0$, that is, from the past. We should remember that the classical derivative has a hidden two-sided structure. Indeed, in calculus, we first introduce the derivatives from the left and from the right by taking left- and right-sided limits of incremental quotients, respectively. In the very special situation in which both limits coincide, we call that common limit the derivative. The difference between left and right derivatives is very well understood in numerical analysis because of the different properties of the explicit/forward and the implicit/backward Euler methods.

As a matter of fact, BMRST16 shows that $\frac{d^\alpha}{dt^\alpha}$ is the fractional power $\big(\frac{d}{dt}\big)^{\alpha}_{\mathrm{left}}$ of the left derivative operator $\big(\frac{d}{dt}\big)_{\mathrm{left}}$. Here $\big(\frac{d}{dt}\big)_{\mathrm{left}}$ is the (negative of the) infinitesimal generator of the left translation semigroup $\{T_s\}_{s \ge 0}$, namely,

$$\Big(\frac{d}{dt}\Big)_{\mathrm{left}} u(t) = \lim_{s \to 0^+} \frac{u(t) - u(t-s)}{s} = \lim_{s \to 0^+} \frac{(\mathrm{Id} - T_s)u(t)}{s}.$$
Thus, if $u$ is differentiable, then $\big(\frac{d}{dt}\big)_{\mathrm{left}} u = \frac{du}{dt}$. One can also obtain the fractional derivative that looks into the future by taking the fractional power of $\big(\frac{d}{dt}\big)_{\mathrm{right}}$, the derivative from the right, which is the infinitesimal generator of the right translation semigroup, see BMRST16.

The pointwise formula for the fractional integral can be found in a similar way, starting with the other important Gamma function integral formula

$$\lambda^{-\alpha} = \frac{1}{\Gamma(\alpha)}\int_0^\infty e^{-\lambda s}\,s^{\alpha-1}\,ds.$$
In parallel with the computation above, it follows that

$$\frac{d^{-\alpha} u}{dt^{-\alpha}}(t) = \frac{1}{\Gamma(\alpha)}\int_0^\infty u(t-s)\,s^{\alpha-1}\,ds, \tag{4}$$
which is the negative fractional power $\big(\frac{d}{dt}\big)^{-\alpha}_{\mathrm{left}}$ of the left derivative operator $\big(\frac{d}{dt}\big)_{\mathrm{left}}$. For these details, see BMRST16.
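
The fractional integral (4) can be checked numerically in the same spirit (our own sketch, with the same hypothetical $s = e^x$ quadrature): for $u(t) = e^{\lambda t}$, $\lambda > 0$, the result should be $\lambda^{-\alpha} e^{\lambda t}$, so applying the fractional derivative afterwards would return $e^{\lambda t}$, in line with the Fundamental Theorem of Fractional Calculus.

```python
import math

def weyl_integral_exp(lam, alpha, t, lo=-60.0, hi=6.0, n=8000):
    """Trapezoidal quadrature of the Weyl fractional integral
    (1/Gamma(alpha)) * int_0^inf u(t-s) s^(alpha-1) ds
    for u(r) = exp(lam*r), after the substitution s = e^x."""
    h = (hi - lo) / n
    total = 0.0
    for k in range(n + 1):
        x = lo + k * h
        s = math.exp(x)
        weight = 0.5 if k in (0, n) else 1.0
        total += weight * math.exp(lam * (t - s)) * math.exp(alpha * x)
    return total * h / math.gamma(alpha)

# expected: lam^(-alpha) * e^(lam*t)
approx = weyl_integral_exp(0.7, 0.5, 1.0)
exact = 0.7**-0.5 * math.exp(0.7)
```

Multiplying the two computed factors, $\lambda^{\alpha}$ from the derivative and $\lambda^{-\alpha}$ from the integral, gives $1$, which is the Fourier-side statement of the inversion.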

The left-sided fractional derivative (3), which from now on will be denoted by $\big(\frac{d}{dt}\big)^{\alpha}_{\mathrm{left}} u$, is known as the Marchaud–Weyl fractional derivative. André Marchaud found this formula in his 1927 PhD dissertation Mar27. His derivation, though, followed completely different motivations and arguments. Indeed, Marchaud wanted to extend the Riemann–Liouville fractional derivative (which we will not discuss here) to unbounded intervals. Instead, our approach is based on Fourier’s original definition Fou88 and semigroups as in BMRST16. On the other hand, the fractional integral (4), which will be denoted by $\big(\frac{d}{dt}\big)^{-\alpha}_{\mathrm{left}} u$, is known as the Weyl fractional integral. It was introduced by Hermann Weyl in his 1917 paper Wey17. Weyl used Fourier series and even found the formula (3) for the inverse of (4). Again, (4) looks at the values of $u$ in the past and, like the classical integration operator, is nonlocal.

Since these fractional operators consider the values of $u$ in the past, it seems reasonable to use (3) or (4) to account for memory effects. In probability language, non-Markovian processes are those in which the evolution of a system depends not only on the present but also on the past history. As we will see, this intuition leads to effective models for population growth, for the response of viscoelastic materials in mechanics and bioengineering, and for anomalous diffusions in physics, fluid mechanics, and biology. But before considering these applications, let us outline some elements of the recently developed analytical theory of (left-sided) fractional derivatives and integrals. The theory for right-sided fractional calculus can be established without any problems in an analogous way.

## Theory of Left-sided Fractional Derivatives

It is easy to check that if $u$ is, say, bounded at $-\infty$ and Hölder continuous of order $\beta$ at $t$ from the left, for some $\alpha < \beta \le 1$, then $\big(\frac{d}{dt}\big)^{\alpha}_{\mathrm{left}} u(t)$ is well-defined. More generally, if $u \in C^{0,\beta}(\mathbb{R})$ for $0 < \alpha < \beta \le 1$ with appropriate decay at $-\infty$, then $\big(\frac{d}{dt}\big)^{\alpha}_{\mathrm{left}} u \in C^{0,\beta-\alpha}(\mathbb{R})$ and $\big(\frac{d}{dt}\big)^{-\alpha}_{\mathrm{left}} u \in C^{0,\beta+\alpha}(\mathbb{R})$ whenever $\beta + \alpha \le 1$. Thus, at the scale of Hölder spaces, $\big(\frac{d}{dt}\big)^{\alpha}_{\mathrm{left}}$ and $\big(\frac{d}{dt}\big)^{-\alpha}_{\mathrm{left}}$ behave as differentiation and integration of fractional order $\alpha$, respectively.

Clearly, the fractional derivative of a constant is zero: $\big(\frac{d}{dt}\big)^{\alpha}_{\mathrm{left}} C = 0$ on $\mathbb{R}$. By using the definition of the Beta function, it can be seen that, for any $t \in \mathbb{R}$ (with $t_+ = \max\{t, 0\}$),

$$\Big(\frac{d}{dt}\Big)^{\alpha}_{\mathrm{left}} (t_+)^{\alpha} = \Gamma(\alpha+1)\,\chi_{(0,\infty)}(t). \tag{5}$$
We can also see the influence of the past by modifying the previous function on $(-\infty, 0]$. Indeed, if we define $u(t) = t^{\alpha}$ for $t > 0$ and $u(t) = 1$ (instead of $0$) for $t \le 0$, then

$$\Big(\frac{d}{dt}\Big)^{\alpha}_{\mathrm{left}} u(t) = \Big(\Gamma(\alpha+1) - \frac{t^{-\alpha}}{\Gamma(1-\alpha)}\Big)\chi_{(0,\infty)}(t). \tag{6}$$
With a simple change of variables one can also check that, for any $\lambda > 0$,

$$\Big(\frac{d}{dt}\Big)^{\alpha}_{\mathrm{left}} e^{\lambda t} = \lambda^{\alpha} e^{\lambda t}.$$
From here, we can deduce the fractional derivatives of sine and cosine. In all these examples, when $\alpha \to 1^-$ we obtain the left derivative of the functions.

However, some paradoxes arise in the limit as $\alpha \to 0^+$. Obviously, we cannot recover the constant function from the limit $\lim_{\alpha \to 0^+} \big(\frac{d}{dt}\big)^{\alpha}_{\mathrm{left}} C = 0$. More surprisingly, in the limit as $\alpha \to 0^+$ in (5) we get $\chi_{(0,\infty)}(t)$ back, but that is not the case of (6), in which for $t > 0$ we obtain $\Gamma(1) - \frac{1}{\Gamma(1)} = 0$. This is a consequence of the nonlocality: the function in (6) has a fat tail at $-\infty$, and this influences its fractional derivative at $t$ more than the skinny tail of $(t_+)^{\alpha}$ does.
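
The influence of a constant past can be isolated in closed form (a small computation of our own, not from the article): take $u = 1$ on $(-\infty, 0]$ and $u = 0$ on $(0, \infty)$. For $t > 0$ we have $u(t-s) - u(t) = 1$ exactly when $s > t$, so the integral in (3) can be evaluated by hand.

```python
import math

def marchaud_past_step(alpha, t):
    """Marchaud-Weyl derivative at t > 0 of the function equal to 1 on
    (-inf, 0] and 0 on (0, inf). Since u(t-s) - u(t) = 1 precisely for
    s > t, formula (3) reduces to
        (1/Gamma(-alpha)) * int_t^inf s^(-1-alpha) ds
        = (t^(-alpha)/alpha) / Gamma(-alpha)
        = -t^(-alpha) / Gamma(1-alpha)."""
    return (t ** -alpha / alpha) / math.gamma(-alpha)

# The past alone produces a nonzero (negative) derivative at every t > 0.
val = marchaud_past_step(0.5, 2.0)

# As alpha -> 0+, this contribution tends to -1 instead of vanishing:
# the fat tail of the past keeps influencing the fractional derivative.
near_zero = marchaud_past_step(1e-6, 2.0)
```

Here we used the identity $\Gamma(1-\alpha) = -\alpha\,\Gamma(-\alpha)$ to simplify the constant.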

For sufficiently good functions $u$ and $v$, one can check that

$$\int_{-\infty}^{\infty} \Big(\frac{d}{dt}\Big)^{\alpha}_{\mathrm{left}} u(t)\,v(t)\,dt = \int_{-\infty}^{\infty} u(t)\,\Big(\frac{d}{dt}\Big)^{\alpha}_{\mathrm{right}} v(t)\,dt.$$
In view of this relation, we can define the fractional derivative of a distribution $u$ by $\big\langle \big(\frac{d}{dt}\big)^{\alpha}_{\mathrm{left}} u, v \big\rangle = \big\langle u, \big(\frac{d}{dt}\big)^{\alpha}_{\mathrm{right}} v \big\rangle$, for suitable test functions $v$. The distributional space must reflect the one-sided nature of fractional derivatives. It was shown in SV20 that the appropriate test functions $v$ must be supported on intervals of the form $(-\infty, b)$, so as to look at $u$ from the left. Since $\big(\frac{d}{dt}\big)^{\alpha}_{\mathrm{right}} v$ will also have support in $(-\infty, b)$, in the action $\big\langle u, \big(\frac{d}{dt}\big)^{\alpha}_{\mathrm{right}} v \big\rangle$ the only values of $u$ involved are those to the left of $b$.

Notice that, if $v$ is smooth, then $\big(\frac{d}{dt}\big)_{\mathrm{left}} v = \frac{dv}{dt}$. In other words, due to the regularity of the test functions $v$, the left derivative coincides with the classical derivative in the distributional and weak senses. Thus, to define left-sided Sobolev spaces, one needs to introduce weighted spaces that are capable of encoding the underlying left-sided structure. This is accomplished by using one-sided Sawyer weights $w \in A^{-}_{p}$, which are the good weights for the left-sided Hardy–Littlewood maximal function

$$M^{-}u(t) = \sup_{s > 0} \frac{1}{s}\int_{t-s}^{t} |u(r)|\,dr.$$
(It is important to remark that this is the original definition of maximal function given by Hardy and Littlewood in HL30.) Indeed, $M^{-}$ is bounded on the weighted space $L^{p}(\mathbb{R}, w)$, $1 < p < \infty$, if and only if $w \in A^{-}_{p}$, see Saw86. The class $A^{-}_{p}$ is larger than the usual Muckenhoupt class $A_{p}$, as the example $w(t) = e^{t}$ shows. The left-sided weighted Sobolev space for the left derivative is then defined as the set of functions $u \in L^{p}(\mathbb{R}, w)$ such that $\big(\frac{d}{dt}\big)^{\alpha}_{\mathrm{left}} u \in L^{p}(\mathbb{R}, w)$, where $0 < \alpha < 1$ and $w \in A^{-}_{p}$. The space is consistent with the left-sided fractional calculus in the sense that if $u \in L^{p}(\mathbb{R}, w)$ then $\big(\frac{d}{dt}\big)^{\alpha}_{\mathrm{left}} u$ is well-defined in the sense of distributions. Moreover, if the limit of the truncated integrals $\frac{1}{\Gamma(-\alpha)}\int_{\varepsilon}^{\infty}\big(u(t-s) - u(t)\big)\frac{ds}{s^{1+\alpha}}$ as $\varepsilon \to 0^+$ exists in $L^{p}(\mathbb{R}, w)$ then $\big(\frac{d}{dt}\big)^{\alpha}_{\mathrm{left}} u \in L^{p}(\mathbb{R}, w)$ and the limit is exactly $\big(\frac{d}{dt}\big)^{\alpha}_{\mathrm{left}} u$. Conversely, if $\big(\frac{d}{dt}\big)^{\alpha}_{\mathrm{left}} u \in L^{p}(\mathbb{R}, w)$ then the truncated integrals converge to it in $L^{p}(\mathbb{R}, w)$ and almost everywhere. It can also be proved that if and