DINI DERIVATIVES AND REGULARITY FOR EXCHANGEABLE INCREMENT PROCESSES

Abstract. Let X be an exchangeable increment (EI) process whose sample paths are of infinite variation. We prove that, for any fixed t, almost surely, limsup_{h→0±} (X_{t+h} − X_t)/h = ∞ and liminf_{h→0±} (X_{t+h} − X_t)/h = −∞. This extends a celebrated result obtained by Rogozin for Lévy processes in 1968 and completes the known picture for finite-variation EI processes. Applications are numerous. For example, we deduce that both half-lines (−∞, 0) and (0, ∞) are visited immediately by infinite variation EI processes (called upward and downward regularity). We also generalize the zero-one law of Millar for Lévy processes by showing continuity of X when it reaches its minimum in the infinite variation EI case; an analogous result for all EI processes links right and left continuity at the minimum with upward and downward regularity. We also consider the weak convergence of conditioned Brownian bridges to the normalized Brownian excursion, proved by Durrett, Iglehart, and Miller in [DIM77] and broadened to a subclass of Lévy processes and EI processes by Chaumont and the second author; we prove it here for all infinite variation EI processes. We furthermore extend the description of the convex minorant known for Lévy processes, found in [Ann. Prob. 40 (2012), pp. 1636-1674], to non-piecewise linear EI processes. Our main tool for studying the Dini derivatives is a change of measure for EI processes which extends the Esscher transform for Lévy processes.

Considering (X_t − tX_1, t ≤ 1) also yields non-Lévy EI processes, so that our results can be applied more broadly.
Also, the analysis of EI processes is sometimes aided by simple combinatorial considerations. Indeed, for random walks, the combinatorial considerations of [Spi56] led to a more thorough understanding of the fluctuation theory (the study of extremes) of random walks and Lévy processes, and in particular of the celebrated arcsine law for symmetric random walks and Lévy processes. They also recover the following formula of [Kac54]:

(1) E(max_{0≤k≤n} X_k) = Σ_{k=1}^{n} E(X_k⁺)/k.

More recently, [AP11] introduced a bijection on permutations which ultimately leads to a description of the convex minorant of a (discrete time) EI process and reinterprets the fluctuation theory of random walks. The Kac-Spitzer identity just displayed is interpreted as the equality in law

max_{0≤k≤n} X_k = Σ_{i=1}^{K_n} (X_{S_i} − X_{S_{i−1}})⁺ in law,

where 0 = S_0 < S_1 < ··· < S_{K_n} = n is the partition obtained from a uniform stick-breaking process on {1, …, n} independent of X. The link with the typical fluctuation theory (of random walks and Lévy processes) comes from considering a random n independent of X and geometrically distributed. The partition is then seen to arise from a Poisson point process, and the right-hand side becomes a compound Poisson distribution in the random walk or Lévy process case; cf. Theorem 4 in [AP11]. The description of the convex minorant for discrete time EI processes is used here to prove an analogous theorem for continuous time EI processes. The multidimensional case is much less studied, but the combinatorial lemma of [BNB63], from which one obtains the expected characteristics of the convex hull of (2D) random walks (like perimeter length or area), has been extended in various directions (and dimensions!), including [RFW17, KVZ17a, KVZ17b, VZ18]. Still in the realm of fluctuation theory, [Ber93] constructs (one-dimensional) random walks conditioned to stay positive through a bijection on permutations; this result is used here to study continuity of an EI process when it reaches its minimum.
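As a sanity check (ours, not in the sources), identity (1) can be verified exactly for the simple symmetric random walk by enumerating all paths with rational arithmetic; the helper names below are our own.

```python
from fractions import Fraction
from itertools import product

def max_running(path):
    """Running maximum max_{0<=k<=n} S_k of a walk, including S_0 = 0."""
    s, m = 0, 0
    for step in path:
        s += step
        m = max(m, s)
    return m

def kac_spitzer_check(n):
    """Exactly verify E[max_{0<=k<=n} S_k] = sum_{k=1}^n E[S_k^+]/k for the
    simple symmetric walk, by enumerating all 2^n (resp. 2^k) paths."""
    w = Fraction(1, 2 ** n)  # probability of each length-n path
    lhs = sum(w * max_running(p) for p in product((-1, 1), repeat=n))
    rhs = Fraction(0)
    for k in range(1, n + 1):
        esk_plus = sum(Fraction(1, 2 ** k) * max(sum(q), 0)
                       for q in product((-1, 1), repeat=k))
        rhs += esk_plus / k
    return lhs, rhs

lhs, rhs = kac_spitzer_check(6)
assert lhs == rhs  # the identity holds exactly
```

For n = 2 one computes both sides by hand: the expected maximum is 3/4, as is E(S_1⁺)/1 + E(S_2⁺)/2 = 1/2 + 1/4.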
Away from random walks, (discrete time) EI processes of a particular type are associated to trees (with a given degree distribution) in [AB12] and [BM14], and combinatorial considerations give information on this probabilistic model. To obtain the trees from their coding EI process, one needs access to the (discrete) local times, as discussed in Proposition 1.2 of [LG05]. A discussion of local times in continuous time must include consideration of regularity, which was the main motivation of this paper. This discussion would probably enable one to construct continuum random trees associated to EI processes directly (by defining a local time functional called the height process) instead of through a line-breaking construction as in [AMP04] or through an associated Lévy process as in [BDW18]. Kallenberg obtained in [Kal73] the following representation of EI processes X: there exist random variables α, β = (β_i, i ∈ N) and σ ≥ 0 which are independent of an iid sequence of uniform random variables (U_i, i ≥ 1) and of a Brownian bridge b such that

X_t = αt + σb_t + Σ_{i≥1} β_i [1_{U_i ≤ t} − t].
When α, β, and σ are deterministic, the EI process X is termed extremal. All EI processes are therefore mixtures of extremal EI processes, and we say that X has canonical parameters pα, σ, βq.
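To fix ideas, Kallenberg's representation can be simulated directly. The following sketch (helper names ours; the Brownian bridge is approximated by a random-walk skeleton and only finitely many β_i are used, so this is a truncation) samples an extremal EI process on a time grid:

```python
import math
import random

def ei_path(alpha, sigma, betas, grid, rng, n_bridge=2000):
    """Sample an extremal EI process at the given grid times via Kallenberg's
    representation X_t = alpha*t + sigma*b_t + sum_i beta_i*(1{U_i<=t} - t).
    The bridge b is approximated as B_t - t*B_1 from a Gaussian random walk."""
    U = [rng.random() for _ in betas]  # jump times
    steps = [rng.gauss(0.0, math.sqrt(1.0 / n_bridge)) for _ in range(n_bridge)]
    B = [0.0]
    for s in steps:
        B.append(B[-1] + s)

    def bridge(t):
        # crude skeleton of a Brownian bridge on [0, 1]
        return B[int(t * n_bridge)] - t * B[-1]

    out = []
    for t in grid:
        jumps = sum(b * ((1.0 if u <= t else 0.0) - t) for b, u in zip(betas, U))
        out.append(alpha * t + sigma * bridge(t) + jumps)
    return out
```

Note that the representation forces X_0 = 0 and X_1 = α: at t = 1 the bridge and every compensated indicator vanish, which is why processes "ending at zero" are exactly those with α = 0.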
Remark. Our results are stated for extremal processes. They can be generalized by conditioning on the parameters on the set where these satisfy the given hypotheses.
The sample paths of an extremal EI process X are of infinite variation if and only if the following condition holds.

Infinite variation: either σ > 0 or Σ_i |β_i| = ∞.

Our first result is the following:

Theorem 1. Let X be an extremal EI process of infinite variation. Then, for any fixed t, almost surely,

limsup_{h→0} (X_{t+h} − X_t)/h = ∞ and liminf_{h→0} (X_{t+h} − X_t)/h = −∞,

both from the left and from the right.
Reversibility for EI processes (the fact that (X_1 − X_{(1−t)−}, t ≤ 1) has the same law as X) implies that it is enough to prove the above theorem for the right-hand derivatives, and by exchangeability it is enough to consider t = 0. We therefore define the upper and lower Dini derivatives at zero, D̄(X) = limsup_{t↓0} X_t/t and D̲(X) = liminf_{t↓0} X_t/t. In contrast, finite variation EI processes X, which satisfy σ = 0 and Σ_i |β_i| < ∞, can be written as X_t = α̃t + Σ_i β_i 1_{U_i≤t}, where α̃ = α − Σ_i β_i. Finite variation EI processes are therefore characterized by the parameters (α̃, β). It is well known that D̄(X) = D̲(X) = α̃ almost surely (cf. [Kal05, Cor. 3.30]).
Theorem 1 was proved for Lévy processes in [Rog68] by using an integro-differential equation initially found by Cramér and later recognized and analyzed as a resolvent equation by Watanabe in [Wat71]. Additional proofs, based on the fluctuation theory of Lévy processes, may be found in [Sat99, Ch. 9, §47] and [Vig02]. Bertoin proved the limsup statement in the spectrally positive EI case (when β_i ≥ 0 for all i) in [Ber02], based on the results of Fristedt from [Fri72]. Kallenberg takes results further by considering upper envelopes of EI processes in [Kal05], through clever couplings with Lévy processes. These results are nevertheless insufficient to obtain Theorem 1. A particular case of the above result is found in [CUB15, Prop. 3.5] under an additional hypothesis on β. Additionally, the same proposition proves Theorem 1 whenever σ ≠ 0, using the law of the iterated logarithm for Brownian motion. Hence, we could assume that σ = 0 in our proofs, but the method is robust enough to handle this case as well. Actually, our method also handles general Lévy processes and gives an independent proof of Rogozin's result. This is done in Section 2, while Theorem 1 is proved in Section 3.
Our next application is to show that the zero-one laws of Millar for Lévy processes (cf. items (a) and (b) of [Mil77, Thm. 3.1]) remain valid for EI processes, as a consequence of the following result, which links the behavior of the process when it reaches its minimum with its behavior at time zero.
Definition. An EI process X is said to be upward regular if inf{t ∈ [0, 1] : X_t > 0} = 0 almost surely. X is downward regular if −X is upward regular.
Knight has given in [Kni96] necessary and sufficient conditions for X to admit a unique minimum in the extremal setting; we refer to this condition as UM.

Theorem 2. Let X be an extremal EI process satisfying UM. Let X̲_1 = inf_{s∈[0,1]} X_s and let ρ be the unique element of {t ∈ [0, 1] : X_t ∧ X_{t−} = X̲_1}. Then X_ρ > X̲_1 if and only if X is irregular upward, and X_{ρ−} > X̲_1 if and only if X is irregular downward. In particular, X is continuous at ρ if and only if X is both upward and downward regular, and this holds on the set where X has paths of infinite variation.
Millar actually proves the zero-one law at the minimum in the Lévy process setting for more general random intervals in lieu of [0, 1] and refers to this as the pure behavior of Lévy processes, while noting that it is rather exceptional in the class of Markov processes. Millar also remarks that it is this zero-one law which implies that the conditional law of X_{ρ+·} given X_{·∧ρ} depends only on X̲_1 and X_ρ. The extension to general random times follows quite easily with Millar's arguments from the result stated above. When X is a Lévy process of finite variation, necessary and sufficient conditions for regularity have been found by Bertoin in [Ber97] in terms of the Lévy measure. We believe that a similar characterization should be available for EI processes in terms of β; this is left as an open problem.
Regularity of half-lines for a Lévy process has many other applications: it helps in obtaining perfectness of the zero set and in constructing a continuous (Markovian) local time (Theorem 6.6 of [Kyp14]); it implies uniqueness for solutions of time-change equations used to construct multitype branching processes (Lemma 6 in [CPGUB17]); it proves continuity of the Vervaat transform for cyclically exchangeable processes ([CUB15]); and regularity of (−∞, 0) has been used when pricing perpetual American put options, as a condition for smooth pasting (see the discussion in Section 1.4.4 of [KL05]).
Our second application concerns the weak limit of an EI process X ending at zero (in other words, with α = 0), conditioned on remaining above −ε as ε → 0. The limiting process is the Vervaat transform of X, obtained by cyclically shifting the path of X at the time it reaches its minimum.

Theorem 3. Let X be an EI process with α = 0 which is both upward and downward regular. Consider ε > 0 and let X^ε have the law of X conditionally on X̲_1 > −ε. Then, as ε → 0, X^ε converges weakly to the Vervaat transform of X.
Note that the above theorem always applies to infinite variation EI processes thanks to Theorem 1. The above theorem was proved when X is a Brownian bridge from 0 to 0 of length 1 by Durrett, Iglehart, and Miller in [DIM77]. The form given above is taken from [CUB15] and is more general but is actually a simple consequence of the results in that paper. What was lacking in the latter reference is the zero-one law at the minimum of our Theorem 2 and, in particular, the fact that all infinite variation EI processes reach their minimum continuously.
Our next application is to extend the description of the convex minorant of a Lévy process of [PUB12] to the EI setting. In the latter reference, it is noted that this description gives another interpretation of a fundamental fact of the fluctuation theory of Lévy processes, namely, the Pečerskiȋ-Rogozin identity of [PR69].
We will consider EI processes which do not have piecewise linear trajectories. In the extremal case, this happens if and only if the following condition holds.

Non-piecewise linearity (NPL): σ > 0 or Σ_i 1_{β_i ≠ 0} = ∞.

We will call these processes of NPL type. The case of infinite variation EI processes covered by Theorem 1 is an important step in the proof.
Definition. The convex minorant of a càdlàg function f : [0, 1] → R is the greatest convex function c that is bounded above by f. The excursion set is the open set

O = {t ∈ [0, 1] : f(t) > c(t) or f(t−) > c(t)}.
Its maximal components, intervals of the form (g, d), are termed excursion intervals; each has an associated length d − g, increment f(d) − f(g), slope (f(d) − f(g))/(d − g), and excursion e(t) = f(g + t) − c(g + t), defined for t ∈ [0, d − g].
Recall that an upper bounded family of convex functions has a convex supremum, which explains why the convex minorant exists. Let C be the convex minorant of an EI process X of NPL type. As stated in the next result, its excursion set O = {t ∈ [0, 1] : X_t > C_t or X_{t−} > C_t} is open and of Lebesgue measure 1. We will consider the following precise ordering of the excursion intervals. Let (V_i, i ≥ 1) be an iid sequence of uniform random variables on [0, 1] and let (g_1, d_1), (g_2, d_2), … be the sequence of distinct excursion intervals which are successively discovered by the sequence (V_i). With them, we can define the sequences of lengths, slopes, and excursions (e^i). We will also consider the partition induced by a stick-breaking scheme based on (V_i). Define S_0 = 0, L_{n+1} = (1 − S_n)V_{n+1}, and S_{n+1} = S_n + L_{n+1}.
Then (L_i) is the uniform stick-breaking process, and S is the partition of [0, 1] induced by its cumulative sums. Note that this is a very sparse partition of [0, 1], which we can use to analyze X by considering the increments X_{S_i} − X_{S_{i−1}} and the sequence of Knight bridges (K^i), where K^i is the Knight transform of X − X_{S_{i−1}} on [0, L_i]. The Knight transform of an EI process Y starting at zero on an interval [0, t] and satisfying UM is obtained by first defining the Knight bridge K_s = Y_s − sY_t/t, letting ρ be the location of its (unique) minimum, and finally defining s ↦ K_{(ρ+s) mod t} − K_ρ ∧ K_{ρ−} for s ∈ [0, t].

Theorem 4. Assume that the EI process X satisfies NPL. Then its excursion set O is open and of Lebesgue measure 1, and the following equality in law holds:

(d_i − g_i, e^i)_{i≥1} = (L_i, K^i)_{i≥1} in law.

Recent papers have used the above description of the convex minorant (in the Lévy process case) to develop an exact simulation method for the maximum of a stable process (found in [GMU18b]) and an approximate (albeit very efficient; cf. [GMU18a]) simulation method for the maximum of Lévy processes whose one-dimensional distributions can be sampled exactly. This is particularly relevant to Monte Carlo methods for ruin probabilities with finite and deterministic horizon. In [CM15], the classical Cramér-Lundberg ruin process is generalized to an exchangeable increment process on [0, ∞) to relax the independence between claim sizes; these are mixtures of Lévy processes. In contrast to the classical setting, when working under the classical net profit condition, the ruin probability might not converge to zero as the initial capital goes to infinity, and a new net profit condition is needed. In the finite-horizon case, we would be dealing with an EI process of the type considered here; Theorem 4 would give us access to the ruin probabilities.
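Although the theorem concerns continuous time, its discrete-time counterpart is easy to visualize: the convex minorant of the piecewise-linear path through the points (k/n, X_{k/n}) is the lower convex hull of those points. A sketch (names ours, not the paper's construction), using the monotone-chain sweep:

```python
def convex_minorant(values):
    """Greatest convex function below the piecewise-linear path through
    (k/n, values[k]); returned as its values at the same grid points."""
    n = len(values) - 1
    pts = [(k / n, v) for k, v in enumerate(values)]
    hull = []
    for p in pts:  # lower convex hull, left to right
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle point if it lies on or above the chord
            if (x2 - x1) * (p[1] - y1) <= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(p)
    # evaluate the hull at every grid point by linear interpolation
    out, j = [], 0
    for x, _ in pts:
        while j + 1 < len(hull) and hull[j + 1][0] < x:
            j += 1
        (x1, y1), (x2, y2) = hull[j], hull[min(j + 1, len(hull) - 1)]
        out.append(y1 if x2 == x1 else y1 + (y2 - y1) * (x - x1) / (x2 - x1))
    return out
```

Grid points where the path strictly exceeds the returned minorant are the discrete analogue of the excursion set O.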
We end this section with a few comments on the organization of the paper. Our main result is Theorem 1; all others have a simple proof from Theorem 1 and more specialized results from the literature. A brief outline of the proof of Theorem 1, which explains the organization of the paper, is as follows.
Step 1. Assume that σ = 0, α = 0, β_i ≥ 0 for all i, and Σ_i β_i = ∞ (the spectrally positive case).
• D̄(X) = ∞. This follows from results of [Fri72].
• D̲(X) = −∞. We use an exponential change of measure (which reduces to the well-known Esscher transform if X is a Lévy process), with parameters θ ∈ R and T ∈ (0, 1), to deduce that D̲(X) = α_θ/T + D̲(X^θ), where X^θ is another EI process and α_θ is a random variable (not independent of X^θ, although for Lévy processes α_θ is deterministic). A lower bound on the probability that an EI process with positive jumps is non-positive (found in [Sat99] for Lévy processes) implies that D̲(X^θ) ≤ 0. It remains to notice that the infinite variation hypothesis gives α_θ → −∞ as θ → −∞, thereby proving Theorem 1 in this case.
Step 2. Assume that σ ≠ 0 or Σ_j |β_j| = ∞; also set α = 0.
• D̲(X) = −∞. We note that the aforementioned lower bound is valid when σ ≠ 0 and that it works along deterministic subsequences, so that lim inf_{n→∞} X_{t_n}/t_n ≤ 0 whenever t_n ↓ 0 and X has only positive jumps and a Brownian component. We then write X = X^pos + X^neg, where X^pos and X^neg are independent, X^neg only has negative jumps, and X^pos only has positive jumps and contains the Brownian component (if any). If X^pos or X^neg has finite variation, we use [Kal05, Cor. 3.30] and Step 1. Hence, assume both have infinite variation. We then get a random subsequence T_n ↓ 0 such that X^neg_{T_n}/T_n → −∞ and, using independence, lim inf_n X^pos_{T_n}/T_n ≤ 0. Hence, we obtain D̲(X) = −∞.
• D̄(X) = ∞. Apply the previous case to −X.
The paper is organized as follows: In Section 2 we present a simplified proof following the outline above in the setting of Lévy processes. This is because the exponential change of measure and lower bounds on probabilities discussed above are already known. In Section 3, we consider Theorem 1 in the case of EI processes. Here, we state and prove the exponential change of measure and lower bounds for probabilities. Finally, Section 4 is devoted to the applications of our results and contains the proofs of Theorems 2, 3, and 4.

The Lévy process case
We now illustrate the proof of Theorem 1 in the case of Lévy processes. This proof is the only published one that does not use fluctuation theory for Lévy processes, and it can be considered simpler. It is based on basic facts about Lévy processes and on the Esscher transform; the reader might consult [Ber96] and [Sat99] for these facts, some of which we now recall. In particular, Lévy processes satisfy the Blumenthal zero-one law, and therefore the random variables D̄(X) and D̲(X) are actually constant.
Recall that X can be written as the independent sum of two Lévy processes X¹ and X², where X¹ has bounded jumps and X² is compound Poisson. Since lim_{t→0} X²_t/t exists and is finite, it suffices to prove Theorem 1 when X has bounded jumps.
Assume then that the jumps of X are bounded by 1. We can then determine X by its Laplace exponent

Ψ(λ) = αλ + σ²λ²/2 + ∫ [e^{λx} − 1 − λx] π(dx),

given by the Lévy-Khintchine formula. Let X be a Lévy process whose paths have infinite variation; equivalently, we assume that σ > 0 or ∫ |x| π(dx) = ∞. The Lévy measure π, which is concentrated on [−1, 1], satisfies ∫ x² π(dx) < ∞.
In other words, the characteristic triplet of X is pα, σ, πq.
The following result is a trivial extension of the well-known Esscher change of measure for Lévy processes, as found in [Kyp14]. It will imply that the superior and inferior limits in Theorem 1 are not finite. Let F_t = σ(X_s : s ≤ t).
Proposition 5 (Esscher transform). Fix θ ∈ R. Define the measure Q by its restriction to F_t via

dQ|_{F_t} = e^{θX_t − tΨ(θ)} dP|_{F_t}.

Then, under Q, the stochastic process X is a Lévy process whose Laplace exponent is Ψ_θ(λ) = Ψ(λ + θ) − Ψ(θ).
Here α_θ = Ψ_θ′(0) = α + θσ² + ∫ x(e^{θx} − 1) π(dx) denotes the drift of X under Q. Note that α_θ → −∞ as θ → −∞ when ∫ |x| π(dx) = ∞ or σ² > 0. We now specialize to the spectrally positive case and then use a (simple) argument to deduce the general case.
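For a concrete instance of Proposition 5, take X a Brownian motion with drift (π = 0), where Ψ(λ) = αλ + σ²λ²/2. The identity Ψ_θ(λ) = Ψ(λ + θ) − Ψ(θ) then says that the tilt only shifts the drift to α + θσ², which is the mechanism behind α_θ → −∞ when σ² > 0. A numerical check of this (names ours):

```python
def psi(lam, alpha, sigma):
    """Laplace exponent of a Brownian motion with drift (no jumps):
    Psi(l) = alpha*l + sigma^2 * l^2 / 2."""
    return alpha * lam + 0.5 * sigma ** 2 * lam ** 2

def psi_tilted(lam, alpha, sigma, theta):
    """Esscher-tilted exponent Psi_theta(l) = Psi(l + theta) - Psi(theta)."""
    return psi(lam + theta, alpha, sigma) - psi(theta, alpha, sigma)

# The tilted exponent equals that of a Brownian motion whose drift is
# alpha + theta*sigma^2, with the same sigma.
for lam in (-1.0, 0.5, 2.0):
    for theta in (-3.0, 0.0, 1.5):
        lhs = psi_tilted(lam, 1.0, 2.0, theta)
        rhs = psi(lam, 1.0 + theta * 4.0, 2.0)
        assert abs(lhs - rhs) < 1e-9
```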
2.1. The spectrally positive case. We now focus on the spectrally positive case, which corresponds to π being concentrated on [0, 1]. When X is spectrally positive, a general result of Fristedt implies that lim sup_{t→0} |X_t|/t = ∞; cf. the proof of part A of Theorem 1 in [Fri72]. Since X_t/t is a reverse martingale with no positive jumps (as t decreases) which does not converge (by the preceding observation), Proposition 7.19 in [Kal02] tells us that D̄(X) = ∞.
To prove that D̲(X) = −∞, we use the following result of Sato for spectrally positive Lévy processes.

Lemma 6. If X is a spectrally positive Lévy process of infinite variation with parameters (α, σ, π), with jumps bounded by 1 and α = E(X_1) ≤ 0, then P(X_t ≤ 0) ≥ 1/16 for all t ≥ 0. Also, for any deterministic sequence t_n → 0, lim inf_n X_{t_n}/t_n ≤ 0 almost surely.

The first statement is Proposition 46.8 in [Sat99]; the proof is simple and based on inequalities of Paley-Zygmund type (applied to exponentials of X) and on properties of the Laplace exponent. It should be noted that the statement cannot hold for all α (consider α → ∞, so that P(X_t ≤ 0) → 0), but the proof is valid whenever α ≤ 0. The second statement is found in the penultimate paragraph of the proof of Theorem 47.1 in [Sat99] and basically follows from the Borel-Cantelli lemma and the first part of Lemma 6; that proof again only uses that α ≤ 0, so it remains valid here. We re-prove the lemma in the EI setting using Lemma 11 below.
Proof of Theorem 1 for totally asymmetric Lévy processes. As before, we restrict ourselves to the spectrally positive case with jumps bounded by 1.
Note in particular that the above result implies that D̲(X) ≤ 0 for any spectrally positive Lévy process as in its statement. On the other hand, by absolute continuity, D̲(X) takes the same constant value both under P and under Q. Hence, we can write D̲(X) = α_θ + D̲(X̃), where X̃ is a spectrally positive Lévy process of characteristics (0, σ, π_θ). By the preceding lemma, D̲(X̃) ≤ 0. As we remarked, α_θ → −∞ as θ → −∞, so that D̲(X) = −∞. We deduce that D̲(X + α Id) = −∞ for all α ∈ R.
2.2. The general case. Let X be a Lévy process of infinite variation and bounded jumps. It suffices to prove D̲(X) = −∞ for any such process and then apply this to −X to conclude that also D̄(X) = ∞. Using the Lévy-Itô decomposition, we write X = X^neg + X^pos, where X^neg and X^pos are independent Lévy processes, X^neg is spectrally negative, and X^pos is spectrally positive. This can be achieved with E(X^pos_t) = 0, so that Lemma 6 applies. In particular, from Theorem 1 for totally asymmetric Lévy processes (proved in Subsection 2.1), D̲(X^neg) = −∞. Hence, there exists a random sequence V_n ↓ 0 such that X^neg_{V_n} ≤ −nV_n. Since X^neg is independent of X^pos and X^pos is spectrally positive, Lemma 6 implies that lim inf_{n→∞} X^pos_{V_n}/V_n ≤ 0. We then conclude that D̲(X) = −∞.

Dini derivatives of EI processes in the totally asymmetric case
In this section, we prove Theorem 1; it suffices to prove it for extremal EI processes and obtain the general case by mixing. For concreteness, we assume that X is extremal and has only positive jumps, so that β_i ≥ 0.
We first show that Dini derivatives are constant.

OSVALDO ANGTUNCIO HERNÁNDEZ AND GERÓNIMO URIBE BRAVO
Proof. Fix any k ∈ N, and define X^k by removing from X the jumps at times U_1, …, U_k. Hence, for any t < min{U_i : i ∈ [k]} we have X_t = X^k_t − t Σ_{i≤k} β_i, so that

(4) D̄(X) = D̄(X^k) − Σ_{i≤k} β_i.

Let G = ∩_{ε>0} σ(b_s : s ≤ ε).
The (local) absolute continuity of the Brownian bridge with respect to Brownian motion and the Blumenthal zero-one law for the latter imply that G is trivial. Let F_k = σ(U_k, U_{k+1}, …) and note that G is independent of (any σ-algebra, but in particular) F_k. As noted in the proof of [Ber96, Prop. I, §2.4], the argument for Kolmogorov's zero-one law tells us that ∩_k (G ∨ F_k) is trivial. Since the right-hand side of (4) is G ∨ F_k-measurable, we deduce that the left-hand side is ∩_k (G ∨ F_k)-measurable and therefore trivial. A similar argument works for the lower Dini derivative.
We will proceed as in the case of Lévy processes: we first give a change of measure for EI processes, analogous to the Esscher transform, which has the effect of transforming the drift and the jumps. As in the Lévy case, D̄(X) = ∞ follows from simple results in the literature. We then use martingale arguments to prove that D̲(X) ≤ 0. Finally, our change of measure will imply that D̲(X) = −∞.
Proposition 8 (Change of measure). Let X = (X_t, t ∈ [0, 1]) be an extremal EI process with (deterministic) characteristics (α, σ, β), defined on the probability space (Ω, F, P). Then

E(e^{θX_t}) = e^{αθt + θ²σ²t(1−t)/2} ∏_j [(1 − t)e^{−θβ_j t} + t e^{θβ_j(1−t)}] < ∞

for all t ∈ [0, 1] and θ ∈ R. Fix T ∈ (0, 1), θ ∈ R, and let F_t = σ(X_s : s ≤ t). Define Q on F_T by

dQ = e^{θX_T}/E(e^{θX_T}) dP.

Under Q, the stochastic process (X_t, t ≤ T) is an EI process of the form α_θ t/T + Y^θ_t, t ∈ [0, T], whose (random) characteristics (α_θ, σ_θ, β_θ) have the following law. Let (B_j) be independent Bernoulli random variables with parameter p_j given by

(5) p_j = T e^{θβ_j} / (T e^{θβ_j} + 1 − T).

Then

α_θ = αT + θσ²T(1 − T) + Σ_j β_j [B_j − T] and β_θ = (β_j B_j, j ∈ N),

where Σ_j β_j [B_j − T] converges almost surely and in L¹. If α = 0 and either σ > 0 or Σ_i 1_{β_i≠0} = ∞, then E(e^{λX_t}) → ∞ as λ → ∞.

Remark. Typically, the random parameters also depend on T. Since T is fixed throughout, this dependence is not made explicit.
Remark. When X is a finite variation EI process, driven by the two parameters (α̃, β) (rather than (α, 0, β)), as explained in Section 1, then X under Q is also of finite variation and is driven by the two parameters (α̃T, β_θ). Hence D̄(X^θ) = D̲(X^θ) = α̃, which does not depend on θ; the interpretation is that, in this case, the change of measure does not add a drift. If we choose not to reparametrize with (α̃T, β_θ), the finite variation case is characterized by the fact that α_θ is bounded in θ.
On the other hand, if X is of infinite variation, we shall see that α_θ stochastically increases from −∞ to ∞.
Proof. Let us begin with the finiteness of the moment generating function of X. Use the canonical representation X_t = αt + σb_t + Σ_j β_j [1_{U_j ≤ t} − t] and define

φ(t, θ, β_j) = e^{−θβ_j t}(1 − t) + e^{θβ_j(1−t)} t.
Hölder's inequality implies the log-convexity of θ ↦ E(e^{θX_t}). To prove uniform integrability of (exp(θ Σ_{j≤n} β_j [1_{U_j≤t} − t]))_n, note that this sequence is bounded in L², since ∏_{j=1}^∞ φ(t, 2θ, β_j) < ∞. This, together with the convergence of Σ_{j≤n} β_j [1_{U_j≤t} − t] to X_t − αt − σb_t and the independence of the latter from b, implies the stated infinite product formula for the moment generating function of X_t.
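The convergence of the infinite product can be checked numerically (names ours): each factor satisfies φ(t, θ, β) = 1 + θ²β²t(1 − t)/2 + O(β³) as β → 0, so the product ∏_j φ(t, θ, β_j) converges exactly when Σ_j β_j² < ∞.

```python
import math

def phi(t, theta, beta):
    """MGF factor of a single compensated jump:
    E exp(theta*beta*(1{U<=t} - t)) = (1-t)e^{-theta*beta*t} + t e^{theta*beta*(1-t)}."""
    return (1 - t) * math.exp(-theta * beta * t) + t * math.exp(theta * beta * (1 - t))

# Second-order behaviour: the first-order terms in beta cancel (the jumps are
# compensated), leaving 1 + theta^2 beta^2 t(1-t)/2 + O(beta^3).
t, theta = 0.3, 1.7
for beta in (1e-2, 1e-3):
    approx = 1 + theta ** 2 * beta ** 2 * t * (1 - t) / 2
    assert abs(phi(t, theta, beta) - approx) < 10 * beta ** 3
```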
Consider now a sequence V = (V_j) of independent uniform (0, 1) random variables which is independent of b and U = (U_j), and define B_j = 1_{V_j ≤ p_j}. Note that obviously Σ_j β_j² B_j² < ∞. Regarding α_θ, we use the Kolmogorov three-series theorem. Indeed, since the sequence (β_j) is bounded, so is the sequence (β_j [B_j − T]). On the other hand, the means and variances of the summands are summable. Finally, we see that the series Σ_j β_j [B_j − T] converges almost surely and in L¹.

Consider the EI process X^θ on [0, T], given in the statement of the proposition, with random characteristics (α_θ, σ_θ, β_θ). We finish the proof by comparing, through moment generating functions, the finite-dimensional distributions of the increments of X under Q and of X^θ under P.
First of all, by independence of U and b, since the law of b under e^{θb_T}·P equals that of b + σθ Id (as can be proved through the Gaussian character of b), it suffices to prove the theorem when σ = 0. Since α is deterministic, it also suffices to consider α = 0.
Let 0 = t_0 ≤ t_1 ≤ ··· ≤ t_n = T and λ_1, …, λ_n ∈ R. Using arguments similar to those justifying the exchange of expectation and infinite products in the computation of the generating function, we first see that the Laplace transform of (X_{t_1}, …, X_{t_n}) is finite under Q. Denoting by E_Q the expectation under Q and by E_P the expectation under P, and by considering the interval [t_{i−1}, t_i) of the partition on which U_j falls, together with the definition of p_j in (5), we can compute this Laplace transform factor by factor. Recall that α and σ are zero in the definition of α_θ and X^θ. On the other hand, using the definition of X^θ, we can use the distributional assumptions on B and U (first independence, then conditioning on B, and finally considering the interval [t_{i−1}, t_i) on which U_j falls) to obtain the corresponding expression for X^θ.
The preceding equation shows that the increments of the left-hand side have the same law as the corresponding increments of β_j [1_{U_j≤·} − ·] under Q for every j. Thus, similarly to the formula for the Laplace transform of X_t just obtained, the finite-dimensional distributions of X under Q and of X^θ under P are the same. The last part of the statement follows from the fact that if ξ is any random variable on R with finite moment generating function g, then g(λ) → ∞ as λ → ∞ whenever P(ξ > 0) > 0. The hypotheses on X are chosen so that P(X_t > 0) > 0 for all t ∈ (0, 1). Indeed, when α = 0, (X_t/(1 − t), t < 1) is a martingale. The assumption P(X_t > 0) = 0 would imply E(X_t) = 0, which then gives P(X_t = 0) = 1, whereas our hypotheses imply that the law of X_t has no atoms for t ∈ (0, 1), as shown in the proof of Lemma 1.2 in [Kni96].
We now consider the behavior of the drift α_θ as a function of θ.
Proof. We have already proved that α_θ is a convergent series (plus a couple of deterministic terms); the series is absolutely divergent in the infinite variation case and otherwise absolutely convergent. Using our explicit construction of the random variables B_j as 1_{V_j ≤ p_j} and the definition of p_j in (5), we note that the variables β_j [B_j − T] are increasing in θ, which implies the same for α_θ.
Recall that α_θ is (modulo a constant) a series of independent random variables taking two values, whose means and variances are summable. Hence α_θ ∈ L¹, and E(α_θ) is, modulo the same constant, the sum of the means E(β_j [B_j − T]) = β_j (p_j − T). The summands are O(θβ_j²), uniformly for θ on compact sets. This implies the continuity of θ ↦ E(α_θ). But the mapping

θ ↦ β_j (e^{θβ_j} − 1)/(T e^{θβ_j} + (1 − T))

is strictly increasing, and monotone convergence implies the same for θ ↦ E(α_θ). Finally, note that the preceding function of θ converges to β_j/T as θ → ∞ and to −β_j/(1 − T) as θ → −∞. When X is of infinite variation, Fatou's lemma can be applied to the series for E(α_θ), as the summands in its definition are either all positive or all negative, and we conclude that E(α_θ) → ±∞ as θ → ±∞.
We now give a version of Lemma 6 for EI processes, as well as a simple lemma which uses it.

Lemma 10. Let X be an extremal EI process with parameters (α, σ, β), where α ≤ 0 and β_i ≥ 0 for all i. Then P(X_t ≤ 0) ≥ 1/16 for every t ∈ [0, 1/2].
Proof. Assume we have proved that

(6) E(e^{2λX_t}) ≤ E(e^{λX_t})⁴ for λ ≤ 0 and t ≤ 1/2.

Since e^{λX_t} ≤ 1 on {X_t > 0} when λ ≤ 0, we have E(e^{λX_t} 1_{X_t ≤ 0}) ≥ E(e^{λX_t}) − 1, and then the Cauchy-Schwarz inequality would give

P(X_t ≤ 0) ≥ (E(e^{λX_t}) − 1)²/E(e^{2λX_t}) ≥ (E(e^{λX_t}) − 1)²/E(e^{λX_t})⁴ for λ ≤ 0.

By Proposition 8 we can choose λ_t such that E(e^{λ_t X_t}) = 2, which implies that P(X_t ≤ 0) ≥ 1/16. Now, let us prove (6). First note that (6) is an equality for a Brownian bridge. Hence, by the independence of the latter from the (purely discontinuous) jump part of X, it is enough to consider the case σ = 0. Defining φ_j(λ) := ln E(e^{λβ_j [1_{U_j≤t} − t]}) =: ln ψ_j(λ), it is enough to prove, for every j ∈ N, that

(7) 0 ≤ 4φ_j(λ) − φ_j(2λ) for λ ≤ 0.
Since the right-hand side of (7) vanishes at λ = 0, it suffices to prove that its derivative is non-positive on (−∞, 0]. Taking the derivative with respect to λ and dividing by the positive factor 2β_j t(1 − t), we need to prove that

0 ≥ 2(e^{λβ_j} − 1)/(te^{λβ_j} + 1 − t) − (e^{2λβ_j} − 1)/(te^{2λβ_j} + 1 − t),

which is further equivalent, since the denominators are positive by convexity of the exponential function, to

1 − t + (t − 2)e^{λβ_j} + (1 + t)e^{2λβ_j} − te^{3λβ_j} ≥ 0.
As before, the left-hand side vanishes at λ = 0; hence, it suffices to prove that its derivative is non-positive on (−∞, 0]. We apply an analogous reasoning (evaluation at λ = 0, differentiation, and division by the positive factor β_j e^{λβ_j}) twice more. The sequence of derivatives, after taking out the factor β_j e^{λβ_j}, is

t − 2 + 2(1 + t)e^{λβ_j}, − 3te^{2λβ_j}, then 2(1 + t) − 6te^{λβ_j}, and finally −6tβ_j e^{λβ_j}.

The last expression is non-positive, so the penultimate function is decreasing in λ; since it equals 2 − 4t ≥ 0 at λ = 0 when t ∈ [0, 1/2], it is non-negative on (−∞, 0]. Bootstrapping back: the first function in the display vanishes at λ = 0 and has non-negative derivative on (−∞, 0], so it is non-positive there; this makes the previous left-hand side non-increasing on (−∞, 0] and hence non-negative there, which proves inequality (7).
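As an independent check (notation ours, with u = e^{λβ_j}): the polynomial 1 − t + (t − 2)u + (1 + t)u² − tu³ factors as −(u − 1)²(t(u + 1) − 1), which is non-negative precisely when t(u + 1) ≤ 1; in particular it is non-negative for t ≤ 1/2 and λ ≤ 0, where u ≤ 1.

```python
import math

def F(t, lam, beta):
    """The polynomial from the proof: 1-t+(t-2)u+(1+t)u^2-t*u^3, u = e^{lam*beta}."""
    u = math.exp(lam * beta)
    return 1 - t + (t - 2) * u + (1 + t) * u ** 2 - t * u ** 3

def F_factored(t, lam, beta):
    """Equivalent factorization -(u-1)^2 * (t*(u+1)-1)."""
    u = math.exp(lam * beta)
    return -((u - 1) ** 2) * (t * (u + 1) - 1)

for t in (0.0, 0.2, 0.5):
    for lam in (-5.0, -1.0, -0.1, 0.0):
        for beta in (0.5, 1.0, 3.0):
            assert abs(F(t, lam, beta) - F_factored(t, lam, beta)) < 1e-9
            assert F(t, lam, beta) >= -1e-12  # non-negative for lam <= 0, t <= 1/2
```

The factorization also shows the restriction to t ≤ 1/2 is essential: at t > 1/2 the sign can fail already near λ = 0.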
The choice of λ_t such that E(e^{λ_t X_t}) = 2 in the preceding proof may seem arbitrary. The reader can check that it gives the best bound obtainable by this method.
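Indeed, if E(e^{λX_t}) = m, the method yields a bound of the form (m − 1)²/m⁴ (our reading of the proof), and this function of m is maximized at m = 2, where it equals 1/16. A quick scan (names ours):

```python
def bound(m):
    """The lower bound (m-1)^2 / m^4 obtained when E e^{lambda X_t} = m."""
    return (m - 1) ** 2 / m ** 4

# scan m over (1, 6): the maximum sits at m = 2 with value 1/16
grid = [1 + k / 1000 for k in range(1, 5000)]
best = max(grid, key=bound)
assert abs(best - 2.0) < 1e-3
assert abs(bound(2.0) - 1 / 16) < 1e-12
```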
Lemma 11. Let X = (X_t, t ≥ 0) be a càdlàg process such that, for some sequence t_n ↓ 0, the random variable lim inf_n X_{t_n}/t_n is constant. Assume that for some ε, c > 0 we have P(X_t ≤ 0) > c for every t ∈ [0, ε]. Then lim inf_n X_{t_n}/t_n ≤ 0.
Remark. Note that the above can be applied when the augmented initial σ-algebra of X, given by ∩_{ε>0} σ(X_s : s ≤ ε), is trivial (hence for Feller processes), or in the case of extremal EI processes by mimicking the proof of Proposition 7. In this case, we can apply the result to any sequence t_n ↓ 0.
Also, by mixing, we deduce from the above two lemmas that if X is an EI process with random characteristics (α, σ, β), where α ≤ 0 and β_i ≥ 0 almost surely, then lim inf_n X_{t_n}/t_n ≤ 0 almost surely for any sequence t_n ↓ 0.
Proof. We argue by contraposition: assume that lim inf_n X_{t_n}/t_n > 0 almost surely. Then

lim_{N→∞} P(X_{t_n} ≤ 0 for some n ≥ N) = P(X_{t_n} ≤ 0 infinitely often) ≤ P(lim inf_n X_{t_n}/t_n ≤ 0) = 0,

so that P(X_{t_n} ≤ 0) → 0, contradicting the assumed lower bound.
We are now ready to prove Theorem 1 in the case of totally asymmetric EI processes. The proof is similar to the one for (spectrally positive) Lévy processes given in Section 2.
Since Σ_i β_i = ∞ and β_i ↓ 0, for any ε, κ > 0 we get

Σ_i P(ΔX_{U_i} ≥ κU_i, U_i ≤ ε) = Σ_i (β_i/κ) ∧ ε = ∞.

The Borel-Cantelli lemma (for independent events) implies that, for all κ > 0, almost surely, for every ε > 0, both ΔX_{U_i} ≥ κU_i and U_i ≤ ε hold infinitely often. It follows that, almost surely, for any ε > 0, for infinitely many t ∈ (0, ε) we have |ΔX_t| > κt, which in turn implies |X_t| > κt/2 or |X_{t−}| > κt/2 for any such t. Since κ was arbitrary, lim sup_{t↓0} |X_t|/t = ∞.

Note that (X_t/t, t ∈ (0, 1]) is a backward martingale (here it is important that we restrict ourselves to the case α = 0) without positive jumps as t decreases, and that it does not converge, thanks to the preceding paragraph. The process N = (−X_{−t}/t, t ∈ [−1, 0)) is therefore a martingale without positive jumps which diverges almost surely. If we define τ_c as the first time N reaches c ∈ R₊, then (c − N_{t∧τ_c}, −1 ≤ t < 0) is a non-negative martingale. By the martingale convergence theorem, N_{·∧τ_c} converges a.s. to a finite limit as t ↑ 0. If τ_c were infinite, N itself would converge; hence τ_c is almost surely finite. Since c was arbitrary, lim sup_{t↑0} N_t = ∞, which implies that lim sup_{t↓0} X_t/t = ∞. (The above argument was taken from [Ber01] and [Fri72].)

We have proved that D̄(X) = ∞ for any extremal EI process with parameters (α, 0, β) of infinite variation when β_i ≥ 0 for all i. Taking mixtures, we can let α and β be random, as long as Σ_i β_i = ∞ almost surely. We use this remark in the following paragraph.
We now prove that DX "´8. We now apply a change of measure (through Proposition 8) to X; call the resulting measure Q θ to stress the dependence on θ. Write α θ Id {T`Y θ for the (random parameter) EI process whose law is Q θ . Recall from Proposition 7 that DX is a constant. Since Q θ is absolutely continuous with respect to P, then DpXq " α θ {T`D`Y θ˘. Even if Y θ has random parameters, the jumps are almost surely positive. The remark following Lemmas 10 and 11 implies that D`Y θ˘ď 0 almost surely. Taking expectations we see that as θ Ñ´8, since E`α θ {T˘Ñ´8 by Proposition 9.
Proof of Theorem 1. As before, assume that $\alpha = 0$ and focus only on the statement $\underline{D}(X) = -\infty$, since we can at the end apply it to $-X$. Write $X = X^{\mathrm{pos}} + X^{\mathrm{neg}}$, where $X^{\mathrm{pos}}$ and $X^{\mathrm{neg}}$ are independent extremal EI processes with parameters $(0, \sigma, \beta^{\mathrm{pos}})$ and $(0, 0, \beta^{\mathrm{neg}})$, where $\beta^{\mathrm{pos}}$ consists of the positive terms of $\beta$ and $\beta^{\mathrm{neg}}$ of the negative ones. We have proved Theorem 1 for $X^{\mathrm{pos}}$ and for $X^{\mathrm{neg}}$ if they are of infinite variation. If one of them is of finite variation, then the other one must be of infinite variation, and then Theorem 1 holds for $X$. Hence, we can assume that both $X^{\mathrm{pos}}$ and $X^{\mathrm{neg}}$ are of infinite variation.
But then, there exists a random sequence $T_n \downarrow 0$ such that $X^{\mathrm{neg}}_{T_n}/T_n \to -\infty$, thanks to Theorem 1 for spectrally negative EI processes (just proved). Since $(T_n)$ is independent of $X^{\mathrm{pos}}$, we can apply Lemmas 10 and 11 to conclude that $\liminf_n X^{\mathrm{pos}}_{T_n}/T_n \le 0$. We conclude that $\underline{D}(X) \le \liminf_n X_{T_n}/T_n = -\infty$.
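The conclusion of the proof can be condensed into one display (our summary, with $T_n \downarrow 0$ the random sequence above, independent of $X^{\mathrm{pos}}$):

```latex
\underline{D}(X)
\;\le\; \liminf_{n} \frac{X_{T_n}}{T_n}
\;\le\; \liminf_{n} \frac{X^{\mathrm{pos}}_{T_n}}{T_n}
      + \lim_{n} \frac{X^{\mathrm{neg}}_{T_n}}{T_n}
\;=\; -\infty ,
% since the first lim inf is at most 0 and the last limit is -\infty.
```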

4. Further applications
We now move on to the applications of Theorem 1, which were stated as Theorems 2, 3, and 4. We already mentioned that Theorem 3 follows from the same arguments as in [CUB15] once we have Theorem 1. Again, it suffices to prove the theorems for extremal EI processes.

4.1. An extension of Millar's zero-one law at the minimum.
To prove Theorem 2, we will use a representation of the post minimum process associated to an EI process found in [Ber93], which is now recalled.
Let $X$ be an extremal EI process with parameters $(\alpha, \sigma, \beta)$. According to [Kal05, Chapter 2], such a process is a semimartingale. Let $\tau = \sup\{t \in [0,1] : \underline{X}_1 = X_t \wedge X_{t-}\}$ be the time of the ultimate infimum. Define the post-infimum process $\overrightarrow{X}$ (with values in $\mathbb{R}$ extended by a cemetery state) and the reversed pre-infimum process $\overleftarrow{X}$. We introduce two processes, $X^\uparrow$ and $X^\downarrow$, as in [Ber93]. Since $X$ is a semimartingale, it has a semimartingale local time at zero, denoted $L$; this local time is actually zero unless $\sigma > 0$. Consider the times $A^+_t$ and $A^-_t$ spent in $(0, \infty)$ and $(-\infty, 0]$ up to time $t$ by $X$, and consider also their right-continuous inverses $\alpha^\pm(t) = \inf\{s : A^\pm_s > t\}$. It can be seen from a picture that applying the time change $\alpha^+$ to $X$ consists of erasing the jumps of $X$ that fall in $(-\infty, 0]$ and closing up the gaps (similarly for $\alpha^-$). The process $X^\uparrow$ is the juxtaposition of the excursions in $(0, \infty)$; we remark that such an excursion includes the possible initial positive jump across $0$ and excludes the possible ultimate negative jump across $0$. The process $X^\downarrow$ is the juxtaposition of the excursions in $(-\infty, 0]$. By establishing a bijection for discrete-time EI processes and passing to the limit, Bertoin obtains the following result.
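The displayed formulas behind these definitions can plausibly be reconstructed as follows; this is our sketch in the spirit of [Ber93], and the symbol $\dagger$ for the cemetery state is our choice of notation:

```latex
\overrightarrow{X}_t =
  \begin{cases}
    X_{\tau + t} - \underline{X}_1 , & 0 \le t \le 1 - \tau ,\\
    \dagger , & t > 1 - \tau ,
  \end{cases}
\qquad
\overleftarrow{X}_t =
  \begin{cases}
    X_{(\tau - t)-} - \underline{X}_1 , & 0 \le t \le \tau ,\\
    \dagger , & t > \tau ,
  \end{cases}
\\[1em]
A^{+}_t = \int_0^t \mathbf{1}_{\{X_s > 0\}}\, ds , \qquad
A^{-}_t = \int_0^t \mathbf{1}_{\{X_s \le 0\}}\, ds , \qquad
\alpha^{\pm}(t) = \inf\{ s \ge 0 : A^{\pm}_s > t \} .
```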
Theorem 12 (Theorem 3.1 in [Ber93]). Let $X$ be an extremal EI process on $[0,1]$ with parameters $(\alpha, \sigma, \beta)$. Then, the pair $(X^\downarrow, X^\uparrow)$ has the same law as the pair $(\overleftarrow{X}, \overrightarrow{X})$. We first establish the following simple result for EI processes.
Lemma 13. When $X$ satisfies UM, we have $\mathbb{P}\big(X_{U_j} = 0 \text{ or } X_{U_j-} = 0\big) = 0$ for every $j \in \mathbb{N}$.
Proof. From the proof of Lemma 1.2 in [Kni96], we see that UM implies that $\mathbb{P}(X_t = x) = 0$ for any $t \in (0,1)$ and $x \in \mathbb{R}$. Fix any $j \in \mathbb{N}$ and define $X^j_t = X_t - \beta_j(\mathbf{1}_{\{U_j \le t\}} - t)$, an EI process independent of $U_j$; conditioning on $U_j$ then shows that $\mathbb{P}(X_{U_j} = 0) = 0$. The statement for the left limit follows by time-reversibility.
Proof of Theorem 2. Let $X$ be an EI process satisfying UM. The previous lemma tells us that $X$ does not jump into or from $0$. Assume that $X$ is irregular upward. Then, $X$ remains negative up to the time $\tau_0 = \inf\{t > 0 : X_t > 0\}$, which is strictly positive. We actually have $\tau_0 < 1$ since otherwise the only positive value that $X$ has to take, by assumption, would be taken at time $1$ by a jump, and $X$ does not jump at $1$. The trajectory of $X$ up to $\tau_0$ might comprise several excursions below zero, and we will be interested in the first one, which ends at the random time $T = \min\{t > 0 : X_t \ge 0\} \le \tau_0$. Recall the definition of $\tau$ as the time of the last minimum. Let us prove claim (8), in which the equality in distribution holds by Theorem 12. Since $X$ does not jump into or from $0$, $\Delta X_{\tau_0} = 0$ if and only if $X_{\tau_0} = 0$ and $X$ is continuous at $\tau_0$. Assume that $\Delta X_{\tau_0} = 0$ has positive probability. Then the reversed pre-infimum process would hit zero twice (and the process $X$ would hit its infimum twice); this is impossible under UM. Indeed, note that $X = X^\downarrow$ on $[0, T]$ and that, from the construction of the pre-minimum process in Theorem 12, $T$ has the same distribution as $\tau - S$, where $S = \sup\{s < \tau : X_{s-} = \underline{X}_1\}$.
When ΔX τ0 " 0, then T ą 0 and X T " 0. Hence, with positive probability, we would have that S ă τ and X S´" X 1 , so that the minimum of X is reached at least twice. The contradiction follows from negating (8), which proves its validity. Conversely, assume X jumps from its infimum with positive probability. Then equation (8) holds true (though only with positive probability). Since X is continuous at zero, τ0 P p0, 1q, which implies that X is in p´8, 0s on p0, τ0 q. This means X is irregular upward with positive probability. Being irregular upward is a tail event for the uniform random variables defining X; therefore, its probability is zero or one.
Using similar arguments, we can prove that $X$ is irregular downward if and only if $X$ jumps to its infimum.
Finally, Theorem 1 shows that $X$ is both regular upward and downward when it is of infinite variation, so that $X$ reaches its minimum continuously.
4.2. EI processes conditioned to remain positive. The aim of this subsection is to prove Theorem 3. As before, let $X$ be an extremal EI process with parameters $(0, \sigma, \beta)$. Assume that $X$ is both upward and downward regular. Since $\alpha = 0$, $X$ has either infinite activity ($\sum_i \mathbf{1}_{\{\beta_i \neq 0\}} = \infty$) or a Gaussian component ($\sigma > 0$); otherwise, $X$ would have piecewise linear trajectories with the same slope $\sum_i \beta_i$, and then $X$ would be neither upward nor downward regular. Hence, $X$ satisfies UM; let $\rho$ be the unique time at which $X$ reaches its minimum. Theorem 2 tells us that $X$ is continuous at $\rho$. Corollary 3.1 in [CUB15] tells us that, under these hypotheses, the law of $X$ conditioned to remain above $-\varepsilon$ converges weakly, as $\varepsilon \downarrow 0$, to the Vervaat transform of $X$, given by $X_{\rho + \cdot} - X_\rho$. What was needed in the above cited corollary were conditions that would allow one to apply it, and we have identified them in terms of regularity of both half-lines. In the particular case when $X$ is of infinite variation, Theorem 1 tells us that $X$ is both upward and downward regular, and therefore the conclusion of Theorem 3 holds.
The reader might wonder why we had to impose $\alpha = 0$. The reference [CUB15] has a description of what could be the limit when $\alpha > 0$ and $\beta = 0$ (that is, for a Brownian bridge from $0$ to $\alpha$). The candidate for a limit is described as a random shift, just as the Vervaat transformation in the case $\alpha = 0$, but it needs a bicontinuous family of (non-zero!) local times in its definition. Defining such a process for an EI process is an open problem; semimartingale local times are non-zero only when $\sigma > 0$, so a different approach is needed. Note that a limit theorem is not provided in [CUB15].
4.3. The convex minorant of EI processes. Let $X$ be an extremal EI process with parameters $(\alpha, \sigma, \beta)$.
To prove Theorem 4, we will rely strongly on [PUB12]. First, we establish some basic properties of the convex minorant, in analogy with [PUB12, Prop. 1]. They will be fundamental in applying a transformation in Skorohod space which is continuous on paths satisfying the conclusions of the following proposition.

Proposition 14. Assume that $X$ satisfies NPL, and let $C$ be the convex minorant of $X$. Then:
(1) The open set $O = \{t \in [0,1] : C_t < X_t \wedge X_{t-}\}$ has Lebesgue measure 1.
(2) For every connected component $(g, d)$ of $O$, $\Delta X_g \Delta X_d \ge 0$. If $X$ has infinite variation, then $\Delta X_g \Delta X_d = 0$.
(3) If $(g_1, d_1)$ and $(g_2, d_2)$ are distinct connected components of $O$, then the corresponding slopes $(C_{d_1} - C_{g_1})/(d_1 - g_1)$ and $(C_{d_2} - C_{g_2})/(d_2 - g_2)$ are distinct.
The proof of the above proposition is almost the same as the corresponding one in [PUB12]; we just need to apply different results. For example, we use the fact that when $X$ has finite variation, $D(X) = \underline{D}(X) = \alpha$ (in the parametrization for this case), which is found in [Ber96, Prop. 4] and in [Kni96]; the latter also contains the fact that the minimum is reached at a unique place under NPL (which implies UM). One also needs our extensions of Millar's results stated in Theorem 2, as well as the behavior of the Dini derivatives at any jump time $U_i$ of $X$; this follows from Theorem 1 applied to $(X_t - \beta_i[\mathbf{1}_{\{U_i \le t\}} - t])$.
To prove Theorem 4, we will use the following path transformation that leaves the laws of EI processes invariant.
Theorem 15. Let $X$ be an extremal EI process with parameters $(\alpha, \sigma, \beta)$ satisfying NPL. Define its convex minorant $C$ and the open set of excursion intervals $O$ as before. Let $U$ be a uniform random variable on $(0,1)$ independent of $X$ and consider the connected component $(g, d)$ of $O$ that contains $U$. Define the 3214 transformation $X^U$ of $X$ by means of
$$X^U_t = \begin{cases} X_{U+t} - X_U, & 0 \le t < d - U, \\ C_d - C_g + X_{g + t - (d-U)} - X_U, & d - U \le t \le d - g, \\ C_d - C_g + X_{t - (d-g)}, & d - g \le t < d, \\ X_t, & d \le t \le 1. \end{cases}$$
Then, $(U, X) \overset{d}{=} (d - g, X^U)$.
Remark that U belongs almost surely to O, since the latter has Lebesgue measure 1 by Proposition 14.
The above path transformation can be understood as follows: the random variable $U$ is used to select a face of the convex minorant of $X$, with endpoints $g$ and $d$. This divides the trajectory into four parts, say 1, 2, 3, and 4, which are then rearranged as 3, 2, 1, 4. Parts 1 and 4 have the same convex minorant as $X$, with the selected face removed. Parts 3 and 2 are interpreted as an inverse Vervaat transformation; the original parts 2 and 3 can be recovered as the Vervaat transform of the Knight bridge of $X^U$ on $[0, d-g]$. One of the consequences is that $d - g$ has the same law as $U$, which is a remarkable universality result for exchangeable increment processes and is responsible for the stick-breaking process of Theorem 4; indeed, we just need to iterate the path transformation on parts 1 and 4 of the trajectory of $X^U$. Therefore, Theorem 4 follows from Theorem 15, whose proof, being very similar to the proof for Lévy processes of [PUB12], we now sketch. It is based on an analogous path transformation for discrete-time EI processes stated in [AP11, Thm. 8.1] or [APRUB11, Lem. 7]; the proof of the latter is by means of a bijection between permutations. To pass to the limit, one uses the continuity of the path transformation on Skorohod space whenever the trajectory satisfies the basic properties of Proposition 14; see Section 6.3 of [PUB12]. Continuity of the path transformation is much simpler when $X$ is of infinite variation, since then $X$ is continuous at $g$ and $d$; see Section 6.2 of [PUB12].
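As an aside, the uniform stick-breaking process that drives this iteration is elementary to simulate. The following Python sketch (our illustration; the function name `uniform_stick_breaking` is ours) generates the lengths $L_i = U_i (1 - U_1) \cdots (1 - U_{i-1})$ with $U_1, U_2, \dots$ i.i.d. uniform on $(0,1)$:

```python
import random

def uniform_stick_breaking(n_sticks, seed=0):
    """Return the first n_sticks lengths L_1, ..., L_n of a uniform
    stick-breaking process of [0, 1]:
    L_i = U_i * (1 - U_1) * ... * (1 - U_{i-1}),
    with U_1, U_2, ... i.i.d. uniform on (0, 1)."""
    rng = random.Random(seed)
    lengths, remaining = [], 1.0
    for _ in range(n_sticks):
        u = rng.random()
        lengths.append(u * remaining)   # break off a uniform fraction
        remaining *= 1.0 - u            # of whatever stick remains
    return lengths

lengths = uniform_stick_breaking(200)
assert all(l > 0 for l in lengths)        # every stick is non-trivial
assert abs(sum(lengths) - 1.0) < 1e-6     # the lengths exhaust [0, 1]
```

The partial sums $S_i = L_1 + \cdots + L_i$ then play the role of the face endpoints produced by iterating the transformation.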
We end the paper with an explanation of the distributional description of the maximum (or minimum, after multiplication by $-1$) of an EI process, which in discrete time is displayed in equation (2), and of how it proves the celebrated formula due to M. Kac, which in discrete time is equation (1). Indeed, note that the infimum $\underline{X}_1$ of $X$ on $[0,1]$ is the sum of the increments of the convex minorant that are negative. Thanks to Theorem 4, this gives us the equality in law
$$\underline{X}_1 \overset{d}{=} -\sum_{i \ge 1} \big[X_{S_i} - X_{S_{i-1}}\big]^-,$$
where $0 = S_0 < S_1 < \cdots$ are the partial sums of a uniform stick-breaking process $L$ independent of $X$.
Next, conditioning on the stick-breaking process $L$, we see that
$$\mathbb{E}\big(\underline{X}_1\big) = -\mathbb{E}\Big(\sum_{i \ge 1} f(S_{i-1}, S_i)\Big), \qquad \text{where } f(r,s) = \mathbb{E}\big[(X_s - X_r)^-\big] \text{ for } r < s.$$
However, exchangeability implies that $f(r,s) = f(0, s-r)$, so that
$$\mathbb{E}\big(\underline{X}_1\big) = -\mathbb{E}\Big(\sum_{i \ge 1} g(L_i)\Big), \qquad \text{where } g(l) = f(0, l).$$
Finally, recall that the uniform stick-breaking process is invariant under size-biased permutations; indeed, it is itself a size-biased permutation of a non-decreasing sequence. In particular, for non-negative $h$ we have $\mathbb{E}\big(\sum_i L_i h(L_i)\big) = \mathbb{E}\big(h(L_1)\big)$, where $L_1$ is uniform on $(0,1)$. Applying the above result to $h(l) = g(l)/l$ gives
$$\mathbb{E}\big(\underline{X}_1\big) = -\mathbb{E}\Big(\frac{g(L_1)}{L_1}\Big) = -\int_0^1 \frac{\mathbb{E}\big[X_u^-\big]}{u}\, du.$$
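For comparison, the discrete-time identity (1) can be verified by brute force. The following Python sketch (our illustration; the function name `kac_check` is ours) checks Kac's formula $\mathbb{E}\,\max_{0 \le k \le n} S_k = \sum_{k=1}^{n} \mathbb{E}[S_k^+]/k$ exactly for the simple random walk with $\pm 1$ steps, by enumerating all $2^n$ paths with rational arithmetic:

```python
from fractions import Fraction
from itertools import product

def kac_check(n):
    """Verify E[max_{0<=k<=n} S_k] == sum_{k=1}^n E[S_k^+] / k for the
    simple random walk with steps +-1, by exhaustive enumeration."""
    p = Fraction(1, 2 ** n)            # each path is equally likely
    lhs = Fraction(0)                  # accumulates E[max_{0<=k<=n} S_k]
    epos = [Fraction(0)] * (n + 1)     # epos[k] accumulates E[S_k^+]
    for steps in product((-1, 1), repeat=n):
        s, m = 0, 0                    # running sum and running maximum (S_0 = 0)
        for k, x in enumerate(steps, start=1):
            s += x
            m = max(m, s)
            epos[k] += p * max(s, 0)
        lhs += p * m
    rhs = sum(epos[k] / k for k in range(1, n + 1))
    return lhs, rhs

lhs, rhs = kac_check(8)
assert lhs == rhs
```

Since the identity only uses exchangeability of the increments, the same check passes for any exchangeable step distribution supported on finitely many values.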