The fundamental theorem of asset pricing under transaction costs

Finance and Stochastics

Abstract

This paper proves the fundamental theorem of asset pricing with transaction costs, when bid and ask prices follow locally bounded càdlàg (right-continuous, left-limited) processes.

The robust no free lunch with vanishing risk condition (RNFLVR) for simple strategies is equivalent to the existence of a strictly consistent price system (SCPS). This result relies on a new notion of admissibility, which reflects future liquidation opportunities. The RNFLVR condition implies that admissible strategies are predictable processes of finite variation.

The Appendix develops an extension of the familiar Stieltjes integral for càdlàg integrands and finite-variation integrators, which is central to modelling transaction costs with discontinuous prices.


Notes

  1. Henceforth, θ_{τ−}, θ_τ, and θ_{τ+} denote respectively the left limit, the value, and the right limit of θ at τ.

  2. Lenglart [27] and, independently, Galtchouk [12, 13] offer another definition of an integral with respect to a finite-variation process, whereby \(\int_{0}^{t} S\, d\theta=\int_{0}^{t} S\,d\theta_{+} - S_{t} (\theta _{t+}-\theta_{t})\). Their definition differs from the one introduced here, and does not fit the interpretation of ∫S dθ as a cost process. Take for example S = 1_{[0,1)} + 2·1_{[1,∞)} and θ = 1_{〚1〛}. The asset jumps at time 1, and the strategy buys a share immediately before the jump, only to sell it right after the jump; hence the cost \(\int_{0}^{t} S\,d\theta\) should equal −1 for t>1, reflecting a gain of 1. Yet, under their definition, \(\int_{0}^{t} S\,d\theta\) is zero. By contrast, the predictable Stieltjes integral developed in this paper yields ∫S dθ = 1_{〚1〛} − 1_{(1,∞)}, in accordance with accounting rules.

  3. Note that in the present paper there are also transaction costs, represented by the process κ; hence the additional cost term \(\int_{[0,T]} \kappa_u \,d\|\theta\|_u\) appears in Definition 4.4. Recall that \(\int_{[0,T]} S_u \,d\theta_u\) and \(\int_{[0,T]} \kappa_u \,d\|\theta\|_u\) together represent the cost of trading with strategy θ on the interval [0,T]; cf. Definition 4.4.

  4. The clock σ_n ticks each time the total variation ‖θ‖ increases by at least ε_m, stopping at ρ_m.

  5. Set C_0=∅ and \(\pi'_{n}=\inf\{ t: (\omega,t)\in B\setminus C_{n}\}\), which is predictable because B∖C_n is predictable, by Jacod and Shiryaev [18], I.2.38.

  6. Note that a pathwise application of Corollary A.14 does not prove Theorem A.10, which requires the predictability of θ″.

References

  1. Black, F., Scholes, M.: The pricing of options and corporate liabilities. J. Polit. Econ. 81, 637–654 (1973)

  2. Campi, L., Schachermayer, W.: A super-replication theorem in Kabanov’s model of transaction costs. Finance Stoch. 10, 579–596 (2006)

  3. Cherny, A.: General arbitrage pricing model. II. Transaction costs. In: Séminaire de Probabilités XL. Lecture Notes in Math., vol. 1899, pp. 447–461. Springer, Berlin (2007)

  4. Choulli, T., Stricker, Ch.: Séparation d’une sur- et d’une sousmartingale par une martingale. In: Séminaire de Probabilités, XXXII. Lecture Notes in Math., vol. 1686, pp. 67–72. Springer, Berlin (1998)

  5. Dalang, R.C., Morton, A., Willinger, W.: Equivalent martingale measures and no-arbitrage in stochastic securities market models. Stoch. Stoch. Rep. 29, 185–201 (1990)

  6. Delbaen, F., Schachermayer, W.: A general version of the fundamental theorem of asset pricing. Math. Ann. 300, 463–520 (1994)

  7. Delbaen, F., Schachermayer, W.: The fundamental theorem of asset pricing for unbounded stochastic processes. Math. Ann. 312, 215–250 (1998)

  8. Delbaen, F., Schachermayer, W.: The Mathematics of Arbitrage. Springer Finance. Springer, Berlin (2006)

  9. Dellacherie, C., Meyer, P.-A.: Probabilities and Potential. North-Holland Mathematics Studies, vol. 29. North-Holland, Amsterdam (1978)

  10. Dellacherie, C., Meyer, P.-A.: Probabilities and Potential. B. North-Holland Mathematics Studies, vol. 72. North-Holland, Amsterdam (1982)

  11. Dybvig, P.H., Ross, S.A.: Arbitrage. The New Palgrave: A Dictionary of Economics 1, 100–106 (1987)

  12. Galtchouk, L.I.: Optional martingales. Math. USSR Sb. 40, 435–468 (1981)

  13. Galtchouk, L.I.: Stochastic integrals with respect to optional semimartingales and random measures. Teor. Veroâtn. Ee Primen. 29, 93–107 (1984)

  14. Guasoni, P.: Optimal investment with transaction costs and without semimartingales. Ann. Appl. Probab. 12, 1227–1246 (2002)

  15. Guasoni, P., Rásonyi, M., Schachermayer, W.: The fundamental theorem of asset pricing for continuous processes under small transaction costs. Ann. Finance 6, 157–191 (2010)

  16. Harrison, J.M., Kreps, D.M.: Martingales and arbitrage in multiperiod securities markets. J. Econ. Theory 20, 381–408 (1979)

  17. Harrison, J.M., Pliska, S.R.: Martingales and stochastic integrals in the theory of continuous trading. Stoch. Process. Appl. 11, 215–260 (1981)

  18. Jacod, J., Shiryaev, A.N.: Limit Theorems for Stochastic Processes, 2nd edn. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 288. Springer, Berlin (2003)

  19. Jouini, E., Kallal, H.: Martingales and arbitrage in securities markets with transaction costs. J. Econ. Theory 66, 178–197 (1995)

  20. Kabanov, Yu.M.: Hedging and liquidation under transaction costs in currency markets. Finance Stoch. 3, 237–248 (1999)

  21. Kabanov, Yu.M., Last, G.: Hedging under transaction costs in currency markets: a continuous-time model. Math. Finance 12, 63–70 (2002)

  22. Kabanov, Yu.M., Stricker, Ch.: The Harrison-Pliska arbitrage pricing theorem under transaction costs. J. Math. Econ. 35, 185–196 (2001)

  23. Kabanov, Yu.M., Stricker, Ch.: A teachers’ note on no-arbitrage criteria. In: Séminaire de Probabilités, XXXV. Lecture Notes in Math., vol. 1755, pp. 149–152. Springer, Berlin (2001)

  24. Kabanov, Yu.M., Rásonyi, M., Stricker, Ch.: No-arbitrage criteria for financial markets with efficient friction. Finance Stoch. 6, 371–382 (2002)

  25. Kramkov, D.O.: Optional decomposition of supermartingales and hedging contingent claims in incomplete security markets. Probab. Theory Relat. Fields 105, 459–479 (1996)

  26. Kreps, D.M.: Arbitrage and equilibrium in economies with infinitely many commodities. J. Math. Econ. 8, 15–35 (1981)

  27. Lenglart, E.: Tribus de Meyer et théorie des processus. In: Séminaire de Probabilités XIV. Lecture Notes in Math., vol. 784, pp. 500–546. Springer, Berlin (1980)

  28. Rásonyi, M.: A remark on the superhedging theorem under transaction costs. In: Séminaire de Probabilités XXXVII. Lecture Notes in Math., vol. 1832, pp. 394–398. Springer, Berlin (2003)

  29. Ross, S.A.: Return, risk and arbitrage. In: Friend, I., Bicksler, J. (eds.) Risk and Return in Finance. Ballinger, Cambridge (1977)

  30. Rudin, W.: Principles of Mathematical Analysis, 3rd edn. McGraw-Hill, New York (1976)

  31. Schachermayer, W.: Martingale measures for discrete-time processes with infinite horizon. Math. Finance 4, 25–55 (1994)

  32. Schachermayer, W.: The fundamental theorem of asset pricing under proportional transaction costs in finite discrete time. Math. Finance 14, 19–48 (2004)

  33. Yan, J.-A.: Caractérisation d’une classe d’ensembles convexes de L^1 ou H^1. In: Séminaire de Probabilités XIV. Lecture Notes in Math., vol. 784, pp. 220–222. Springer, Berlin (1980)

Acknowledgements

We are indebted to Walter Schachermayer for many stimulating discussions which inspired this paper. We also thank Friedrich Hubalek for useful comments, and Martin Schweizer for a careful reading, which helped correct some gaps in an earlier version.

Partially supported by NSF (DMS-0807994 and DMS-1109047), SFI (07/MI/008, 07/SK/M1189, 08/SRC/FMC1389), the ERC (278295), FP7 (RG-248896), the Hungarian Science Foundation (OTKA, F 049094), and the Austrian Science Fund (FWF, P 19456), at the Financial and Actuarial Mathematics Research Unit of Vienna University of Technology.

Corresponding author

Correspondence to Paolo Guasoni.

Appendices

Appendix A: Predictable Stieltjes integral

The usual Stieltjes integral ∫S dθ is well defined for continuous integrands S and integrators θ of finite variation. This section defines an extension of the Stieltjes integral to càdlàg integrands, which makes the integral operator continuous with respect to pointwise convergence of the integrators (see footnote 2).

The discussion is divided into three parts. The first subsection recalls some properties of predictable finite-variation processes, available in most textbooks under the extra assumption of càdlàg processes, relaxed here to làdlàg. The second subsection defines the integral and establishes its Lebesgue- and Fatou-type properties. Since the integral is defined pathwise, the first two subsections are probability-free, requiring only a filtered measurable space with a right-continuous filtration. Thus, the special case of a deterministic integrand and integrator is included in this setting. The third subsection establishes an approximation result for the predictable integral, which requires an underlying probability measure, since it relies on the decomposition of a stopping time into its accessible and totally inaccessible parts. This part requires a filtered probability space satisfying the usual conditions.

1.1 A.1 Predictable finite-variation processes

This subsection recalls some properties of finite-variation processes, dropping the extra assumption of right-continuity which appears in most textbooks.

Consider a measurable space (Ω,ℱ), endowed with a right-continuous filtration (ℱ_t)_{t≥0}. On the set Ω×ℝ_+, the predictable σ-algebra is generated by the class of adapted, left-continuous processes. Consider the class of predictable processes with pathwise finite variation, the class of adapted càdlàg processes, and the class of adapted làglàd (left- and right-limited) processes. Since a function of finite variation has right and left limits at all points, every predictable finite-variation process is làglàd. A sequence of processes X^n converges pointwise to a process X if X^n(ω,t)→X(ω,t) for all (ω,t)∈Ω×ℝ_+.

Proposition A.1

Let θ be a predictable process of finite variation, and let ‖θ‖ denote its pathwise total variation. Then:

  1. (i)

    ‖θ‖ is predictable;

  2. (ii)

    if (θ^n)_{n≥1} converges pointwise to θ, then

    $$ \|\theta\| \le\liminf_{n\rightarrow\infty}\bigl\|\theta^n\bigr\| \quad\mathit{pointwise}. $$

The proof requires a representation of the set of jumps of predictable finite-variation processes. The finite-variation property avoids the use of section theorems. Recall that a random set A ⊆ Ω×[0,∞) is thin if A⊆⋃_{n≥1}〚τ_n〛 for a sequence of stopping times (τ_n)_{n≥1}.

Lemma A.2

Let X be a predictable process of finite variation. Then the sets {X≠X_−} and {X≠X_+} are thin, and {X≠X_−} is predictable.

Proof

For n,p≥0, define the stopping times S n,p by induction as S n,0=0 and

$$ S_{n,p+1}= \begin{cases} \inf\{t>S_{n,p}:|X_t-X_{S_{n,p}+}|>2^{-n}\}&\text{for even}\ p,\\[1mm] \inf\{t\ge S_{n,p}:|X_t-X_{S_{n,p}}|>2^{-n}\}&\text{for odd}\ p. \end{cases} $$

By definition, S_{n,p}<S_{n,p+2}. Define also the random sets

$$ A_{n,p}= \begin{cases} \{X_{S_{n,p}}\neq X_{S_{n,p}-}, S_{n,p}<\infty\}&\text{for even}\ p,\\[1mm] \{X_{S_{n,p}+}\neq X_{S_{n,p}}, S_{n,p}<\infty\}&\text{for odd}\ p, \end{cases} $$

and observe that \(A_{n,p}\in\mathcal{F}_{S_{n,p}}\). This is obvious for p even, while for p odd the right-continuity of the filtration implies that \(X_{S_{n,p}+}\) is \(\mathcal{F}_{S_{n,p}}\)-measurable, whence the claim. Then, set \(T_{n,p}={S_{n,p}}1_{A_{n,p}}+\infty1_{\varOmega\setminus A_{n,p}}\), which is a stopping time since \(A_{n,p}\in\mathcal{F}_{S_{n,p}}\). As X has finite variation, lim_{p→∞} S_{n,p}=∞, and therefore

so that the sets {X≠X_+} and {X≠X_−} are thin. Also, X_− is left-continuous and adapted, hence predictable; since X is predictable too, the set {X≠X_−} is predictable. □

Proof of Proposition A.1

The proof of (i) is similar to the càdlàg case (Delbaen and Schachermayer [8], Theorem 12.2.1). By Lemma A.2, there is an exhausting sequence of stopping times (σ k ) k≥1 for the set of jumps of θ. For a fixed n≥1, denote by \(\varPi_{n}=(\tau_{k})_{0\le k\le N_{n}}\) the finite ordered sequence of stopping times obtained from the set \((k 2^{-n})_{0\le k\le2^{n} T}\cup(\sigma_{k})_{k\le n}\), and define the processes

$$ V_t(\theta,\varPi_n)=\sum_{k=1}^{N_n}|\theta_{\tau_{k}\wedge t}-\theta _{\tau_{k-1}\wedge t}|. $$
(A.1)

Then V(θ,Π n ) is predictable because θ is, and since V(θ,Π n ) converges to ∥θ∥ pointwise, also ∥θ∥ is predictable and (i) follows.

For (ii), denote by Π=(τ k )0≤kn a finite ordered sequence of stopping times, and define V(θ,Π) as in (A.1). The pointwise convergence of (θ n) n≥1 to θ implies that

$$ \lim_{n\rightarrow\infty}V_t \bigl(\theta^n,\varPi\bigr)=V_t(\theta,\varPi) \quad\text{for all}\ t,\varPi. $$

By (i) it follows that ∥θ t =sup Π V t (θ,Π), and therefore

$$ \liminf_{n\rightarrow\infty}\bigl\|\theta^n\bigr\|_t= \liminf_{n\rightarrow\infty}\sup_{\varPi} V_t\bigl(\theta^n, \varPi\bigr) \ge\sup_{\varPi}\lim_{n\rightarrow\infty}V_t\bigl( \theta^n,\varPi \bigr)=\sup_{\varPi} V_t(\theta, \varPi )=\|\theta\|_t, $$

which proves (ii). □

1.2 A.2 The predictable Stieltjes integral

This subsection defines the integral ∫S dθ for each càdlàg process S and each predictable process θ of finite variation.

Recall (cf. Rudin [30], Chap. 6) that the classical Stieltjes integral ∫fdg exists if and only if for all ε>0, there exists a δ>0 such that for any pair of partitions Γ,Γ′ with mesh smaller than δ, the corresponding Riemann sums differ by less than ε. Also (Rudin [30], Chap. 6, Exercise 3), if two functions f and g have a finite number of jump discontinuities, then the integrals ∫fdg and ∫gdf are defined if and only if at each (common) discontinuity point one function is right-continuous and the other is left-continuous.

Thus, because of overlapping discontinuities in θ and S, the integral ∫S dθ may not exist in the standard Stieltjes sense. A simple example demonstrates the problem.

Example A.3

Consider the deterministic S=θ=1_{〚1,2〚}. Then S is not Stieltjes-integrable with respect to θ, because there are arbitrarily fine subdivisions whose Riemann sums equal 0 and others whose Riemann sums equal 1: some Riemann sums include the jump term ΔS_1Δθ_1=1, while others do not.
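The oscillation in Example A.3 can be reproduced numerically. The following sketch (the helper names `step` and `riemann_sum` are ours, not the paper's) evaluates tagged Riemann sums over [0, 1.5] and shows that their value depends on where the tags fall relative to the common jump at t = 1, however fine the partition:

```python
# Riemann sums for S = theta = 1_{[1,2)} on [0, 1.5]: the value depends on
# whether each tag samples S before or after the common jump at t = 1,
# so no Stieltjes limit exists.

def step(t):
    """S(t) = theta(t) = 1 on [1, 2), 0 elsewhere."""
    return 1.0 if 1.0 <= t < 2.0 else 0.0

def riemann_sum(points, tag):
    """Tagged Riemann sum with tags at the 'left' or 'right' endpoint."""
    total = 0.0
    for a, b in zip(points, points[1:]):
        u = a if tag == "left" else b
        total += step(u) * (step(b) - step(a))  # S(u) * (theta(b) - theta(a))
    return total

# A fine partition of [0, 1.5] that does not contain t = 1 itself.
pts = [0.15 * k for k in range(11)]
print(riemann_sum(pts, "left"))   # 0.0: the jump term misses Delta S_1 * Delta theta_1
print(riemann_sum(pts, "right"))  # 1.0: the jump term includes it
```

Refining the partition does not help: as long as 1 is interior to a subinterval, the two tag choices keep producing 0 and 1.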

However, the integral exists in the Stieltjes sense if the integrator is left-continuous:

Lemma A.4

Let S be a càdlàg function and θ a function of finite variation. Then \(\int_{[0,T]} S\,d\theta_-\) exists as a Stieltjes integral.

The proof requires a simple lemma.

Lemma A.5

Let S be a càdlàg (deterministic) function whose jumps satisfy |ΔS_t|≤s for all t∈[0,T]. Then for all η>s, there exists δ>0 such that |S_u−S_t|≤η for all u,t∈[0,T] with |t−u|≤δ.

Proof

By the càdlàg assumption, for all t∈[0,T] there exists some δ_t>0 such that |S_u−S_t|<(η−s)/2 for u∈[t,t+δ_t) and |S_u−S_{t−}|<(η−s)/2 for u∈(t−δ_t,t). Therefore |S_u−S_v|≤η for all u,v∈U_t=(t−δ_t,t+δ_t). By compactness, the open covering (U_t)_{t∈[0,T]} admits a finite subcovering \((U_{t_{i}})_{i=1}^{n}\), and the assertion follows with \(\delta =\min_{1\le i\le n}\delta_{t_{i}}\). □

Proof of Lemma A.4

For a given ε>0, set \(J^{\varepsilon}_{t}=\sum_{s\le t}\Delta S_{s} 1_{\{ |\Delta S_{s}|>\varepsilon\}}\) and S^ε=S−J^ε. Since S is càdlàg, it has a finite number of jumps larger than any given size on [0,T]; therefore J^ε is piecewise constant and right-continuous. By construction, S^ε is càdlàg with jumps bounded by ε. Since J^ε is right-continuous and θ_− is left-continuous, by the previous observation J^ε is Stieltjes-integrable with respect to θ_−. For S^ε, consider two partitions \(\varGamma=(t_{i})_{i=0}^{n},~\varGamma'=(s_{i})_{i=0}^{m}\), and write \(\varGamma\cup\varGamma'=(r_{i})_{i=0}^{l}\). Let u_i∈[t_{i−1},t_i], v_i∈[s_{i−1},s_i] be arbitrary tags. The corresponding Riemann sums can both be rewritten along the common refinement Γ∪Γ′, where t(r_j)=u_i with i such that [r_{j−1},r_j]⊆[t_{i−1},t_i], and s(r_j) is defined analogously. If the meshes of Γ and Γ′ are smaller than δ, then |t(r)−s(r)|≤δ for all r. Applying Lemma A.5 with s=ε and η=2ε for some small δ, it follows that

$$ \bigl|I_T\bigl(S^\varepsilon,\theta_-;\varGamma\bigr)-I_T \bigl(S^\varepsilon,\theta_-;\varGamma'\bigr)\bigr|\le \sum _{r_i\le T} |S_{t(r_i)}-S_{s(r_{i-1})} | |\theta_{r_i-}- \theta_{r_{i-1}-} |\le2\varepsilon\|\theta\|_T . $$

Since J^ε is Stieltjes-integrable, |I_T(J^ε,θ_−;Γ)−I_T(J^ε,θ_−;Γ′)|≤2ε‖θ‖_T for a possibly smaller δ′<δ. Adding up, for any ε>0 there is some δ′>0 such that

$$ \bigl|I_T(S,\theta_-;\varGamma)-I_T\bigl(S,\theta_-;\varGamma'\bigr)\bigr|\le 4\varepsilon\|\theta\|_T $$

for any pair of partitions Γ,Γ′ with mesh smaller than δ′, which amounts to pathwise integrability in the sense of Stieltjes. □

The predictable Stieltjes integral is defined as follows.

Definition A.6

For an arbitrary finite-variation θ and càdlàg S, define

$$ I_T(S,\theta):= \int_{[0,T]}S_u \,d\theta_u:= \int_{[0,T]} S\,d\theta _--\sum _{s\le T}(\theta_s-\theta_{s-})\Delta S_s. $$
(A.2)

The first term is a well-defined Stieltjes integral by Lemma A.4, and the second term is absolutely convergent,

$$\sum_{s\le T}\bigl|(\theta_s- \theta_{s-})\Delta S_s\bigr|\le\sum_{s\leq T} |\theta_s-\theta_{s-}\vert2 S^*_T \leq2\| \theta\|_T S^*_T<\infty, $$

since \(S_{T}^{*}=\sup_{t\in[0,T]} \vert S_{t}\vert\) is finite by the càdlàg assumption.
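When θ is continuous, the correction sum in (A.2) vanishes and I_T(S,θ) reduces to an ordinary Stieltjes integral, which plain Riemann sums approximate. A minimal numerical sketch (the function names are ours, and the example is ours): take θ_t=t on [0,2] and the two-level price S=1 on [0,1), 2 on [1,∞), so that I_2(S,θ)=1·1+2·1=3.

```python
# For continuous theta the correction term in (A.2) vanishes and I_T(S, theta)
# is an ordinary Stieltjes integral. Here theta_t = t on [0, 2] and
# S = 1 on [0,1), 2 on [1, oo), so I_2(S, theta) = 1*1 + 2*1 = 3.

def S(t):
    return 2.0 if t >= 1.0 else 1.0

def I_riemann(T, n):
    """Left-point Riemann approximation of the integral of S dtheta, theta_t = t."""
    h = T / n
    return sum(S(k * h) * h for k in range(n))

print(abs(I_riemann(2.0, 200_000) - 3.0) < 1e-3)  # True
```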

Remark A.7

The case for definition (A.2) rests on both mathematical and economic arguments. Proposition A.16 below justifies (A.2) from the viewpoint of stochastic analysis, connecting the predictable Stieltjes integral to the usual stochastic integral via integration by parts.

From the economic viewpoint, regard S as the asset price and θ_t (resp. θ_{t−}, θ_{t+}) as the number of shares held at time t (resp. immediately before and after t). Intuitively, a jump of S at a predictable time τ corresponds to a shock whose size is unknown but whose timing is known before its occurrence (e.g. the announcement of macroeconomic data). Conversely, a totally inaccessible τ corresponds to a shock with sudden timing (e.g. a natural catastrophe). The definition of ∫S dθ should reflect this difference.

On a predictable jump, the investor may rebalance her portfolio “immediately” before τ (changing the position from θ_{τ−} to θ_τ) and react after the price has changed from S_{τ−} to S_τ (by moving from θ_τ to θ_{τ+}). On an unpredictable jump (i.e., τ totally inaccessible), necessarily θ_τ=θ_{τ−}, because θ is predictable (there is no possibility to prepare for τ), but there is still a possibility of rebalancing from θ_τ to θ_{τ+}. Thus, the cost of the portfolio θ should be

$$ S_{\tau-}(\theta_{\tau}-\theta_{\tau-})+S_{\tau}( \theta_{\tau +}-\theta_{\tau})= S_{\tau}(\theta_{\tau+}- \theta_{\tau-})-\Delta S_{\tau}(\theta_{\tau }- \theta_{\tau-}) $$
(A.3)

in the case of predictable τ and

$$ S_{\tau}(\theta_{\tau+}-\theta_{\tau})=S_{\tau}( \theta_{\tau +}-\theta_{\tau-}) $$
(A.4)

in the unpredictable case. These formulas are indeed consistent with (A.2); see footnote 3.

Looking at (A.3) and (A.4) from a different angle, the predictability of θ dictates that ∫S dθ should include the correction term −ΔS_τ(θ_τ−θ_{τ−}) at predictable stopping times, while no correction is needed at totally inaccessible times (where θ_τ=θ_{τ−}).
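A quick numerical check of this accounting, using the simple-strategy formula (A.6) below: the helper `simple_cost` (an ad hoc name of ours) charges each rebalancing at the price at which it occurs, and reproduces the cost process 1_{〚1〛}−1_{(1,∞)} of the example in footnote 2.

```python
# The cost formula (A.6) for a simple strategy, applied to the example of
# footnote 2: S jumps from 1 to 2 at t = 1, and theta buys one share
# "immediately before" t = 1 and sells it "immediately after".

def simple_cost(trades, T):
    """trades: list of tuples (tau, S_minus, S, theta_minus, theta, theta_plus)."""
    cost = 0.0
    for tau, S_minus, S_at, th_minus, th_at, th_plus in trades:
        if tau <= T:
            cost += S_minus * (th_at - th_minus)  # rebalancing just before tau
        if tau < T:
            cost += S_at * (th_plus - th_at)      # rebalancing just after tau
    return cost

# At tau = 1: S_{1-} = 1, S_1 = 2; theta_{1-} = 0, theta_1 = 1, theta_{1+} = 0.
trades = [(1.0, 1.0, 2.0, 0.0, 1.0, 0.0)]
print(simple_cost(trades, T=1.0))  # 1.0: the share was bought at price 1
print(simple_cost(trades, T=2.0))  # -1.0: sold at price 2, a net gain of 1
```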

The pathwise definition of the integral (A.2) extends to a linear map from processes into processes.

Proposition A.8

The function (S,θ)↦I_⋅(S,θ),

$$ I_T(S,\theta)=\int_{[0,T]} S\,d \theta_--\sum_{s\le T}(\theta_s- \theta_{s-})\Delta S_s $$
(A.5)

maps càdlàg processes S and predictable finite-variation processes θ into adapted làglàd processes.

Proof

It is necessary to check that ω↦I_T(S,θ)(ω) is \(\mathcal{F}_T\)-measurable for all T>0, and that the paths t↦I_t(S,θ) have right and left limits. For the latter property, observe that the integral in (A.5) is left-continuous with right limits, while the summation in (A.5) is right-continuous with left limits.

For the measurability part, note that the second term in (A.5) is trivially \(\mathcal{F}_T\)-measurable; hence it suffices to check the first term, which is a pathwise Stieltjes integral by Lemma A.4. Thus, it is the limit of Riemann sums along a deterministic grid,

$$ \int_{[0,T]}S\,d\theta_-= \lim_{n\rightarrow\infty}\sum _{k=0}^{\lfloor n T\rfloor} S_{\frac{k}{n}} \bigl(\theta_{{\frac{k+1}{n}}-}- \theta_{{\frac{k}{n}}-}\bigr), $$

and therefore it is \(\mathcal{F}_T\)-measurable. □

The following theorem establishes the main properties of the predictable Stieltjes integral. First, if θ is simple (i.e., constant on a sequence of intervals, some of which may collapse to points), then ∫S dθ is consistent with the definition of the cost process in the main text. Second, the integral satisfies the natural bound (A.7). Finally, the integral satisfies a dominated convergence theorem and a Fatou property with respect to pointwise convergence of the integrator θ.

Theorem A.9

The map I in (A.2) satisfies the following properties:

  1. (i)

    If (τ_n)_{n≥0} is an increasing sequence of stopping times and θ is predictable and constant on each interval 〛τ_n,τ_{n+1}〚, then

    $$ I_T(S,\theta)= \sum_{\tau_i\le T} S_{\tau_i-}(\theta_{\tau_i}-\theta_{\tau_i-}) +\sum _{\tau_i< T} S_{\tau _i}(\theta_{\tau_i+}- \theta_{\tau_i}). $$
    (A.6)
  2. (ii)

    I T is linear both in S and in θ, and with \(S^{*}_{T}=\sup _{t\in [0,T]}|S_{t}|\), we have

    $$ \bigl|I_T(S,\theta)\bigr|\le\|\theta\|_T S^*_T . $$
    (A.7)
  3. (iii)

    If θ^n→θ pointwise and sup_{n≥1}‖θ^n‖_T<∞, then I(S,θ^n)→I(S,θ) pointwise.

  4. (iv)

    If (θ^n)_{n≥1} are as in (iii) and S≥0, then lim inf_{n→∞} I(S,‖θ^n‖)≥I(S,‖θ‖) pointwise.

Proof

In (i), (A.6) follows immediately from (A.2). For (ii), the linearity in θ and S is immediate from (A.2), which also implies the estimate

(A.8)

Now, define the sequence of stopping times \((\sigma^{\varepsilon}_{n})_{n\ge0}\) as

$$ \sigma^\varepsilon_0=0 \quad\text{and}\quad \sigma^\varepsilon_{n+1}=\inf\bigl\{t>\sigma^\varepsilon_n: |S_t-S_{\sigma^\varepsilon_n}|>\varepsilon\bigr\} $$

and set \(S^{\varepsilon}_{t}=S_{\sigma^{\varepsilon}_{n}}\) for t∈〚σ^ε_n,σ^ε_{n+1}〚, which is piecewise constant and right-continuous, and satisfies \(|S_{t}-S^{\varepsilon}_{t}|\le\varepsilon\) for all t∈[0,T]. By linearity in S and (A.8), for any ε>0, we have

$$\bigl|I_T(S,\theta)\bigr|\le \bigl|I_T\bigl(S^\varepsilon,\theta\bigr)\bigr|+3 \varepsilon\| \theta\|_T . $$

Fix T>0, and from now on assume θ_0=0 and θ_s=0 for s≥T without loss of generality. To calculate I_T(S^ε,θ), first observe that (with the convention σ_{−1}=0)

and therefore

(A.9)

Consequently,

$$\bigl|I_T \bigl(S^\varepsilon,\theta\bigr)\bigr|=\biggl|\sum_{\sigma_n\le T}S_{\sigma _{n-1}}(\theta _{\sigma_n}-\theta_{\sigma_{n-1}})\biggr|\le S^*_T \|\theta\|_T $$

and (ii) follows since ε>0 is arbitrary.

(iii) By the linearity in θ, it suffices to consider the case of θ^n converging pointwise to zero. Property (ii) implies that

$$ \bigl|I_T\bigl(S,\theta^n\bigr)\bigr|\le \bigl|I_T\bigl(S^\varepsilon,\theta^n\bigr)\bigr|+\varepsilon\bigl\| \theta^n\bigr\|_T. $$

The calculations in the proof of (ii) show that the term |I_T(S^ε,θ^n)| depends only on finitely many values of θ^n; thus it becomes arbitrarily small for large n. Since M:=sup_{n≥1}‖θ^n‖_T is finite,

$$ \limsup_{n\to\infty} \bigl|I_T\bigl(S,\theta^{n}\bigr)\bigr|\le\varepsilon M $$

and convergence follows since ε is arbitrary.

(iv) By the same argument as in the proof of Proposition A.1, we have for s(ω)<t(ω) that

$$\liminf_{n\to\infty} \bigl(\bigl\|\theta^n\bigr\|_t-\bigl\|\theta^n\bigr\|_s\bigr)\geq \|\theta\|_t-\|\theta\|_s, $$

in the pointwise sense. Then, recalling (A.9), S≥0 implies that

As |I T (S,∥θ n∥)−I T (S ε,∥θ n∥)|≤εθ n T and |I T (S,∥θ∥)−I T (S ε,∥θ∥)|≤εθ T , it follows that

$$\varepsilon \Bigl(\sup_n\|\theta_n\|_T+\| \theta\|_T \Bigr)+ \liminf_{n\to \infty}\int_{[0,T]}S\, d\|\theta_n\|\geq \int_{[0,T]}S \,d\|\theta\|, $$

and the assertion follows as ε↓0. □
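Property (iii) can be illustrated on a concrete sequence. In the sketch below (our own example: S=1 on [0,1) and 2 on [1,∞), and θ^n=(1−1/n)·1_{[1,2)}→θ=1_{[1,2)} pointwise with total variations bounded by 2), the costs I(S,θ^n), evaluated in closed form via (A.6), converge to I(S,θ):

```python
# Theorem A.9(iii) on a concrete sequence: theta^n = (1 - 1/n) 1_{[1,2)}
# converges pointwise to theta = 1_{[1,2)} with total variations bounded by 2,
# and I(S, theta^n) -> I(S, theta) for S = 1 on [0,1), 2 on [1, oo).

def cost(c, T=3.0):
    """I_T(S, c * 1_{[1,2)}) via the simple-strategy formula (A.6):
    buy c shares at S_{1-} = 1, sell them at S_{2-} = 2."""
    return 1.0 * c + 2.0 * (-c)

vals = [cost(1.0 - 1.0 / n) for n in (1, 10, 100, 1000)]
print(vals)       # approaches cost(1.0) = -1.0
print(cost(1.0))  # -1.0
```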

1.3 A.3 Approximations of predictable integrals

The previous subsection has shown that the integral θI T (S,θ) is continuous with respect to pointwise convergence. This result is probability-free.

A much more precise result holds in a filtered probability space satisfying the usual conditions, if S is locally bounded. Then both θ and I_T(S,θ) admit uniform approximations, almost surely, but the approximating sequence may depend on the probability measure. In this regard, approximating sequences are analogous to announcing sequences for predictable times.

Consider a filtered probability space satisfying the usual conditions.

Theorem A.10

Let θ be a predictable process of finite variation and S a locally bounded càdlàg process. Then, for all ε>0 there exist a strictly increasing sequence of stopping times (σ_n)_{n≥0} with sup_{n≥1}σ_n>T and a predictable process θ″ of the form

(A.10)

satisfying \(\theta''_{\sigma_n}=\theta_{\sigma_n}\), |θ″−θ|≤ε, |∫S dθ″−∫S dθ|≤ε and ‖θ″‖≤‖θ‖ pointwise on [0,T] (outside a P-zero set).

The proof of this theorem requires a property of predictable finite-variation processes.

Proposition A.11

Any predictable process of finite variation is locally bounded.

Proof

Let τ_n=inf{t>0:|θ_t|>n}, so that lim_{n→∞}τ_n=∞ a.s. because |θ|≤‖θ‖ is finite-valued. Then θ1_{〚0,τ_n〚} is bounded, i.e., |θ| is prelocally bounded. A prelocally bounded predictable process is also locally bounded by VIII.11 in Dellacherie and Meyer [10]. □

Proof of Theorem A.10

Fix T>0. Since θ has finite variation, ‖θ‖ is locally bounded by Proposition A.11, while S is locally bounded by assumption. Thus, there exists a sequence of stopping times (υ_M)_{M≥1} with υ_M↑T such that |S_t|,‖θ‖_t≤M for t∈〚0,υ_M〛, and it suffices to prove the claim on 〚0,υ_M〛: if θ^M satisfies the claim there for ε^M=ε2^{−(M+1)}, then pasting the processes θ^M together yields the claim on [0,T]. Thus, from now on assume that |S|≤M and ‖θ‖≤M. Define the sequence of stopping times \((\rho_{m})_{m=0}^{\infty}\) as

$$ \rho_0=0, \qquad\rho_{m+1}=\inf\bigl\{t>\rho_m: |S_t-S_{\rho_m}|\ge \delta\bigr\} \wedge T. $$

Since S is càdlàg, sup m ρ m >υ M a.s. on {υ M <T}.

Step 1: Construction of the approximation on 〛ρ m−1,ρ m 〛.

For m≥1, set \(\varepsilon_{m}=\frac{\varepsilon}{2M} 2^{-(m+1)}\), and define the stopping times (σ_n)_{n≥0} as follows (see footnote 4):

Since θ has finite variation, the random set ⋃_{n≥0}〚σ_n〛 has finite sections. Consider a set \(A_{n}\in\mathcal{F}_{\sigma_{n}}\) such that \(\tau _{n}:=(\sigma_{n})_{A_{n}}\) is the totally inaccessible part of σ_n. The sequence (τ_n)_{n≥1} is strictly increasing because \((\sigma _{n})_{G_{n}}\) is predictable for the choice \(G_{n}:=\{\|\theta\|_{\sigma_{n}}\ge v_{n-1}+\varepsilon_{m}\} \), hence \(A_{n}\subseteq\{\|\theta\|_{\sigma_{n}}< v_{n-1}+\varepsilon _{m}, \| \theta\|_{\sigma_{n}+}\ge v_{n-1}+\varepsilon_{m}\}\).

The predictable set B=⋃_{n≥0}(〚σ_n〛∖〚τ_n〛) admits an exhausting sequence of predictable times (π_n)_{n≥0}. Since B has finite sections, this sequence is strictly increasing up to ordering (see footnote 5). Consider first the process

which is predictable because each π n is predictable (cf. Jacod and Shiryaev [18], I.2.12). Then, set

which is again predictable, since 〚σ n 〛∖〚τ n 〛⊆⋃ k≥0π k 〛. Thus, θ′ constructs an approximation on the accessible part of 〚σ n 〛, and θ″ adjusts it on the totally inaccessible part of 〚σ n 〛. This two-step procedure is necessary, because the accessible part of 〚σ n 〛 is not identified by a single predictable time. θ″ satisfies, for all n,

(A.11)
(A.12)
(A.13)
(A.14)

Indeed, (A.11)–(A.13) hold by construction, and (A.14) holds because τ n is totally inaccessible and θ,θ″ are predictable. Finally, note that (A.12) implies that \(\theta_{\rho_{m}}=\theta ''_{\rho_{m}}\) on the accessible part of ρ m .

Step 2: Prove the equality

(A.15)

To see this equality, observe that

where the second equality exploits (A.11). Hence, we obtain the equality

as follows. First, recall that \(\theta''_{\sigma_{n}-}=\theta_{\sigma_{n-1}+}\). On 〚τ n 〛=〚σ n 〛∩A n the first term on the right-hand side vanishes by (A.14), and the claim follows. On 〚σ n 〛∖〚τ n 〛, the second term on the right-hand side vanishes by (A.12), while the first one reduces to \(S_{\sigma_{n}-}(\theta''_{\sigma_{n}-}-\theta_{\sigma_{n}-})\).

Step 3: Prove the estimate

(A.16)

First, rewrite (A.15) as

where . Define N m =max{n:σ n <ρ m+1}, which implies that \(\sigma_{N_{m}+1}=\rho _{m+1}\). By (A.15) it follows that

Now note that \(|S-S_{\rho_{m+1}-}|\le\delta\) and \(|S-S'_{\sigma _{n}}|\le \delta\) for nN m , since \(|S-S_{\rho_{m}}|\le\delta\) on 〚ρ m ,ρ m+1〚. Moreover, we also have \(\|\Delta S_{\rho _{m}}\|\le2 M\) because |S|≤M, and finally \(|\theta_{\rho _{m+1}-}-\theta_{\sigma_{N_{m}}+}|\le\varepsilon_{m}=\varepsilon 2^{-(m+1)}\), and so the claim follows.

Step 4: Setting δ=ε/M, and recalling that ∥θ∥ is bounded by M, it follows that

 □

The next corollary shows that the interpolation θ″ of θ not only approximates ∫S dθ, but any finer Riemann sum as well.

Corollary A.12

Let θ,S,(σ n ) n≥0, and θbe as in Theorem A.10. Let \((\tilde{\sigma}_{n})_{n\ge0}\) be finer than (σ n ) n≥0 (i.e., for all n≥0, there exists \(\tilde{n}(n)\) such that \(\sigma_{n}=\sigma_{\tilde{n}(n)}\) a.s.), and define \(\tilde{\theta}''\) as in (A.10), with \(\tilde{\sigma}_{n}\) replacing σ n . Then we have \(|\theta''-\tilde{\theta}''|\le\varepsilon\), \(|\int S\, d\theta ''-\int S\,d\tilde{\theta}''|\le\varepsilon\) and \(\|\theta''\|\leq\|\tilde{\theta}''\|\) pointwise on [0,T].

Proof

\(\|\theta''\|\leq\|\tilde{\theta}''\|\) is clear. To see that \(|\theta ''-\tilde{\theta}''|\le\varepsilon\), note that \(\tilde{\theta}''_{\tilde{\sigma}_{\tilde{n}(n)}}=\theta''_{\sigma_{n}}=\theta_{\sigma_{n}}\), and recall that ‖θ‖ increases by less than ε_m on each interval 〛σ_n,σ_{n+1}〚, whence \(|\theta''-\tilde{\theta}''|\le|\theta''-\theta|+|\theta -\tilde{\theta}''|\le2\varepsilon_{m}<\varepsilon\).

To see that \(|\int S \,d\theta''-\int S\,d\tilde{\theta}''|\le\varepsilon \), observe that the equality (A.15) and the inequality (A.16) continue to hold with \(\tilde{\theta}''\) instead of θ″, again because \(\theta,\theta'',\tilde{\theta}''\) coincide on 〚σ_n〛. It follows that the estimate in Step 4 above also continues to hold, whence the claim. □

The next corollary is a straightforward consequence of Theorem A.10.

Corollary A.13

Let θ be a predictable process of finite variation, and assume that S and κ are locally bounded. Then for all ε>0, there exists θ″ as in Theorem A.10 which satisfies |θ″−θ|≤ε, |∫S dθ″−∫S dθ|+|∫κ d‖θ″‖−∫κ d‖θ‖|≤ε, and ‖θ″‖≤‖θ‖ pointwise on [0,T].

Repeating the above proof in a deterministic setting yields the following deterministic statement (see footnote 6).

Corollary A.14

Let θ be a finite-variation function, and S a càdlàg function. Then for all ε>0, there exists a sequence of time instants (τ_n)_{n≥0} such that the integrand θ″ defined in (A.10) satisfies |θ″−θ|≤ε, |∫S dθ″−∫S dθ|≤ε and ‖θ″‖≤‖θ‖ pointwise.

A simple consequence is that the integral I in (A.2) is the unique extension of the simple integrals (A.6) which satisfies a Lebesgue-type theorem with respect to pointwise convergence of the integrator.

Proposition A.15

The integral (A.2) is the unique extension of (A.6) to the class of all finite-variation functions θ which satisfies (iii) in Theorem A.9.

Proof

Take θ n given by Corollary A.14 with the choice ε=1/n. Let \(\tilde{I}\) be an arbitrary extension of the restriction of I to simple θ which satisfies (iii) of Theorem A.9. Then necessarily \(\tilde{I}(S,\theta)=\lim_{n\to\infty} I(S,\theta_{n})\), and this latter limit is I(S,θ) by Corollary A.14. □

When S is a semimartingale, integration by parts links the predictable Stieltjes integral to the usual stochastic integral.

Proposition A.16

Let θ be a predictable process of finite variation and S a càdlàg semimartingale. Then

$$ \int_{0}^T S_u\,d\theta_u=\theta_T S_T-\theta_0 S_0-\int_{0}^T \theta_u\, dS_u, $$

where the first integral is in the predictable Stieltjes sense and the second one is a usual stochastic integral.

Proof

Without loss of generality, assume θ 0=θ T =0. Then the stochastic integral exists, as θ is locally bounded by Proposition A.11. Consider first simple θ. By linearity of the integrals, it suffices to treat the left- and right-continuous cases separately, and localization allows us to assume ∥θ∥ and θ bounded.

First, consider θ left-continuous with finitely many jumps (τ j )0≤jn . Then

$$ \int_{0}^T S_u\,d\theta_u=\sum_{j=0}^{n} S_{\tau_j}\big(\theta_{\tau_j+}-\theta_{\tau_j}\big)=-\sum_{j=0}^{n}\theta_{\tau_j}\big(S_{\tau_j}-S_{\tau_{j-1}}\big)=-\int_{0}^T \theta_u\,dS_u $$

by the definition of (elementary) stochastic integrals, with the convention τ −1=0. This shows the statement for such θ, and the argument extends easily to arbitrary simple left-continuous θ.

Take now θ right-continuous. Without loss of generality, suppose (see the proof of Theorem A.10) that the jump times τ j are predictable with announcing sequences \(\tau_{j-1}\leq\tau_{j}^{k}< \tau_{j}\), k≥1. Set

$$ \gamma_u(k):=\sum_{j\ge0} E\big[\theta_{\tau_{j+1}}\,\big|\,\mathcal{F}_{\tau_{j+1}^{k}}\big]\,\mathbf{1}_{(\tau_{j+1}^{k},\,\tau_{j+2}^{k}]}(u). $$

Note that \(\theta_{\tau_{j+1}}\) is \(\mathcal{F}_{\tau_{j+1}-}\)-measurable (Jacod and Shiryaev [18], I.2.4); hence the conditional expectation converges a.s. to \(\theta_{\tau_{j+1}}\) as k→∞ by the martingale convergence theorem. Hence γ(k)→θ pointwise (possibly outside a P-null set).

It follows from the previous step, by the dominated convergence theorem for stochastic integrals, and by Theorem A.9(iii) that

$$ -\int_0^T \theta_u \,dS_u=\lim_{k\to\infty}-\int_0^T \gamma _u(k)\,dS_u=\lim _{k\to\infty} I(S,\gamma(k))= I(S,\theta). $$

For general θ, take an approximation θ n constructed in Theorem A.10 with ε:=1/n. The above arguments imply that \(I_{T}(S,\theta^{n})=-\int_{0}^{T} \theta^{n}_{u} \,dS_{u}\) for all n. Moreover, I T (S,θ n)→I T (S,θ) by Theorem A.10, and the stochastic integrals of θ n converge to that of θ by dominated convergence (note that |θ n |≤|θ|+1/n). □
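In discrete time, the integration-by-parts formula reduces to Abel's summation by parts, and can be checked exactly. The sketch below uses hypothetical simulated step paths: the left-point sum plays the role of ∫S dθ (the cost of trades), and the sum with the predictable evaluation θ j+1 plays the role of ∫θ dS (the gains of the holdings).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
S = np.cumsum(rng.normal(size=N + 1))       # price path S_0, ..., S_N
theta = np.cumsum(rng.normal(size=N + 1))   # holdings theta_0, ..., theta_N

# discrete analogues of the two integrals:
#   int S dtheta  ->  sum_j S_j (theta_{j+1} - theta_j)
#   int theta dS  ->  sum_j theta_{j+1} (S_{j+1} - S_j)
int_S_dtheta = np.sum(S[:-1] * np.diff(theta))
int_theta_dS = np.sum(theta[1:] * np.diff(S))

# Abel summation: the discrete integration-by-parts identity holds exactly
lhs = int_S_dtheta
rhs = theta[-1] * S[-1] - theta[0] * S[0] - int_theta_dS
assert abs(lhs - rhs) < 1e-8 * (1 + abs(lhs))
```

The identity is algebraically exact for any pair of sequences; only floating-point rounding separates the two sides.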

Appendix B

For completeness, this appendix recalls the statements of some now classical tools in mathematical finance. First, a compactness lemma for bounded sets in \(L^{0}_{+}\):

Lemma B.1

(Delbaen and Schachermayer [6], Lemma A1.1)

Let (f n ) n≥1 be a sequence of [0,∞)-valued measurable functions on a probability space (Ω,ℱ,P). There exists a sequence g n ∈conv((f k ) kn ) such that (g n ) n≥1 converges almost surely to a [0,∞]-valued function g. If conv((f n ) n≥1) is bounded in L 0, then g is finite almost surely.
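Lemma B.1 is a Komlós-type result: even when (f n ) n≥1 itself converges nowhere, forward convex combinations settle down almost surely. A hypothetical numerical illustration with i.i.d. coin flips, where the tail averages g n ∈conv((f k ) kn ) approach the a.s. limit g=1/2:

```python
import numpy as np

rng = np.random.default_rng(42)
M = 20_000                                            # sampled outcomes "omega"
f = rng.integers(0, 2, size=(100, M)).astype(float)   # f_n: i.i.d. Bernoulli(1/2)

# g_n = average of f_n, ..., f_{2n-1}, a convex combination of (f_k)_{k >= n}
def g(n):
    return f[n:2 * n].mean(axis=0)

# the f_n oscillate forever, but the g_n concentrate around 1/2
dev = {n: np.abs(g(n) - 0.5).mean() for n in (5, 10, 25, 50)}
assert dev[50] < dev[5]   # mean deviation shrinks (of order n^{-1/2})
```

The same averaging device fails for signed sequences without the L 0-boundedness hypothesis, which is why the lemma insists on [0,∞)-valued functions and a bounded convex hull for the finiteness of g.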

Second, an important consequence of the Krein–Smulian theorem (cf. Delbaen and Schachermayer [6], Theorem 2.1; Kabanov and Last [21], Lemma 4.4):

Lemma B.2

Let CL be a convex set. Then C is σ(L ,L 1)-closed if and only if C∩{Z:∥Z x} is closed in probability for all x>0.

Third, a version of the Kreps [26]–Yan [33] separation theorem (cf. Schachermayer [31] or Kabanov and Stricker [23]):

Theorem B.3

Let \(-L^{\infty}_{+}\subseteq C\subseteq L^{\infty}\) be a convex cone, closed in the σ(L ,L 1)-topology, such that \(C\cap L^{\infty}_{+}=\{ 0\}\). Then there exists a probability Q equivalent to P such that E Q [X]≤0 for all XC.

Finally, a compactness lemma for finite-variation processes (cf. Guasoni [14], Lemma 3.4; Campi and Schachermayer [2], Proposition 14). Note that Lemma A.2 allows one to avoid section theorems.

Lemma B.4

Consider a sequence (θ n ) n≥1 of predictable finite-variation processes such that (∥θ n T ) n≥1 is bounded in L 0. Then there exist a predictable finite-variation process θ and a sequence of convex combinations η nconv(θ n,θ n+1,…) converging to θ pointwise such that alsoη n ∥→∥θpointwise.
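A hypothetical finite-sample illustration of this compactness principle: the paths θ n =1 [a n ,1] (with a n i.i.d. uniform, an example chosen here for concreteness) converge nowhere, yet forward convex combinations η n converge pointwise to θ(t)=t by Glivenko–Cantelli; and since every path is nondecreasing from 0 to 1, the variations of the η n agree with that of the limit throughout.

```python
import numpy as np

rng = np.random.default_rng(7)
a = rng.uniform(size=200)                          # jump times of theta^n = 1_{[a_n, 1]}
t = np.linspace(0.0, 1.0, 501)                     # time grid on [0, 1]
paths = (t[None, :] >= a[:, None]).astype(float)   # theta^n evaluated on the grid

# eta^n = average of theta^n, ..., theta^{2n-1}, in conv(theta^n, theta^{n+1}, ...)
def eta(n):
    return paths[n:2 * n].mean(axis=0)

# pointwise convergence to theta(t) = t (the uniform CDF)
assert np.max(np.abs(eta(100) - t)) < 0.2

# each eta^n is nondecreasing from 0 to 1, so its total variation is exactly 1
assert abs(np.sum(np.abs(np.diff(eta(100)))) - 1.0) < 1e-9
```

In general the variations need not behave this simply; the lemma's convex combinations are chosen precisely so that both the paths and their variations converge pointwise.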

Guasoni, P., Lépinette, E. & Rásonyi, M. The fundamental theorem of asset pricing under transaction costs. Finance Stoch 16, 741–777 (2012). https://doi.org/10.1007/s00780-012-0185-0
