
A Proof that Artificial Neural Networks Overcome the Curse of Dimensionality in the Numerical Approximation of Black–Scholes Partial Differential Equations
About this Title
Philipp Grohs, Fabian Hornung, Arnulf Jentzen and Philippe von Wurstemberger
Publication: Memoirs of the American Mathematical Society
Publication Year: 2023; Volume 284, Number 1410
ISBNs: 978-1-4704-5632-0 (print); 978-1-4704-7448-5 (online)
DOI: https://doi.org/10.1090/memo/1410
Published electronically: March 21, 2023
Table of Contents
Chapters
- 1. Introduction
- 2. Probabilistic and analytic preliminaries
- 3. Artificial neural network approximations
- 4. Artificial neural network approximations for Black-Scholes partial differential equations
Abstract
Artificial neural networks (ANNs) have been used very successfully in numerical simulations for a range of computational problems, from image classification and recognition, speech recognition, time series analysis, game intelligence, and computational advertising to numerical approximations of partial differential equations (PDEs). Such numerical simulations suggest that ANNs have the capacity to approximate high-dimensional functions very efficiently and, in particular, indicate that ANNs seem to possess the fundamental power to overcome the curse of dimensionality when approximating the high-dimensional functions appearing in the above-named computational problems. There is a series of rigorous mathematical approximation results for ANNs in the scientific literature: some prove convergence without convergence rates, and some even rigorously establish convergence rates, but only in a few special cases can mathematical results rigorously explain the empirical success of ANNs when approximating high-dimensional functions. The key contribution of this article is to show that ANNs can efficiently approximate high-dimensional functions in the case of numerical approximations of Black-Scholes PDEs. More precisely, this work reveals that the number of parameters an ANN requires to approximate the solution of the Black-Scholes PDE grows at most polynomially in both the reciprocal of the prescribed approximation accuracy $\varepsilon > 0$ and the PDE dimension $d \in \mathbb {N}$. We thereby prove, for the first time, that ANNs do indeed overcome the curse of dimensionality in the numerical approximation of Black-Scholes PDEs.
References
- Adam Andersson, Arnulf Jentzen, and Ryan Kurniawan, Existence, uniqueness, and regularity for stochastic evolution equations with irregular initial values, J. Math. Anal. Appl. 495 (2021), no. 1, Paper No. 124558, 33. MR 4172830, DOI 10.1016/j.jmaa.2020.124558
- Francis Bach, Breaking the curse of dimensionality with convex neural networks, J. Mach. Learn. Res. 18 (2017), Paper No. 19, 53. MR 3634886
- Andrew R. Barron, Universal approximation bounds for superpositions of a sigmoidal function, IEEE Trans. Inform. Theory 39 (1993), no. 3, 930–945. MR 1237720, DOI 10.1109/18.256500
- A. R. Barron, Approximation and estimation bounds for artificial neural networks, Mach. Learn. 14, 1 (1994), 115–133.
- C. Beck, S. Becker, P. Grohs, N. Jaafari, and A. Jentzen, Solving stochastic differential equations and Kolmogorov equations by means of deep learning. arXiv:1806.00421 (2018), 56 pages.
- Christian Beck, Sebastian Becker, Philipp Grohs, Nor Jaafari, and Arnulf Jentzen, Solving the Kolmogorov PDE by means of deep learning, J. Sci. Comput. 88 (2021), no. 3, Paper No. 73, 28. MR 4293960, DOI 10.1007/s10915-021-01590-0
- Christian Beck, Weinan E, and Arnulf Jentzen, Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations, J. Nonlinear Sci. 29 (2019), no. 4, 1563–1619. MR 3993178, DOI 10.1007/s00332-018-9525-3
- Sebastian Becker, Patrick Cheridito, and Arnulf Jentzen, Deep optimal stopping, J. Mach. Learn. Res. 20 (2019), Paper No. 74, 25. MR 3960928
- Richard Bellman, Dynamic programming, Princeton Landmarks in Mathematics, Princeton University Press, Princeton, NJ, 2010. Reprint of the 1957 edition; With a new introduction by Stuart Dreyfus. MR 2641641
- E. K. Blum and L. K. Li, Approximation theory and feedforward networks, Neural Networks 4, 4 (1991), 511–515.
- Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, and Philipp Petersen, Optimal approximation with sparsely connected deep neural networks, SIAM J. Math. Data Sci. 1 (2019), no. 1, 8–45. MR 3949699, DOI 10.1137/18M118709X
- Martin Burger and Andreas Neubauer, Error bounds for approximation with neural networks, J. Approx. Theory 112 (2001), no. 2, 235–250. MR 1864812, DOI 10.1006/jath.2001.3613
- Emmanuel Jean Candes, Ridgelets: Theory and applications, ProQuest LLC, Ann Arbor, MI, 1998. Thesis (Ph.D.)–Stanford University. MR 2698286
- Jean-François Chassagneux and Dan Crisan, Runge-Kutta schemes for backward stochastic differential equations, Ann. Appl. Probab. 24 (2014), no. 2, 679–720. MR 3178495, DOI 10.1214/13-AAP933
- T. Chen and H. Chen, Approximation capability to functions of several variables, nonlinear functionals, and operators by radial basis function neural networks, IEEE Transactions on Neural Networks 6, 4 (1995), 904–910.
- Abdellah Chkifa, Albert Cohen, and Christoph Schwab, Breaking the curse of dimensionality in sparse polynomial approximation of parametric PDEs, J. Math. Pures Appl. (9) 103 (2015), no. 2, 400–428 (English, with English and French summaries). MR 3298364, DOI 10.1016/j.matpur.2014.04.009
- C. K. Chui, Xin Li, and H. N. Mhaskar, Neural networks for localized approximation, Math. Comp. 63 (1994), no. 208, 607–623. MR 1240656, DOI 10.1090/S0025-5718-1994-1240656-2
- Albert Cohen and Ronald DeVore, Approximation of high-dimensional parametric PDEs, Acta Numer. 24 (2015), 1–159. MR 3349307, DOI 10.1017/S0962492915000033
- S. Cox, M. Hutzenthaler, and A. Jentzen, Local Lipschitz continuity in the initial value and strong completeness for nonlinear stochastic differential equations, arXiv:1309.5595 (2013), 84 pages.
- Sonja Cox, Martin Hutzenthaler, Arnulf Jentzen, Jan van Neerven, and Timo Welti, Convergence in Hölder norms with applications to Monte Carlo methods in infinite dimensions, IMA J. Numer. Anal. 41 (2021), no. 1, 493–548. MR 4205064, DOI 10.1093/imanum/drz063
- Michael G. Crandall, Hitoshi Ishii, and Pierre-Louis Lions, User’s guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. (N.S.) 27 (1992), no. 1, 1–67. MR 1118699, DOI 10.1090/S0273-0979-1992-00266-5
- Michael G. Crandall and Pierre-Louis Lions, Viscosity solutions of Hamilton-Jacobi equations, Trans. Amer. Math. Soc. 277 (1983), no. 1, 1–42. MR 690039, DOI 10.1090/S0002-9947-1983-0690039-8
- Jakob Creutzig, Steffen Dereich, Thomas Müller-Gronbach, and Klaus Ritter, Infinite-dimensional quadrature and approximation of distributions, Found. Comput. Math. 9 (2009), no. 4, 391–429. MR 2519865, DOI 10.1007/s10208-008-9029-x
- G. Cybenko, Approximation by superpositions of a sigmoidal function, Math. Control Signals Systems 2 (1989), no. 4, 303–314. MR 1015670, DOI 10.1007/BF02551274
- Giuseppe Da Prato and Jerzy Zabczyk, Stochastic equations in infinite dimensions, Encyclopedia of Mathematics and its Applications, vol. 44, Cambridge University Press, Cambridge, 1992. MR 1207136, DOI 10.1017/CBO9780511666223
- G. E. Dahl, D. Yu, L. Deng, and A. Acero, Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition, IEEE Transactions on Audio, Speech, and Language Processing 20, 1 (2012), 30–42.
- Ronald A. DeVore, Konstantin I. Oskolkov, and Pencho P. Petrushev, Approximation by feed-forward neural networks, Ann. Numer. Math. 4 (1997), no. 1-4, 261–287. The heritage of P. L. Chebyshev: a Festschrift in honor of the 70th birthday of T. J. Rivlin. MR 1422683
- Weinan E, Jiequn Han, and Arnulf Jentzen, Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations, Commun. Math. Stat. 5 (2017), no. 4, 349–380. MR 3736669, DOI 10.1007/s40304-017-0117-6
- Weinan E, Martin Hutzenthaler, Arnulf Jentzen, and Thomas Kruse, Multilevel Picard iterations for solving smooth semilinear parabolic heat equations, Partial Differ. Equ. Appl. 2 (2021), no. 6, Paper No. 80, 31. MR 4338044, DOI 10.1007/s42985-021-00089-5
- Weinan E and Bing Yu, The deep Ritz method: a deep learning-based numerical algorithm for solving variational problems, Commun. Math. Stat. 6 (2018), no. 1, 1–12. MR 3767958, DOI 10.1007/s40304-018-0127-z
- Dennis Elbrächter, Philipp Grohs, Arnulf Jentzen, and Christoph Schwab, DNN expression rate analysis of high-dimensional PDEs: application to option pricing, Constr. Approx. 55 (2022), no. 1, 3–71. MR 4376559, DOI 10.1007/s00365-021-09541-6
- R. Eldan and O. Shamir, The power of depth for feedforward neural networks, Proceedings of the 29th Conference on Learning Theory, COLT 2016, New York, USA (2016), 907–940.
- S. W. Ellacott, Aspects of the numerical analysis of neural networks, Acta numerica, 1994, Acta Numer., Cambridge Univ. Press, Cambridge, 1994, pp. 145–202. MR 1288097, DOI 10.1017/S0962492900002439
- M. Fujii, A. Takahashi, and M. Takahashi, Asymptotic Expansion as Prior Knowledge in Deep Learning Method for high dimensional BSDEs, arXiv:1710.07030 (2017), 16 pages.
- K.-I. Funahashi, On the approximate realization of continuous mappings by neural networks, Neural Networks 2, 3 (1989), 183–192.
- D. J. H. Garling, Inequalities: a journey into linear analysis, Cambridge University Press, Cambridge, 2007. MR 2343341, DOI 10.1017/CBO9780511755217
- S. Geiss and J. Ylinen, Decoupling on the Wiener space, related Besov spaces, and applications to BSDEs, arXiv:1409.5322 (2018), 112 pages.
- Michael B. Giles, Multilevel Monte Carlo path simulation, Oper. Res. 56 (2008), no. 3, 607–617. MR 2436856, DOI 10.1287/opre.1070.0496
- Emmanuel Gobet, Jean-Philippe Lemor, and Xavier Warin, A regression-based Monte Carlo method to solve backward stochastic differential equations, Ann. Appl. Probab. 15 (2005), no. 3, 2172–2202. MR 2152657, DOI 10.1214/105051605000000412
- E. Gobet, J. G. López-Salas, P. Turkedjiev, and C. Vázquez, Stratified regression Monte-Carlo scheme for semilinear PDEs and BSDEs with large scale parallelization on GPUs, SIAM J. Sci. Comput. 38 (2016), no. 6, C652–C677. MR 3573315, DOI 10.1137/16M106371X
- Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep learning, Adaptive Computation and Machine Learning, MIT Press, Cambridge, MA, 2016. MR 3617773
- Carl Graham and Denis Talay, Stochastic simulation and Monte Carlo methods, Stochastic Modelling and Applied Probability, vol. 68, Springer, Heidelberg, 2013. Mathematical foundations of stochastic simulation. MR 3097957, DOI 10.1007/978-3-642-39363-1
- A. Graves, A.-r. Mohamed, and G. Hinton, Speech recognition with deep recurrent neural networks, in Proc. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2013, pp. 6645–6649.
- Philipp Grohs, Fabian Hornung, Arnulf Jentzen, and Philipp Zimmermann, Space-time error estimates for deep neural network approximations for differential equations, Adv. Comput. Math. 49 (2023), no. 1, Paper No. 4, 78. MR 4534487, DOI 10.1007/s10444-022-09970-2
- Philipp Grohs, Arnulf Jentzen, and Diyora Salimova, Deep neural network approximations for solutions of PDEs based on Monte Carlo algorithms, Partial Differ. Equ. Appl. 3 (2022), no. 4, Paper No. 45, 41. MR 4437476, DOI 10.1007/s42985-021-00100-z
- Martin Hairer, Martin Hutzenthaler, and Arnulf Jentzen, Loss of regularity for Kolmogorov equations, Ann. Probab. 43 (2015), no. 2, 468–527. MR 3305998, DOI 10.1214/13-AOP838
- Jiequn Han, Arnulf Jentzen, and Weinan E, Solving high-dimensional partial differential equations using deep learning, Proc. Natl. Acad. Sci. USA 115 (2018), no. 34, 8505–8510. MR 3847747, DOI 10.1073/pnas.1718942115
- E. J. Hartman, J. D. Keeler, and J. M. Kowalski, Layered neural networks with Gaussian hidden units as universal approximations, Neural Computation 2, 2 (1990), 210–215.
- S. Heinrich, Monte Carlo complexity of global solution of integral equations, J. Complexity 14 (1998), no. 2, 151–175. MR 1629093, DOI 10.1006/jcom.1998.0471
- S. Heinrich, Multilevel Monte Carlo methods, in Large-Scale Scientific Computing (S. Margenov, J. Waśniewski, and P. Yalamov, eds.), Springer, Berlin, Heidelberg, 2001, pp. 58–67.
- Daniel Henry, Geometric theory of semilinear parabolic equations, Lecture Notes in Mathematics, vol. 840, Springer-Verlag, Berlin-New York, 1981. MR 610244
- P. Henry-Labordere, Deep Primal-Dual Algorithm for BSDEs: Applications of Machine Learning to CVA and IM. SSRN Electronic Journal (2017).
- G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al., Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups, IEEE Signal Processing Magazine 29, 6 (2012), 82–97.
- K. Hornik, Approximation capabilities of multilayer feedforward networks, Neural Networks 4, 2 (1991), 251–257.
- K. Hornik, Some new results on neural network approximation, Neural Networks 6, 8 (1993), 1069–1072.
- K. Hornik, M. Stinchcombe, and H. White, Multilayer feedforward networks are universal approximators, Neural Networks 2, 5 (1989), 359–366.
- Halbert White, Artificial neural networks, Blackwell Publishers, Oxford, 1992. Approximation and learning theory; With contributions by A. R. Gallant, K. Hornik, M. Stinchcombe and J. Wooldridge. MR 1203316
- G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, Densely connected convolutional networks, CVPR (2017), vol. 1, p. 3.
- Martin Hutzenthaler, Arnulf Jentzen, Thomas Kruse, and Tuan Anh Nguyen, A proof that rectified deep neural networks overcome the curse of dimensionality in the numerical approximation of semilinear heat equations, Partial Differ. Equ. Appl. 1 (2020), no. 2, Paper No. 10, 34. MR 4292849, DOI 10.1007/s42985-019-0006-9
- Martin Hutzenthaler, Arnulf Jentzen, Thomas Kruse, Tuan Anh Nguyen, and Philippe von Wurstemberger, Overcoming the curse of dimensionality in the numerical approximation of semilinear parabolic partial differential equations, Proc. A. 476 (2020), no. 2244, 20190630, 25. MR 4203091, DOI 10.1098/rspa.2019.0630
- T. Hytönen, J. van Neerven, M. Veraar, and L. Weis, Analysis in Banach spaces. Vol. II: Probabilistic methods and operator theory, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics, vol. 67, Springer, Cham, 2017.
- Arnulf Jentzen and Peter E. Kloeden, Taylor approximations for stochastic partial differential equations, CBMS-NSF Regional Conference Series in Applied Mathematics, vol. 83, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2011. MR 2856611, DOI 10.1137/1.9781611972016
- Arnulf Jentzen, Diyora Salimova, and Timo Welti, A proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients, Commun. Math. Sci. 19 (2021), no. 5, 1167–1205. MR 4283528, DOI 10.4310/CMS.2021.v19.n5.a1
- Olav Kallenberg, Foundations of modern probability, Probability and its Applications (New York), Springer-Verlag, New York, 1997. MR 1464694
- Yuehaw Khoo, Jianfeng Lu, and Lexing Ying, Solving parametric PDE problems with artificial neural networks, European J. Appl. Math. 32 (2021), no. 3, 421–435. MR 4253972, DOI 10.1017/S0956792520000182
- Achim Klenke, Probability theory, Translation from the German edition, Universitext, Springer, London, 2014. A comprehensive course. MR 3112259, DOI 10.1007/978-1-4471-5361-0
- A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional neural networks, in Advances in Neural Information Processing Systems (2012), pp. 1097–1105.
- G. Kutyniok, P. Petersen, M. Raslan, and R. Schneider, A Theoretical Analysis of Deep Neural Networks and Parametric PDEs, arXiv:1904.00377 (2019).
- I. E. Lagaris, A. Likas, and D. I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE Transactions on Neural Networks 9, 5 (1998), 987–1000.
- Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, Nature 521, 7553 (2015), 436–444.
- M. Leshno, V. Y. Lin, A. Pinkus, and S. Schocken, Multilayer feedforward networks with a nonpolynomial activation function can approximate any function, Neural Networks 6, 6 (1993), 861–867.
- Warren S. McCulloch and Walter Pitts, A logical calculus of the ideas immanent in nervous activity, Bull. Math. Biophys. 5 (1943), 115–133. MR 10388, DOI 10.1007/bf02478259
- H. N. Mhaskar and Charles A. Micchelli, Degree of approximation by neural and translation networks with a single hidden layer, Adv. in Appl. Math. 16 (1995), no. 2, 151–183. MR 1334147, DOI 10.1006/aama.1995.1008
- H. N. Mhaskar, Neural networks for optimal approximation of smooth and analytic functions, Neural Comput. 8, 1 (1996), 164–177.
- H. N. Mhaskar and T. Poggio, Deep vs. shallow networks: an approximation theory perspective, Anal. Appl. (Singap.) 14 (2016), no. 6, 829–848. MR 3564936, DOI 10.1142/S0219530516400042
- Siddhartha Mishra, A machine learning framework for data driven acceleration of computations of differential equations, Math. Eng. 1 (2019), no. 1, 118–146. MR 4135071, DOI 10.3934/Mine.2018.1.118
- Katrin Mittmann and Ingo Steinwart, On the existence of continuous modifications of vector-valued random fields, Georgian Math. J. 10 (2003), no. 2, 311–317. Dedicated to the memory of Professor Revaz Chitashvili. MR 2009978
- M. A. Nabian and H. Meidani, A Deep Neural Network Surrogate for High-Dimensional Random Partial Differential Equations, arXiv:1806.02957 (2018), 23 pages.
- T. Nguyen-Thien and T. Tran-Cong, Approximation of functions and their derivatives: A neural network implementation with applications, Appl. Math. Model. 23, 9 (1999), 687–704.
- É. Pardoux and S. Peng, Backward stochastic differential equations and quasilinear parabolic partial differential equations, Stochastic partial differential equations and their applications (Charlotte, NC, 1991) Lect. Notes Control Inf. Sci., vol. 176, Springer, Berlin, 1992, pp. 200–217. MR 1176785, DOI 10.1007/BFb0007334
- J. Park and I. W. Sandberg, Universal approximation using radial-basis-function networks, Neural Computation 3, 2 (1991), 246–257.
- D. Perekrestenko, P. Grohs, D. Elbrächter, and H. Bölcskei, The universal approximation power of finite-width deep ReLU networks, arXiv:1806.01528 (2018), 16 pages.
- Philipp Petersen, Mones Raslan, and Felix Voigtlaender, Topological properties of the set of functions generated by neural networks of fixed size, Found. Comput. Math. 21 (2021), no. 2, 375–444. MR 4243432, DOI 10.1007/s10208-020-09461-0
- P. Petersen and F. Voigtlaender, Optimal approximation of piecewise smooth functions using deep ReLU neural networks, arXiv:1709.05289 (2017), 54 pages.
- Allan Pinkus, Approximation theory of the MLP model in neural networks, Acta numerica, 1999, Acta Numer., vol. 8, Cambridge Univ. Press, Cambridge, 1999, pp. 143–195. MR 1819645, DOI 10.1017/S0962492900002919
- K. L. Priddy and P. E. Keller, Artificial neural networks: an introduction, vol. 68. SPIE press, 2005.
- M. Raissi, Forward-Backward Stochastic Neural Networks: Deep Learning of High-dimensional Partial Differential Equations, arXiv:1804.07010 (2018), 17 pages.
- Christoph Reisinger and Yufei Zhang, Rectified deep neural networks overcome the curse of dimensionality for nonsmooth value functions in zero-sum games of nonlinear stiff systems, Anal. Appl. (Singap.) 18 (2020), no. 6, 951–999. MR 4154658, DOI 10.1142/S0219530520500116
- Daniel Revuz and Marc Yor, Continuous martingales and Brownian motion, 3rd ed., Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 293, Springer-Verlag, Berlin, 1999. MR 1725357, DOI 10.1007/978-3-662-06400-9
- J. Schmidhuber, Deep learning in neural networks: An overview, Neural Networks 61 (2015), 85–117.
- M. Schmitt, Lower bounds on the complexity of approximating continuous functions by sigmoidal neural networks, Proceedings of the 12th International Conference on Neural Information Processing Systems (Cambridge, MA, USA, 1999), NIPS’99, MIT Press, pp. 328–334.
- Uri Shaham, Alexander Cloninger, and Ronald R. Coifman, Provable approximation properties for deep neural networks, Appl. Comput. Harmon. Anal. 44 (2018), no. 3, 537–557. MR 3768851, DOI 10.1016/j.acha.2016.04.003
- D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al., Mastering the game of Go with deep neural networks and tree search, Nature 529, 7587 (2016), 484–489.
- David Silver, Thomas Hubert, Julian Schrittwieser et al., A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play, Science 362 (2018), no. 6419, 1140–1144. MR 3888768, DOI 10.1126/science.aar6404
- K. Simonyan and A. Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv:1409.1556 (2014), 14 pages.
- Justin Sirignano, Deep learning [book review of MR3617773], SIAM Rev. 60 (2018), no. 3, 771–773. MR 3841485, DOI 10.1137/0805005
- Tobias von Petersdorff and Christoph Schwab, Numerical solution of parabolic equations in high dimensions, M2AN Math. Model. Numer. Anal. 38 (2004), no. 1, 93–127. MR 2073932, DOI 10.1051/m2an:2004005
- X. Warin, Monte Carlo for high-dimensional degenerated Semi Linear and Full Non Linear PDEs, arXiv:1805.05078 (2018), 23 pages.
- Xavier Warin, Nesting Monte Carlo for high-dimensional non-linear PDEs, Monte Carlo Methods Appl. 24 (2018), no. 4, 225–247. MR 3891748, DOI 10.1515/mcma-2018-2020
- C. Wu, P. Karanasou, M. J. Gales, and K. C. Sim, Stimulated deep neural network for speech recognition, Tech. rep., University of Cambridge, 2016.
- D. Yarotsky, Error bounds for approximations with deep ReLU networks, Neural Networks 94 (2017), 103–114.
- D. Yarotsky, Universal approximations of invariant maps by neural networks, arXiv:1804.10306 (2018), 64 pages.
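The mechanism behind the abstract's claim can be made concrete. By the Feynman-Kac formula, the solution of a Black-Scholes PDE at a point is a discounted expectation over geometric Brownian motion, which Monte Carlo sampling approximates at a rate independent of the dimension $d$; roughly speaking, the memoir's proof builds ANNs that emulate such sample averages, so the parameter count (computable directly from the layer widths) stays polynomial in $d$ and $1/\varepsilon$. A minimal illustrative sketch follows; the max-call payoff, the independent-coordinate model with a single volatility, and all parameter values are assumptions chosen for illustration, not taken from the memoir:

```python
import numpy as np
from math import erf, exp, log, sqrt

def ann_param_count(widths):
    """Number of weights and biases of a fully connected network
    with the given layer widths (input, hidden layers, output)."""
    return sum((w_in + 1) * w_out for w_in, w_out in zip(widths, widths[1:]))

def bs_call_exact(s, k, r, sigma, t):
    """Closed-form Black-Scholes price of a European call (d = 1),
    used as a reference value for the Monte Carlo estimate below."""
    phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))  # standard normal CDF
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    return s * phi(d1) - k * exp(-r * t) * phi(d1 - sigma * sqrt(t))

def mc_price(x0, k, r, sigma, t, n, seed=0):
    """Feynman-Kac Monte Carlo: u(0, x0) = e^{-rT} E[payoff(X_T)], where
    X_T is a d-dimensional geometric Brownian motion (independent
    coordinates, common volatility -- a simplification) started at x0."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, len(x0)))
    xt = x0 * np.exp((r - 0.5 * sigma ** 2) * t + sigma * sqrt(t) * z)
    payoff = np.maximum(xt.max(axis=1) - k, 0.0)  # max-call payoff
    return exp(-r * t) * payoff.mean()  # error ~ O(n^{-1/2}), independent of d
```

For $d = 1$ the Monte Carlo estimate can be checked against the closed-form price, and `ann_param_count` shows that a network whose width and depth grow polynomially in $d$ and $1/\varepsilon$ has a parameter count that does too.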