Mathematics of Computation

ISSN 1088-6842 (online), ISSN 0025-5718 (print)

Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations


Authors: Holger Rauhut and Christoph Schwab
Journal: Math. Comp. 86 (2017), 661-700
MSC (2010): Primary 35B30; Secondary 65N30, 41A58, 94A20
DOI: https://doi.org/10.1090/mcom/3113
Published electronically: May 17, 2016
MathSciNet review: 3584544

Abstract: We analyze the convergence of compressive sensing based sampling techniques for the efficient evaluation of functionals of solutions for a class of high-dimensional, affine-parametric, linear operator equations which depend on possibly infinitely many parameters. The proposed algorithms are based on so-called "non-intrusive" sampling of the high-dimensional parameter space, reminiscent of Monte Carlo sampling. In contrast to Monte Carlo, however, a functional of the parametric solution is then computed via compressive sensing methods from samples of functionals of the solution. A key ingredient in our analysis, of independent interest, is a generalization of recent results on the approximate sparsity of generalized polynomial chaos (gpc) representations of the parametric solution families in terms of the gpc series with respect to tensorized Chebyshev polynomials. In particular, we establish sufficient conditions on the parametric inputs to the parametric operator equation such that the Chebyshev coefficients of the gpc expansion are contained in certain weighted $\ell_p$-spaces for $0<p\leq 1$. Based on this, we show that reconstructions of the parametric solutions computed from the sampled problems converge, with high probability, at the $L_2$, resp. $L_\infty$, convergence rates afforded by best $s$-term approximations of the parametric solution, up to logarithmic factors.
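
For orientation, the reconstruction step can be summarized schematically as follows; the notation here is introduced for illustration only and is not taken verbatim from the paper. Given $m$ independent parameter draws $y^{(1)},\dots,y^{(m)}$, one computes Petrov-Galerkin approximations of the solution at these parameters, records the samples $g_i = G(u(y^{(i)}))$ of the functional $G$ of interest, and forms the sampling matrix $A \in \mathbb{R}^{m\times N}$ with entries $A_{i\nu} = T_\nu(y^{(i)})$, where the $T_\nu$ are tensorized Chebyshev polynomials indexed by a finite set of $N$ multi-indices. An approximation $\hat{x}$ of the Chebyshev gpc coefficients of $G(u)$ is then obtained by weighted $\ell_1$ minimization in the spirit of [50],
$$ \hat{x} \;\in\; \arg\min_{z\in\mathbb{R}^N} \; \sum_{\nu} \omega_\nu |z_\nu| \quad \text{subject to} \quad \|Az - g\|_2 \le \eta, $$
with weights $\omega_\nu \ge \|T_\nu\|_{L_\infty}$ and a tolerance $\eta$ accounting for discretization and truncation errors. Roughly speaking, the weights penalize high-order multi-indices and link the weighted $\ell_p$-summability of the Chebyshev coefficients established in the paper to the recovery guarantees of weighted $\ell_1$ minimization.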


References

  • [1] Martin Anthony and Peter L. Bartlett, Neural network learning: theoretical foundations, Cambridge University Press, Cambridge, 1999. MR 1741038
  • [2] B. Azarkhalili, A compressive sampling approach to adaptive multi-resolution approximation of differential equations with random inputs, preprint arXiv:1307.0483 (2013).
  • [3] Peter Binev, Albert Cohen, Wolfgang Dahmen, Ronald DeVore, Guergana Petrova, and Przemyslaw Wojtaszczyk, Convergence rates for greedy algorithms in reduced basis methods, SIAM J. Math. Anal. 43 (2011), no. 3, 1457-1472. MR 2821591 (2012m:41018), https://doi.org/10.1137/100795772
  • [4] J.-L. Bouchot, B. Bykowski, H. Rauhut, and C. Schwab, Compressed sensing Petrov-Galerkin approximations for parametric PDEs, Proceedings SampTA 2015 (2015).
  • [5] Stephen Boyd and Lieven Vandenberghe, Convex optimization, Cambridge University Press, Cambridge, 2004. MR 2061575
  • [6] Annalisa Buffa, Yvon Maday, Anthony T. Patera, Christophe Prud'homme, and Gabriel Turinici, A priori convergence of the greedy algorithm for the parametrized reduced basis method, ESAIM Math. Model. Numer. Anal. 46 (2012), no. 3, 595-603. MR 2877366, https://doi.org/10.1051/m2an/2011056
  • [7] Hans-Joachim Bungartz and Michael Griebel, Sparse grids, Acta Numer. 13 (2004), 147-269. MR 2249147 (2007e:65102), https://doi.org/10.1017/S0962492904000182
  • [8] T. Tony Cai and Anru Zhang, Sparse representation of a polytope and recovery in sparse signals and low-rank matrices, IEEE Trans. Inform. Theory 60 (2014), no. 1, 122-132. MR 3150915, https://doi.org/10.1109/TIT.2013.2288639
  • [9] Emmanuel J. Candès, The restricted isometry property and its implications for compressed sensing, C. R. Math. Acad. Sci. Paris 346 (2008), no. 9-10, 589-592 (English, with English and French summaries). MR 2412803 (2009b:65104), https://doi.org/10.1016/j.crma.2008.03.014
  • [10] Emmanuel J. Candès, Justin K. Romberg, and Terence Tao, Stable signal recovery from incomplete and inaccurate measurements, Comm. Pure Appl. Math. 59 (2006), no. 8, 1207-1223. MR 2230846 (2007f:94007), https://doi.org/10.1002/cpa.20124
  • [11] Emmanuel J. Candes and Terence Tao, Near-optimal signal recovery from random projections: universal encoding strategies?, IEEE Trans. Inform. Theory 52 (2006), no. 12, 5406-5425. MR 2300700 (2008c:94009), https://doi.org/10.1109/TIT.2006.885507
  • [12] Antonin Chambolle and Thomas Pock, A first-order primal-dual algorithm for convex problems with applications to imaging, J. Math. Imaging Vision 40 (2011), no. 1, 120-145. MR 2782122 (2012b:94004), https://doi.org/10.1007/s10851-010-0251-1
  • [13] Abdellah Chkifa, Albert Cohen, and Christoph Schwab, High-dimensional adaptive sparse polynomial interpolation and applications to parametric PDEs, Found. Comput. Math. 14 (2014), no. 4, 601-633. MR 3230010, https://doi.org/10.1007/s10208-013-9154-z
  • [14] Abdellah Chkifa, Albert Cohen, and Christoph Schwab, Breaking the curse of dimensionality in sparse polynomial approximation of parametric PDEs, J. Math. Pures Appl. (9) 103 (2015), no. 2, 400-428 (English, with English and French summaries). MR 3298364, https://doi.org/10.1016/j.matpur.2014.04.009
  • [15] Albert Cohen, Mark A. Davenport, and Dany Leviatan, On the stability and accuracy of least squares approximations, Found. Comput. Math. 13 (2013), no. 5, 819-834. MR 3105946, https://doi.org/10.1007/s10208-013-9142-3
  • [16] A. Cohen, R. DeVore, and C. Schwab, Convergence rates of best $N$-term Galerkin approximations for a class of elliptic sPDEs, Found. Comput. Math. 10 (2010), no. 6, 615-646.
  • [17] Albert Cohen, Ronald Devore, and Christoph Schwab, Analytic regularity and polynomial approximation of parametric and stochastic elliptic PDE's, Anal. Appl. (Singap.) 9 (2011), no. 1, 11-47. MR 2763359 (2012a:35049), https://doi.org/10.1142/S0219530511001728
  • [18] J. Dick, F. Kuo, Q. T. Le Gia, D. Nuyens, and C. Schwab, Higher order QMC Galerkin discretization for parametric operator equations, Tech. Report 6, 2014.
  • [19] J. Dick, F. Y. Kuo, Q. T. Le Gia, and C. Schwab, Multi-level higher order QMC Galerkin discretization for affine parametric operator equations, Tech. Report 2014-14, Seminar for Applied Mathematics, ETH Zürich, 2014.
  • [20] Josef Dick, Frances Y. Kuo, Quoc T. Le Gia, Dirk Nuyens, and Christoph Schwab, Higher order QMC Petrov-Galerkin discretization for affine parametric operator equations with random field inputs, SIAM J. Numer. Anal. 52 (2014), no. 6, 2676-2702. MR 3276428, https://doi.org/10.1137/130943984
  • [21] Michael Döhler, Stefan Kunis, and Daniel Potts, Nonequispaced hyperbolic cross fast Fourier transform, SIAM J. Numer. Anal. 47 (2010), no. 6, 4415-4428. MR 2585193, https://doi.org/10.1137/090754947
  • [22] Alireza Doostan and Houman Owhadi, A non-adapted sparse approximation of PDEs with stochastic inputs, J. Comput. Phys. 230 (2011), no. 8, 3015-3034. MR 2774328 (2012f:60239), https://doi.org/10.1016/j.jcp.2011.01.002
  • [23] Martin Eigel, Claude Jeffrey Gittelson, Christoph Schwab, and Elmar Zander, Adaptive stochastic Galerkin FEM, Comput. Methods Appl. Mech. Engrg. 270 (2014), 247-269. MR 3154028, https://doi.org/10.1016/j.cma.2013.11.015
  • [24] Martin Eigel, Claude Jeffrey Gittelson, Christoph Schwab, and Elmar Zander, A convergent adaptive stochastic Galerkin finite element method with quasi-optimal spatial meshes, ESAIM Math. Model. Numer. Anal. 49 (2015), no. 5, 1367-1398. MR 3423228, https://doi.org/10.1051/m2an/2015017
  • [25] J. Fell, F. Krahmer, and H. Rauhut, Recovery of sparse wavelet expansions from random Fourier samples via weighted iterative hard thresholding, in preparation (2015).
  • [26] Simon Foucart and Holger Rauhut, A mathematical introduction to compressive sensing, Applied and Numerical Harmonic Analysis, Birkhäuser/Springer, New York, 2013. MR 3100033
  • [27] T. Gerstner and M. Griebel, Dimension-adaptive tensor-product quadrature, Computing 71 (2003), no. 1, 65-87. MR 2009651 (2004k:65036), https://doi.org/10.1007/s00607-003-0015-5
  • [28] Claude Jeffrey Gittelson, Adaptive wavelet methods for elliptic partial differential equations with random operators, Numer. Math. 126 (2014), no. 3, 471-513. MR 3164144, https://doi.org/10.1007/s00211-013-0572-2
  • [29] Jerrad Hampton and Alireza Doostan, Compressive sampling of polynomial chaos expansions: convergence analysis and sampling strategies, J. Comput. Phys. 280 (2015), 363-386. MR 3273141, https://doi.org/10.1016/j.jcp.2014.09.019
  • [30] Markus Hansen and Christoph Schwab, Analytic regularity and nonlinear approximation of a class of parametric semilinear elliptic PDEs, Math. Nachr. 286 (2013), no. 8-9, 832-860. MR 3066404, https://doi.org/10.1002/mana.201100131
  • [31] Markus Hansen and Christoph Schwab, Sparse adaptive approximation of high dimensional parametric initial value problems, Vietnam J. Math. 41 (2013), no. 2, 181-215. MR 3089816, https://doi.org/10.1007/s10013-013-0011-9
  • [32] V. H. Hoang and Ch. Schwab, Analytic regularity and polynomial approximation of stochastic, parametric elliptic multiscale PDEs, Anal. Appl. (Singap.) 11 (2013), no. 1, 1350001, 50. MR 3019506, https://doi.org/10.1142/S0219530513500012
  • [33] J. D. Jakeman, M. S. Eldred, and K. Sargsyan, Enhancing $\ell_1$-minimization estimates of polynomial chaos expansions using basis selection, J. Comput. Phys. 289 (2015), 18-34. MR 3322818, https://doi.org/10.1016/j.jcp.2015.02.025
  • [34] J. R. Driscoll, D. M. Healy Jr., and D. N. Rockmore, Fast discrete polynomial transforms with applications to data analysis for distance transitive graphs, SIAM J. Comput. 26 (1997), no. 4, 1066-1099.
  • [35] J. Jo, Iterative hard thresholding for weighted sparse approximation, preprint arXiv:1312.3582 (2013).
  • [36] Boris N. Khoromskij and Christoph Schwab, Tensor-structured Galerkin approximation of parametric and stochastic elliptic PDEs, SIAM J. Sci. Comput. 33 (2011), no. 1, 364-385. MR 2783199 (2012e:65273), https://doi.org/10.1137/100785715
  • [37] Frances Y. Kuo, Christoph Schwab, and Ian H. Sloan, Quasi-Monte Carlo finite element methods for a class of elliptic partial differential equations with random coefficients, SIAM J. Numer. Anal. 50 (2012), no. 6, 3351-3374. MR 3024159, https://doi.org/10.1137/110845537
  • [38] G. Migliorati, F. Nobile, E. von Schwerin, and R. Tempone, Analysis of discrete $L^2$ projection on polynomial spaces with random evaluations, Found. Comput. Math. 14 (2014), no. 3, 419-456. MR 3201952, https://doi.org/10.1007/s10208-013-9186-4
  • [39] Siddhartha Mishra, Christoph Schwab, and Jonas Šukys, Multi-level Monte Carlo finite volume methods for uncertainty quantification in nonlinear systems of balance laws, Uncertainty quantification in computational fluid dynamics, Lect. Notes Comput. Sci. Eng., vol. 92, Springer, Heidelberg, 2013, pp. 225-294. MR 3202526, https://doi.org/10.1007/978-3-319-00885-1_6
  • [40] Victor Nistor and Christoph Schwab, High-order Galerkin approximations for parametric second-order elliptic partial differential equations, Math. Models Methods Appl. Sci. 23 (2013), no. 9, 1729-1760. MR 3062926, https://doi.org/10.1142/S0218202513500218
  • [41] F. Nobile, R. Tempone, and C. G. Webster, An anisotropic sparse grid stochastic collocation method for partial differential equations with random input data, SIAM J. Numer. Anal. 46 (2008), no. 5, 2411-2442. MR 2421041 (2009c:65331), https://doi.org/10.1137/070680540
  • [42] F. Nobile, R. Tempone, and C. G. Webster, A sparse grid stochastic collocation method for partial differential equations with random input data, SIAM J. Numer. Anal. 46 (2008), no. 5, 2309-2345. MR 2421037 (2009f:65257), https://doi.org/10.1137/060663660
  • [43] N. Parikh and S. Boyd, Proximal algorithms, Foundations and Trends in Optimization 1 (2013), no. 3, 123-231.
  • [44] Ji Peng, Jerrad Hampton, and Alireza Doostan, A weighted $\ell_1$-minimization approach for sparse polynomial chaos expansions, J. Comput. Phys. 267 (2014), 92-111. MR 3183283, https://doi.org/10.1016/j.jcp.2014.02.024
  • [45] Daniel Potts, Fast algorithms for discrete polynomial transforms on arbitrary grids, Linear Algebra Appl. 366 (2003), 353-370. Special issue on structured matrices: analysis, algorithms and applications (Cortona, 2000). MR 1987729, https://doi.org/10.1016/S0024-3795(02)00592-X
  • [46] Daniel Potts, Gabriele Steidl, and Manfred Tasche, Fast Fourier transforms for nonequispaced data: a tutorial, Modern sampling theory, Appl. Numer. Harmon. Anal., Birkhäuser Boston, Boston, MA, 2001, pp. 247-270. MR 1865690
  • [47] Holger Rauhut, Stability results for random sampling of sparse trigonometric polynomials, IEEE Trans. Inform. Theory 54 (2008), no. 12, 5661-5670. MR 2596806 (2010h:94061), https://doi.org/10.1109/TIT.2008.2006382
  • [48] Holger Rauhut, Compressive sensing and structured random matrices, Theoretical foundations and numerical methods for sparse recovery, Radon Ser. Comput. Appl. Math., vol. 9, Walter de Gruyter, Berlin, 2010, pp. 1-92. MR 2731597 (2012h:94098), https://doi.org/10.1515/9783110226157.1
  • [49] Holger Rauhut and Rachel Ward, Sparse Legendre expansions via $\ell_1$-minimization, J. Approx. Theory 164 (2012), no. 5, 517-533. MR 2903116, https://doi.org/10.1016/j.jat.2012.01.008
  • [50] Holger Rauhut and Rachel Ward, Interpolation via weighted $\ell_1$ minimization, Appl. Comput. Harmon. Anal. 40 (2016), no. 2, 321-351. MR 3440176, https://doi.org/10.1016/j.acha.2015.02.003
  • [51] Theodore J. Rivlin, Chebyshev polynomials, 2nd ed., Pure and Applied Mathematics (New York), John Wiley & Sons, Inc., New York, 1990. From approximation theory to algebra and number theory. MR 1060735
  • [52] Mark Rudelson and Roman Vershynin, On sparse reconstruction from Fourier and Gaussian measurements, Comm. Pure Appl. Math. 61 (2008), no. 8, 1025-1045. MR 2417886 (2009e:94034), https://doi.org/10.1002/cpa.20227
  • [53] Khachik Sargsyan, Cosmin Safta, Habib N. Najm, Bert J. Debusschere, Daniel Ricciuto, and Peter Thornton, Dimensionality reduction for complex models via Bayesian compressive sensing, Int. J. Uncertain. Quantif. 4 (2014), no. 1, 63-93. MR 3249677, https://doi.org/10.1615/Int.J.UncertaintyQuantification.2013006821
  • [54] Claudia Schillings and Christoph Schwab, Sparse, adaptive Smolyak quadratures for Bayesian inverse problems, Inverse Problems 29 (2013), no. 6, 065011, 28. MR 3056084, https://doi.org/10.1088/0266-5611/29/6/065011
  • [55] Cl. Schillings and Ch. Schwab, Sparsity in Bayesian inversion of parametric operator equations, Inverse Problems 30 (2014), no. 6, 065007, 30. MR 3224127, https://doi.org/10.1088/0266-5611/30/6/065007
  • [56] Christoph Schwab, QMC Galerkin discretization of parametric operator equations, Monte Carlo and quasi-Monte Carlo methods 2012, Springer Proc. Math. Stat., vol. 65, Springer, Heidelberg, 2013, pp. 613-629. MR 3145588, https://doi.org/10.1007/978-3-642-41095-6_32
  • [57] C. Schwab and C. Schillings, Sparse quadrature approach to Bayesian inverse problems, Oberwolfach Reports 10 (2013), no. 3, p. 2237.
  • [58] Christoph Schwab and Rob Stevenson, Space-time adaptive wavelet methods for parabolic evolution problems, Math. Comp. 78 (2009), no. 267, 1293-1318. MR 2501051 (2011a:65347), https://doi.org/10.1090/S0025-5718-08-02205-9
  • [59] Gary Tang and Gianluca Iaccarino, Subsampled Gauss quadrature nodes for estimating polynomial chaos expansions, SIAM/ASA J. Uncertain. Quantif. 2 (2014), no. 1, 423-443. MR 3283915, https://doi.org/10.1137/130913511
  • [60] Hoang Tran, Clayton Webster, and Guannan Zhang, Analysis of quasi-optimal polynomial approximations for parameterized PDEs with deterministic and stochastic coefficients, Tech. Report ORNL/TM-2014/469, Oak Ridge National Laboratory, October 2014.
  • [61] Lloyd N. Trefethen, Approximation theory and approximation practice, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2013. MR 3012510
  • [62] D. Needell and J. A. Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples, Appl. Comput. Harmon. Anal. 26 (2009), no. 3, 301-321. MR 2502366 (2010c:94018), https://doi.org/10.1016/j.acha.2008.07.002
  • [63] P. Wojtaszczyk, Stability and instance optimality for Gaussian measurements in compressed sensing, Found. Comput. Math. 10 (2010), no. 1, 1-13. MR 2591836 (2010m:94053), https://doi.org/10.1007/s10208-009-9046-4
  • [64] X. Yang and G. E. Karniadakis, Reweighted $\ell_1$ minimization method for stochastic elliptic differential equations, J. Comput. Phys. 248 (2013), 87-108.



Additional Information

Holger Rauhut
Affiliation: Lehrstuhl C für Mathematik (Analysis), RWTH Aachen University, Pontdriesch 10, 52062 Aachen, Germany
Email: rauhut@mathc.rwth-aachen.de

Christoph Schwab
Affiliation: Seminar for Applied Mathematics, ETH Zurich, Rämistr. 101, 8092 Zürich, Switzerland
Email: christoph.schwab@sam.math.ethz.ch

DOI: https://doi.org/10.1090/mcom/3113
Keywords: Compressive sensing, affine-parametric operator equations, parametric diffusion equation, $s$-term approximation, high-dimensional approximation, tensorized Chebyshev polynomial chaos approximation
Received by editor(s): October 18, 2014
Received by editor(s) in revised form: August 7, 2015, and September 21, 2015
Published electronically: May 17, 2016
Article copyright: © Copyright 2016 American Mathematical Society
