
Integral Operator Approach to Learning Theory with Unbounded Sampling

Abstract

This paper focuses on the least squares regularized regression learning algorithm in a setting of unbounded sampling. Our goal is to establish learning rates by means of integral operators. By imposing a moment hypothesis on the unbounded sampling outputs and a function space condition associated with the marginal distribution ρ_X, we derive learning rates that are consistent with those in the bounded sampling setting.
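For context, the display below is a minimal sketch of the scheme the abstract refers to, written in notation standard in this literature (the estimator f_{z,λ}, the reproducing kernel Hilbert space H_K of a Mercer kernel K, and the integral operator L_K); the exact hypotheses and constants used in the paper may differ. Given a sample z = {(x_i, y_i)}_{i=1}^m drawn from a probability measure ρ on X × Y and a regularization parameter λ > 0, the regularized least squares estimator and the integral operator associated with the marginal distribution ρ_X are

\[
f_{\mathbf{z},\lambda} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}_K}
\left\{ \frac{1}{m} \sum_{i=1}^{m} \bigl( f(x_i) - y_i \bigr)^2 + \lambda \| f \|_K^2 \right\},
\qquad
(L_K f)(x) \;=\; \int_X K(x,t)\, f(t)\, d\rho_X(t).
\]

"Unbounded sampling" means the outputs y are not assumed to lie in a fixed bounded interval; a typical moment hypothesis in this setting asks for constants M, C > 0 such that

\[
\int_Y |y|^{\ell}\, d\rho(y \mid x) \;\le\; C\, \ell!\, M^{\ell}
\qquad \text{for all integers } \ell \ge 2 \text{ and almost every } x \in X.
\]

Learning rates are then obtained by comparing L_K with its empirical counterpart built from the sample, which is the integral operator approach named in the title.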


Author information

Corresponding author

Correspondence to Shao-Gao Lv.

Additional information

Communicated by L. Littlejohn and J. Stochel.

About this article

Cite this article

Lv, SG., Feng, YL. Integral Operator Approach to Learning Theory with Unbounded Sampling. Complex Anal. Oper. Theory 6, 533–548 (2012). https://doi.org/10.1007/s11785-011-0139-0
