Abstract
This paper focuses on the least squares regularized regression learning algorithm in a setting of unbounded sampling. Our aim is to establish learning rates by means of integral operators. By imposing a moment hypothesis on the unbounded sampling outputs and a function space condition associated with the marginal distribution ρ_X, we derive learning rates consistent with those obtained in the bounded sampling setting.
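As background, the estimator studied in the abstract is the standard regularized least squares (kernel ridge) regressor in a reproducing kernel Hilbert space. The following is a minimal numerical sketch, not the paper's analysis: it assumes a Gaussian kernel and illustrative parameter values (`sigma`, `lam`), and uses the representer theorem to reduce the minimization to a linear system.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Gram matrix K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train(X, y, lam=0.1, sigma=1.0):
    # Representer theorem: the minimizer of
    #   (1/m) sum_i (f(x_i) - y_i)^2 + lam ||f||_K^2
    # has the form f = sum_i alpha_i K(x_i, .), with
    #   (K + lam * m * I) alpha = y.
    m = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def predict(X_train, alpha, X_test, sigma=1.0):
    return gaussian_kernel(X_test, X_train, sigma) @ alpha

# Toy sample: noisy sine on [0, 1]
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=50)
alpha = train(X, y, lam=1e-3, sigma=0.2)
y_hat = predict(X, alpha, X, sigma=0.2)
```

The regularization parameter λ trades off data fit against the RKHS norm; the learning rates discussed in the paper describe how the error of this estimator decays as the sample size grows, under suitable choices of λ.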
References
Bennett C., Sharpley R.: Interpolation of Operators. Academic Press, Boston (1988)
Bennett G.: Probability inequalities for the sum of independent random variables. J. Am. Stat. Assoc. 57, 33–45 (1962)
Caponnetto A., De Vito E.: Optimal rates for regularized least squares algorithm. Found. Comput. Math. 7, 331–368 (2007)
Cucker F., Smale S.: On the mathematical foundations of learning. Bull. Am. Math. Soc. 39, 1–49 (2002)
Cucker F., Zhou D.X.: Learning Theory: An Approximation Theory Viewpoint. Cambridge University Press, Cambridge (2007)
Evgeniou T., Pontil M., Poggio T.: Regularization networks and support vector machines. Adv. Comput. Math. 13, 1–50 (2000)
Guo, Z.C., Zhou, D.X.: Concentration estimates for learning with unbounded sampling (preprint, 2010)
Mendelson S., Neeman J.: Regularization in kernel learning. Ann. Stat. 38, 526–565 (2010)
Pinelis I.F., Sakhanenko A.I.: Remarks on inequalities for probabilities of large deviations. Theory Probab. Appl. 30, 143–148 (1985)
Smale S., Zhou D.X.: Shannon sampling II. Connection to learning theory. Appl. Comput. Harmonic Anal. 19, 285–312 (2005)
Smale S., Zhou D.X.: Learning theory estimates via integral operators and their approximations. Constr. Approx. 26, 153–172 (2007)
Smale S., Zhou D.X.: Geometry on probability spaces. Constr. Approx. 30, 311–323 (2009)
Smale S., Zhou D.X.: Online learning with Markov sampling. Anal. Appl. 7, 87–113 (2009)
Steinwart, I., Hush, D., Scovel, C.: Optimal rates for regularized least squares regression. In: Dasgupta, S., Klivans, A. (eds.) Proceedings of the 22nd Annual Conference on Learning Theory, pp. 79–93 (2009)
Sun H.W., Wu Q.: Regularized least square regression with dependent samples. Adv. Comput. Math. 11, 235–249 (2008)
Sun H.W., Wu Q.: A note on application of integral operator in learning theory. Appl. Comput. Harmonic Anal. 26, 416–421 (2009)
Wang C., Zhou D.X.: Optimal learning rates for least square regularized regression with unbounded sampling. J. Complexity 27, 55–67 (2011)
Wu Q., Ying Y.M., Zhou D.X.: Learning rates of least-square regularized regression. Found. Comput. Math. 6, 171–192 (2006)
Ye G.B., Zhou D.X.: SVM learning and L^p approximation by Gaussians on Riemannian manifolds. Anal. Appl. 7, 309–339 (2009)
Yurinsky Y.: Sums and Gaussian Vectors. Lecture Notes in Mathematics. Springer, Berlin, Heidelberg (1995)
Zhang T.: Leave-one-out bounds for kernel methods. Neural Comput. 15, 1397–1437 (2003)
Zhou D.X.: Capacity of reproducing kernel spaces in learning theory. IEEE Trans. Inform. Theory 49, 1743–1752 (2003)
Communicated by L. Littlejohn and J. Stochel.
Cite this article
Lv, SG., Feng, YL. Integral Operator Approach to Learning Theory with Unbounded Sampling. Complex Anal. Oper. Theory 6, 533–548 (2012). https://doi.org/10.1007/s11785-011-0139-0
Keywords
- Least square regularized regression
- Reproducing kernel Hilbert spaces
- Integral operator
- Capacity independent error bounds