Fast Rates for Support Vector Machines

Conference paper, Learning Theory (COLT 2005)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3559)

Abstract

We establish learning rates to the Bayes risk for support vector machines (SVMs) using a regularization sequence \(\lambda_n = n^{-\alpha}\), where \(\alpha \in (0,1)\) is arbitrary. Under a noise condition recently proposed by Tsybakov, these rates can become faster than \(n^{-1/2}\). In order to deal with the approximation error, we present a general concept called the approximation error function, which describes how well the infinite-sample versions of the considered SVMs approximate the data-generating distribution. In addition, we discuss in some detail the relation between the “classical” approximation error and the approximation error function. Finally, for distributions satisfying a geometric noise assumption, we establish learning rates when the RKHS used is a Sobolev space.
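For orientation, the following sketch states the setting in notation standard for this literature; it is an assumption on our side, not quoted from the paper, whose exact formulation may differ. The SVM decision function minimizes a regularized hinge-loss risk over a reproducing kernel Hilbert space \(H\):

% Sketch in standard notation (assumed, not taken verbatim from the paper):
% regularized empirical hinge-loss risk over the RKHS H, with lambda_n = n^{-alpha}.
\[
  f_{D,\lambda_n} \in \operatorname*{arg\,min}_{f \in H}\;
    \lambda_n \lVert f \rVert_H^2
    + \frac{1}{n}\sum_{i=1}^{n} \max\bigl\{0,\; 1 - y_i f(x_i)\bigr\},
  \qquad \lambda_n = n^{-\alpha},\ \alpha \in (0,1).
\]

% Tsybakov's noise condition, in its commonly cited form with exponent q:
\[
  P_X\bigl(\lvert 2\eta(x) - 1 \rvert \le t\bigr) \;\le\; C\, t^{q}
  \quad \text{for all } t > 0,
  \qquad \eta(x) := P(y = 1 \mid x),\ C > 0,\ q \in [0,\infty].
\]

Larger \(q\) means less probability mass near the decision boundary \(\eta(x) = 1/2\), which is what makes classification rates faster than \(n^{-1/2}\) possible.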

References

  1. Devroye, L., Györfi, L., Lugosi, G.: A Probabilistic Theory of Pattern Recognition. Springer, New York (1996)

  2. Yang, Y.: Minimax nonparametric classification—part I and II. IEEE Trans. Inform. Theory 45, 2271–2292 (1999)

  3. Wu, Q., Zhou, D.X.: Analysis of support vector machine classification. Tech. Report, City University of Hong Kong (2003)

  4. Mammen, E., Tsybakov, A.: Smooth discrimination analysis. Ann. Statist. 27, 1808–1829 (1999)

  5. Tsybakov, A.: Optimal aggregation of classifiers in statistical learning. Ann. Statist. 32, 135–166 (2004)

  6. Schölkopf, B., Smola, A.: Learning with Kernels. MIT Press, Cambridge (2002)

  7. Steinwart, I., Scovel, C.: Fast rates for support vector machines using Gaussian kernels. Ann. Statist. submitted (2004), http://www.c3.lanl.gov/~ingo/publications/ann-04a.pdf

  8. Steinwart, I.: Consistency of support vector machines and other regularized kernel machines. IEEE Trans. Inform. Theory 51, 128–142 (2005)

  9. Steinwart, I., Scovel, C.: Fast rates to Bayes for kernel machines. In: Saul, L.K., Weiss, Y., Bottou, L. (eds.) Advances in Neural Information Processing Systems, vol. 17, pp. 1345–1352. MIT Press, Cambridge (2005)

  10. Edmunds, D., Triebel, H.: Function Spaces, Entropy Numbers, Differential Operators. Cambridge University Press, Cambridge (1996)

  11. Zhang, T.: Statistical behaviour and consistency of classification methods based on convex risk minimization. Ann. Statist. 32, 56–134 (2004)

  12. Rockafellar, R.: Convex Analysis. Princeton University Press, Princeton (1970)

Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Steinwart, I., Scovel, C. (2005). Fast Rates for Support Vector Machines. In: Auer, P., Meir, R. (eds) Learning Theory. COLT 2005. Lecture Notes in Computer Science, vol 3559. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11503415_19

  • DOI: https://doi.org/10.1007/11503415_19

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-26556-6

  • Online ISBN: 978-3-540-31892-7

  • eBook Packages: Computer Science (R0)
