Kernel methods, such as the support vector machine (SVM), are
often formulated as quadratic programming (QP) problems. However,
given m training patterns,
a naive implementation of the QP solver takes O(m^3) training time
and at least O(m^2) space.
Hence,
scaling up these QPs is a major stumbling block in applying
kernel methods to very large data sets,
and a faster alternative to the naive QP solver
is highly desirable.
Recently, by using
approximation algorithms for the minimum enclosing ball (MEB)
problem, we proposed the Core Vector Machine (CVM) algorithm that
is much faster and can handle much larger data sets than existing
SVM implementations. However, the CVM can only be used with
certain kernel functions and kernel methods. For example, the very
popular support vector regression (SVR) cannot be used with the
CVM. In this paper, we introduce the center-constrained MEB
problem and subsequently extend the CVM algorithm. The
*generalized CVM* algorithm can now be used with any
linear/nonlinear kernel and can also be applied to kernel methods
such as SVR and the ranking SVM. Moreover, like the original CVM, its
asymptotic time
complexity is linear in m
and its space complexity is independent of m. Experiments show
that the generalized CVM achieves performance comparable to that of
state-of-the-art SVM and SVR implementations, but is faster
and produces fewer support vectors on very large data sets.
*IEEE Transactions on Neural Networks*, 17(5):1126-1140, Sept 2006.
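
As background for the MEB-based approach described in the abstract, here is a minimal sketch of a (1+eps)-approximation to the minimum enclosing ball using the simple Bădoiu-Clarkson update, one of the core-set-style algorithms this line of work builds on. The function name and parameters are illustrative, not taken from the paper:

```python
import math

def approx_meb(points, eps=0.1):
    """(1+eps)-approximate minimum enclosing ball via the simple
    Badoiu-Clarkson update: repeatedly pull the center toward the
    farthest point. About 1/eps^2 iterations suffice."""
    c = list(points[0])                     # start the center at any point
    iters = int(math.ceil(1.0 / eps ** 2))
    for i in range(1, iters + 1):
        # farthest point from the current center
        p = max(points, key=lambda q: sum((a - b) ** 2 for a, b in zip(q, c)))
        # move the center a 1/(i+1) fraction of the way toward p
        c = [a + (b - a) / (i + 1) for a, b in zip(c, p)]
    # the enclosing radius is the distance to the farthest point
    r = max(math.dist(c, q) for q in points)
    return c, r

# Example: corners of the unit square; the exact MEB has radius sqrt(0.5)
center, radius = approx_meb([(0, 0), (0, 1), (1, 0), (1, 1)], eps=0.1)
```

Each iteration touches every point once, so the cost per iteration is linear in the number of points and the iteration count depends only on eps, which is the source of the linear-time, constant-space behavior the abstract describes.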

PDF:
http://www.cse.ust.hk/~jamesk/papers/tnn06b.pdf
