TY - JOUR
T1 - Extended polynomial growth transforms for design and training of generalized support vector machines
AU - Gangopadhyay, Ahana
AU - Chatterjee, Oindrila
AU - Chakrabartty, Shantanu
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2018/5
Y1 - 2018/5
AB - Growth transformations constitute a class of fixed-point multiplicative update algorithms originally proposed for optimizing polynomial and rational functions over a domain of probability measures. In this paper, we extend this framework to the domain of bounded real variables, making it applicable to optimizing the dual cost function of a generic support vector machine (SVM). The approach can therefore be used not only to train traditional soft-margin binary SVMs, one-class SVMs, and probabilistic SVMs, but also to design novel SVM variants with different types of convex and quasi-convex loss functions. We propose an efficient training algorithm based on polynomial growth transforms, and compare and contrast the properties of different SVM variants using several synthetic and benchmark data sets. Preliminary experiments show that the proposed multiplicative update algorithm is more scalable and converges better than standard quadratic and nonlinear programming solvers. While the formulation and the underlying algorithms are validated here only for SVM-based learning, the proposed approach is general and can be applied to a wide variety of optimization problems and statistical learning models.
KW - Baum-Eagon inequality
KW - Binary classification
KW - Fixed-point optimization
KW - Growth transforms
KW - Loss functions
KW - Maximum margin classifiers
KW - One-class support vector machines (SVMs)
KW - Quasi-convex loss functions
KW - SVMs
KW - Sparse SVMs
UR - http://www.scopus.com/inward/record.url?scp=85018623596&partnerID=8YFLogxK
DO - 10.1109/TNNLS.2017.2690434
M3 - Article
C2 - 28436898
AN - SCOPUS:85018623596
SN - 2162-237X
VL - 29
SP - 1961
EP - 1974
JO - IEEE Trans. Neural Netw. Learn. Syst.
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 5
ER -