Forward-Decoding Kernel-Based Phone Sequence Recognition

Shantanu Chakrabartty, Gert Cauwenberghs

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution, peer-reviewed


Abstract

Forward decoding kernel machines (FDKM) combine large-margin classifiers with hidden Markov models (HMM) for maximum a posteriori (MAP) adaptive sequence estimation. State transitions in the sequence are conditioned on observed data using a kernel-based probability model trained with a recursive scheme that deals effectively with noisy and partially labeled data. Training over very large datasets is accomplished using a sparse probabilistic support vector machine (SVM) model based on quadratic entropy, and an on-line stochastic steepest descent algorithm. For speaker-independent continuous phone recognition, FDKM trained over 177,080 samples of the TIMIT database achieves 80.6% recognition accuracy over the full test set, without use of a prior phonetic language model.
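The core of the scheme described in the abstract is a forward recursion in which the state transition matrix is re-estimated at every frame from the observation through a kernel expansion. The sketch below illustrates that recursion in Python/NumPy under stated assumptions: an RBF kernel, a per-transition weight array standing in for the trained probabilistic SVMs, and softmax normalization over destination states. These names and parameterizations are illustrative only and are not taken from the paper or its implementation.

import numpy as np

def rbf_kernel(x, support, gamma=1.0):
    # k(x, s) for each support vector s; support has shape (n_sv, dim)
    d = support - x
    return np.exp(-gamma * np.sum(d * d, axis=1))

def transition_probs(x, support, weights, biases, gamma=1.0):
    # Score every transition j -> i from a kernel expansion
    # (weights has shape (n_states, n_states, n_sv)), then normalize
    # over destination states i so each column of P sums to one:
    # P[i, j] = P(q_t = i | q_{t-1} = j, x_t).
    k = rbf_kernel(x, support, gamma)
    scores = weights @ k + biases
    exps = np.exp(scores - scores.max(axis=0, keepdims=True))
    return exps / exps.sum(axis=0, keepdims=True)

def forward_decode(X, support, weights, biases, gamma=1.0):
    # Forward recursion: alpha_i[t] = sum_j P[i, j](x_t) * alpha_j[t-1];
    # the per-frame MAP state estimate is argmax_i alpha_i[t].
    n_states = biases.shape[0]
    alpha = np.full(n_states, 1.0 / n_states)
    path = []
    for x in X:
        P = transition_probs(x, support, weights, biases, gamma)
        alpha = P @ alpha
        alpha /= alpha.sum()  # rescale to avoid numerical underflow
        path.append(int(np.argmax(alpha)))
    return path

Because only the forward pass is used, the estimate at frame t is available as soon as that frame arrives, which is what makes this style of decoding suitable for on-line, adaptive sequence estimation; training of the kernel weights (the recursive MAP scheme and quadratic-entropy SVM model) is not shown here.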

Original language: English
Title of host publication: NIPS 2002
Subtitle of host publication: Proceedings of the 15th International Conference on Neural Information Processing Systems
Editors: Suzanna Becker, Sebastian Thrun, Klaus Obermayer
Publisher: MIT Press Journals
Pages: 1165-1172
Number of pages: 8
ISBN (Electronic): 0262025507, 9780262025508
State: Published - 2002
Event: 15th International Conference on Neural Information Processing Systems, NIPS 2002 - Vancouver, Canada
Duration: Dec 9, 2002 - Dec 14, 2002

Publication series

Name: NIPS 2002: Proceedings of the 15th International Conference on Neural Information Processing Systems

Conference

Conference: 15th International Conference on Neural Information Processing Systems, NIPS 2002
Country/Territory: Canada
City: Vancouver
Period: 12/9/02 - 12/14/02
