Classification of temporal sequences via prediction using the simple recurrent neural network

Lalit Gupta, Mark McAvoy, James Phegley

Research output: Contribution to journal › Article › peer-review

33 Scopus citations

Abstract

An approach to classifying temporal sequences using the simple recurrent neural network (SRNN) is developed in this paper. The classification problem is formulated as a component prediction problem, and two training methods are described to train a single SRNN to predict the components of temporal sequences belonging to multiple classes. Issues related to the selection of the dimension of the context vector and the influence of the context vector on classification are identified and investigated. The use of a different initial context vector for each class is proposed as a means to improve classification, and a classification rule that incorporates the different initial context vectors is formulated. A systematic method in which the SRNN is trained with noisy exemplars is developed to enhance the classification performance of the network. A 4-class localized object classification problem is selected to demonstrate that (a) a single SRNN can be trained to classify real multi-class sequences via component prediction, (b) the classification accuracy can be improved by using a distinguishing initial context vector for each class, and (c) the classification accuracy of the SRNN can be improved significantly by using the distinguishing initial context vector in conjunction with the systematic re-training method. It is concluded that, through the approach developed in this paper, the SRNN can robustly classify temporal sequences that may have unequal numbers of components.
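
As a rough illustration of the prediction-based classification rule summarized in the abstract, the sketch below implements an Elman-style SRNN that predicts the next component of a sequence and assigns a test sequence to the class whose distinguishing initial context vector yields the smallest accumulated prediction error. The layer sizes, tanh activation, squared-error measure, and randomly chosen initial context vectors are illustrative assumptions rather than the paper's exact formulation, and training (including the noisy-exemplar re-training) is omitted.

```python
import numpy as np


class SimpleRecurrentNet:
    """Minimal Elman-style SRNN that predicts the next component of a sequence.

    A sketch only: weight initialization, tanh activation, and the squared-error
    measure are assumptions, not the formulation used in the paper.
    """

    def __init__(self, input_dim, context_dim, seed=None):
        rng = np.random.default_rng(seed)
        # Input-to-hidden, context-to-hidden, and hidden-to-output weights.
        self.W_xh = rng.normal(scale=0.1, size=(context_dim, input_dim))
        self.W_ch = rng.normal(scale=0.1, size=(context_dim, context_dim))
        self.b_h = np.zeros(context_dim)
        self.W_hy = rng.normal(scale=0.1, size=(input_dim, context_dim))
        self.b_y = np.zeros(input_dim)

    def prediction_error(self, sequence, initial_context):
        """Accumulated squared error when predicting x[t+1] from x[t] and the context."""
        context = np.asarray(initial_context, dtype=float)
        error = 0.0
        for t in range(len(sequence) - 1):
            hidden = np.tanh(self.W_xh @ sequence[t] + self.W_ch @ context + self.b_h)
            prediction = self.W_hy @ hidden + self.b_y
            error += np.sum((sequence[t + 1] - prediction) ** 2)
            context = hidden  # Elman update: the context copies the hidden state.
        return error


def classify(net, sequence, initial_contexts):
    """Assign the sequence to the class whose initial context vector
    yields the smallest accumulated prediction error."""
    errors = {label: net.prediction_error(sequence, c0)
              for label, c0 in initial_contexts.items()}
    return min(errors, key=errors.get)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    net = SimpleRecurrentNet(input_dim=2, context_dim=8, seed=0)
    # One distinguishing initial context vector per class (4 classes here).
    initial_contexts = {k: rng.normal(size=8) for k in range(4)}
    sequence = rng.normal(size=(25, 2))  # a 25-component test sequence
    print(classify(net, sequence, initial_contexts))
```

Because the classification rule only compares prediction errors, sequences with different numbers of components are handled naturally: each class's error is accumulated over however many components the test sequence contains.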

Original language: English
Pages (from-to): 1759-1770
Number of pages: 12
Journal: Pattern Recognition
Volume: 33
Issue number: 10
DOIs
State: Published - 2000
