Learning visual behavior for gesture analysis

  • Andrew D. Wilson
  • Aaron F. Bobick

Research output: Contribution to conference › Paper › peer-review

56 Scopus citations

Abstract

A state-based method for learning visual behavior from image sequences is presented. The technique is novel for its incorporation of multiple representations into the Hidden Markov Model framework. Independent representations of the instantaneous visual input at each state of the Markov model are estimated concurrently with the learning of the temporal characteristics. Measures of the degree to which each representation describes the input are combined to determine an input's overall membership in a state. We exploit two constraints that allow the technique to be applied to view-based gesture recognition: gestures are modal in the space of possible human motion, and gestures are viewpoint-dependent. The recovery of the visual behavior of a number of simple gestures from a small number of low-resolution image sequences is demonstrated.
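The core idea in the abstract — per-state representation models whose membership scores are combined into a single observation likelihood inside an HMM — can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes, for illustration only, that each representation is a 1-D Gaussian over one feature and that memberships are combined by product before a standard forward pass:

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Density of a 1-D Gaussian; stands in for one representation model."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def state_membership(obs, reps):
    """Combine per-representation scores into one membership for a state.

    obs  : feature vector, one entry per representation
    reps : list of (mean, var) pairs, one per representation (assumed form)
    """
    scores = [gaussian_pdf(o, m, v) for o, (m, v) in zip(obs, reps)]
    return float(np.prod(scores))  # product combination is an assumption

def forward(observations, pi, A, state_reps):
    """Standard HMM forward algorithm using the combined memberships
    as observation likelihoods; returns the sequence likelihood."""
    n_states = len(pi)
    alpha = np.zeros((len(observations), n_states))
    for j in range(n_states):
        alpha[0, j] = pi[j] * state_membership(observations[0], state_reps[j])
    for t in range(1, len(observations)):
        for j in range(n_states):
            alpha[t, j] = (alpha[t - 1] @ A[:, j]) * state_membership(
                observations[t], state_reps[j])
    return alpha[-1].sum()
```

In the paper, the per-state representations and the transition structure are learned concurrently; the sketch above only shows how combined memberships slot into HMM inference once those models exist.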

Original language: English
Pages: 229-234
Number of pages: 6
State: Published - 1995
Event: International Symposium on Computer Vision, ISCV'95, Proceedings - Coral Gables, FL, USA
Duration: Nov 21 1995 - Nov 23 1995

Conference

Conference: International Symposium on Computer Vision, ISCV'95, Proceedings
City: Coral Gables, FL, USA
Period: 11/21/95 - 11/23/95
