When movements are perturbed in adaptation tasks, humans and other animals show incomplete compensation, tolerating small but sustained residual errors that persist despite repeated trials. State-space models explain this residual asymptotic error as an interplay between learning from error and reversion to baseline, a form of forgetting. Previous work using zero-error-clamp trials has shown that reversion to baseline is not obligatory and can be overcome by manipulating feedback. We posited that novel error-clamp trials, in which feedback is constrained but has nonzero error and variance, might serve as a contextual cue for the recruitment of other learning mechanisms that would then close the residual error. When error clamps were nonzero and had zero variance, human subjects changed their learning policy, using exploration in response to the residual error, despite their willingness to sustain such an error during the training block. In contrast, when the distribution of feedback in clamp trials was naturalistic, with a persistent mean error but also with variance, a state-space model accounted for behavior in clamps, even in the absence of task success. Therefore, when the distribution of errors matched those during training, state-space models captured behavior during both adaptation and error-clamp trials because error-based learning dominated; when the distribution of feedback was altered, other forms of learning were triggered that did not follow the state-space model dynamics exhibited during training. The residual error during adaptation appears attributable to an error-dependent learning process that has the property of reversion toward baseline and that can suppress other forms of learning.
- Motor learning
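The state-space account summarized above reduces to a single-state update rule: on each trial a fraction of the adapted state is retained (reversion toward baseline) and a fraction of the experienced error is learned. A minimal sketch, with illustrative parameter values (`a`, `b`, and the perturbation `p` are hypothetical, not fitted to any data), shows why this process converges to a nonzero residual error:

```python
# Single-state state-space model of adaptation (illustrative sketch).
a = 0.98   # retention factor (< 1 produces reversion to baseline, i.e., forgetting)
b = 0.15   # error sensitivity (learning from error)
p = 1.0    # perturbation magnitude (normalized units)

x = 0.0    # adapted state, starting at baseline
for _ in range(500):       # training trials
    e = p - x              # error experienced on this trial
    x = a * x + b * e      # retain part of the state, learn from part of the error

# At steady state, x* = a*x* + b*(p - x*), so x* = b*p / (1 - a + b)
# and the residual error e* = p*(1 - a) / (1 - a + b) stays nonzero whenever a < 1.
residual = p - x
print(residual)
```

With these values the asymptotic residual error is p(1 − a)/(1 − a + b) ≈ 0.118p: adaptation stalls short of full compensation not because error stops driving learning, but because trial-to-trial forgetting exactly balances learning at that point.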