This article describes further study of the finding reported by Green et al. [J. Acoust. Soc. Am. 73, 639–643 (1983)] and others that, under certain conditions, the threshold for detecting an intensity increment to the center tone of a multitone reference spectrum decreased as the number of nonsignal tones increased. That result was considered remarkable because critical-band theory would predict that these nonsignal tones, spaced outside the “critical band” containing the signal, would have no effect on, or at most slightly decrease, within-band detectability, and certainly could not account for the improved detectability found in the study cited above. Recently, Henn and Turner [J. Acoust. Soc. Am. 88, 126–131 (1990)] were unable to replicate that result, concluding that the phenomenon exists only in “limited conditions” and is “highly individual” in nature. Further, they speculated that the most likely reason for the discrepancy between their study and previous studies was the selection and/or training of the observers. The present study addressed the effects of the amount of subject training on the finding of Green et al. while controlling for potential effects of stimulus order. Specifically, for a group of three “naive” listeners, thresholds were measured for 3-, 7-, and 21-tone inharmonic complexes as a function of the amount of practice in a mixed-block design. In all cases the group mean thresholds decreased as the number of nonsignal tones increased, both initially and after extensive practice, for both fixed- and roving-level conditions. Thus the effect does not appear to be an artifact of the amount or order of training subjects receive. The possible roles of subject sample size and the magnitude of individual differences in obtaining the effect remain an open question. Two hypotheses suggested to account for the improvement in threshold with increasing number of nonsignal tones were evaluated.
The hypotheses were represented by simple mathematical models, referred to as the “multiple-comparison” and “pitch-cue” models. The predictions of both models were compared with the results of a series of detection experiments in which the independent variables were the number of nonsignal tones and the amount of random, within-trial “amplitude perturbation” [cf. Kidd et al., J. Acoust. Soc. Am. 79, 1045–1053 (1986)] of the nonsignal tones. Neither model, as applied, provided a satisfactory account of the effects of the main variables of number of tones and amount of perturbation. Finally, the results provided qualified support for the explanation offered by Kidd et al. [J. Acoust. Soc. Am. 86, 1310–1317 (1989)] that the lack of improvement in signal threshold with increasing noise bandwidth found in a tone-in-noise paradigm resulted from limits on performance imposed by the spectral variability of the random noise.