How does the auditory system integrate information over time and frequency to get a coherent pitch percept?
In our daily lives, most of the sounds we encounter are harmonic complex tones. Such tones consist of several pure tone components whose frequencies are integer multiples of a common fundamental frequency (F0). Notably, a component at the F0 itself is not necessary to evoke a pitch percept at that frequency; when the components are presented synchronously, we refer to this percept as residual pitch. However, a similar percept can be elicited by presenting the components briefly and individually, even with gaps of silence between successive harmonics, provided sufficient background noise is present; in this case of sequential component presentation, we refer to the percept as virtual pitch. Currently, our research aims to elucidate how spectral and temporal parameters interact in the perception of pitch evoked by “clouds” of complex tone components, particularly in adverse listening conditions and in the presence of multiple complex tones.
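The missing-fundamental stimulus described above can be illustrated with a short synthesis sketch. The snippet below (an illustrative example, not a stimulus from our experiments; the parameter values are assumptions) sums pure tones at harmonics 3 through 6 of a 200 Hz F0, so the waveform contains no energy at 200 Hz, yet listeners would typically report a residual pitch at the F0.

```python
import numpy as np

def harmonic_complex(f0, harmonics, fs=16000, dur=0.5):
    """Synthesize a harmonic complex tone: a sum of pure tones at
    integer multiples of the fundamental frequency f0 (in Hz)."""
    t = np.arange(int(fs * dur)) / fs
    return sum(np.sin(2 * np.pi * n * f0 * t) for n in harmonics)

f0 = 200.0
fs = 16000
# Harmonics 3-6 only: the component at the F0 itself is absent,
# yet the percept at 200 Hz (residual pitch) remains.
tone = harmonic_complex(f0, harmonics=[3, 4, 5, 6], fs=fs)

# The magnitude spectrum confirms there is no energy at the F0 bin.
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), d=1 / fs)
```

Inspecting `spectrum` shows peaks only at 600, 800, 1000, and 1200 Hz; the 200 Hz bin is empty, which is precisely what makes the residual pitch percept interesting.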