Our brain constantly untangles the various competing sounds present in the environment and attributes them to distinct perceptual streams, analogous to the distinct sound sources in the listener's environment (Bregman, 1990; Moore & Gockel, 2002; Carlyon, 2004). The process by which the nervous system makes sense of complex patterns of acoustic stimulation is called auditory scene analysis (Bregman, 1990). The goal of auditory scene analysis is thus to segregate the sounds arising from the various sources in the environment, leading to an internal representation of auditory streams. Stream segregation refers to the process by which sounds from one source are grouped together, while sounds from other sources are kept separate (Bregman, 1990).
Szalárdy et al. (2013) reported that a primary cue for stream segregation lies in differences in amplitude modulation (AM), which interact with other primary cues such as frequency and location differences. Separation in AM rate can serve as a cue for segregating auditory streams: as the separation in AM rate increased, the duration of segregated perceptual phases increased and, correspondingly, the duration of integrated phases decreased. A study by Vliegen, Moore and Oxenham (1999) suggests that spectral information about the sound source plays a dominant role in segregation, while periodicity information can also have some influence. Variations in temporal fluctuation rate can form the sole basis for segregating sequential sounds in the absence of spectral cues (Grimault, Bacon & Micheyl, 2002). Studies have also examined the role of visual cues in easing the segregation of a melody from distracter notes among musicians and non-musicians (Marozeau et al., 2010). These findings are consistent with theories that assign an important role to central processes in the auditory streaming mechanism.
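The AM-rate cue described above can be illustrated with a minimal stimulus sketch: two tone streams that share the same carrier and differ only in modulation rate. The carrier frequency, modulation rates, modulation depth, and durations below are illustrative assumptions, not parameters taken from the cited studies.

```python
import numpy as np

FS = 44100  # sampling rate (Hz); assumed value for illustration

def am_tone(carrier_hz, mod_hz, dur_s, depth=1.0, fs=FS):
    """Sinusoidally amplitude-modulated tone:
    (1 + depth * sin(2*pi*fm*t)) * sin(2*pi*fc*t)."""
    t = np.arange(int(dur_s * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# Both streams use a 1 kHz carrier; only the AM rate differs. A larger
# separation between the two rates (here 4 Hz vs 64 Hz) tends to promote a
# segregated percept; a smaller separation tends toward integration.
stream_a = am_tone(1000.0, 4.0, 0.5)
stream_b = am_tone(1000.0, 64.0, 0.5)
sequence = np.concatenate([stream_a, stream_b] * 4)  # ABAB... alternation
```

Because the spectra of the two tones are centred on the same carrier, any perceptual separation of such a sequence must rest on the envelope (AM-rate) difference rather than on a spectral difference.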
Generally, detecting a signal in a fluctuating background noise is easier than detecting the same signal in a steady background noise, notably when the signal frequency differs from the center frequency of the masker. This can be attributed to the listener's ability to "listen in the dips" of the fluctuating background noise. Experiments by Schooneveldt and Moore (1987) and Moore and Glasberg (1987) support the view that dip listening is partly dependent on cues available from the temporal fine structure.
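The dip-listening advantage can be made concrete with a small numerical sketch: a faint tone is masked by either a steady noise or the same noise square-wave modulated at 10 Hz, and the signal-to-masker ratio is compared overall versus within the masker dips. The masker rate, modulation depth (~40 dB), and signal level are illustrative assumptions, not values from the cited experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000                      # sampling rate (Hz); assumed for illustration
t = np.arange(fs) / fs          # 1 s of time samples

noise = rng.standard_normal(fs)
steady_masker = noise.copy()

# Square-wave 10-Hz modulation: the masker drops ~40 dB in its "dips"
gate = np.where(np.sin(2 * np.pi * 10 * t) > 0, 1.0, 0.01)
fluct_masker = noise * gate

# A faint 2 kHz tone, present throughout
signal = 0.05 * np.sin(2 * np.pi * 2000 * t)

def snr_db(sig, masker):
    """Long-term signal-to-masker power ratio in dB."""
    return 10 * np.log10(np.mean(sig ** 2) / np.mean(masker ** 2))

dips = gate < 1.0
snr_steady = snr_db(signal, steady_masker)              # poor overall SNR
snr_in_dips = snr_db(signal[dips], fluct_masker[dips])  # far better locally
```

The local SNR inside the dips is tens of decibels higher than the long-term SNR against the steady masker, which is the opportunity a dip-listening strategy exploits.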
NEED:
In a behavioral concurrent-vowel identification task, adding temporal fine structure information did not produce any significant improvement in performance.
The present study focuses on adding electrophysiological correlates to the behavioral study, to indicate any improvement in performance resulting from the addition of temporal fine structure.
AIM:
To investigate the physiological coding of concurrent vowels using the Frequency Following Response (FFR) and Object Related Negativity (ORN).