9 Cards in this Set


Cortical organization of speech processing (goal)

Distinguishes between speech recognition tasks (which involve lexical access processes) and speech perception tasks (which require listeners to maintain sublexical representations in an active state during the task and involve some degree of executive control and working memory).


-in everyday usage, speech perception = any task involving aurally presented speech

Ventral stream = sound --> meaning

• Mapping acoustic speech input onto conceptual and semantic representations
• Involves multiple levels of computation and representation (distinctive features, segments, syllabic structure, phonological word forms, grammatical features, and semantic information)
• There must be redundant computational mechanisms (parallel processing) to exploit the multiple, partially redundant spectral and temporal cues in the speech signal (a toy sketch of this idea follows below)
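
A minimal illustrative sketch of the redundancy point above, in Python. Everything here (the cue names, noise levels, and threshold) is invented for illustration and is not part of the model; it only shows why two partially redundant, noisy cues combined in parallel beat either cue alone.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical binary voicing feature signaled by two partially
    # redundant cues, e.g. a temporal cue (voice-onset time) and a
    # spectral cue (F1 onset). Noise levels are arbitrary.
    true_feature = rng.integers(0, 2, size=2000)
    temporal_cue = true_feature + rng.normal(0, 0.8, size=2000)
    spectral_cue = true_feature + rng.normal(0, 0.8, size=2000)

    def accuracy(evidence):
        # Classify as "voiced" when the evidence exceeds the midpoint.
        return np.mean((evidence > 0.5) == true_feature)

    print("temporal cue alone :", accuracy(temporal_cue))
    print("spectral cue alone :", accuracy(spectral_cue))
    # Parallel integration: averaging the cues reduces the noise, so
    # recognition degrades gracefully if either cue is weakened.
    print("both cues combined :", accuracy((temporal_cue + spectral_cue) / 2))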


Bilateral ventral organization

-there is evidence for at least one pathway in each hemisphere that can process speech sounds sufficiently well to access the mental lexicon
-bilateral damage to superior temporal lobe regions is associated with severe deficits in speech recognition, consistent with the idea that speech recognition systems are bilaterally organized
• Supported by functional imaging evidence


Dorsal stream = sound --> action

• supports an interface with the motor system
• there must be a neural mechanism that both codes and maintains instances of speech sounds and can use these sensory traces to guide the tuning of speech gestures so that the sounds are accurately reproduced
• auditory-motor interactions in the acquisition of new vocabulary involve generating a sensory representation of the new word that codes the sequence of segments or syllables → this can be used to guide motor articulatory sequences (see the feedback-loop sketch below)
• new, low-frequency, or more complex words might require incremental motor coding and thus more sensory guidance than known, high-frequency, or simpler words, which might become more ‘automated’ as motor chunks that require little sensory guidance
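
A toy sketch of the kind of auditory-motor loop described above, with an invented linear "vocal tract" mapping and gain (none of these numbers come from the model): a stored auditory target for a new word guides iterative correction of a motor parameter until the produced sound matches the target.

    # Toy auditory-motor feedback loop: a stored auditory target guides
    # iterative tuning of a single articulatory parameter.
    auditory_target = 1200.0   # e.g. a target formant frequency in Hz
    motor_param = 0.2          # initial articulatory setting (arbitrary units)

    def produce(m):
        # Stand-in for articulation + acoustics: motor setting -> sound.
        return 1000.0 + 800.0 * m

    for step in range(10):
        heard = produce(motor_param)
        error = auditory_target - heard   # compare sensory trace to target
        motor_param += 0.001 * error      # small corrective update
        print(f"step {step}: produced {heard:7.1f} Hz, error {error:6.1f}")
    # With practice the error shrinks; a well-learned (high-frequency) word
    # could then run as a motor chunk needing little sensory guidance.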


superior temporal sulcus

activated by language tasks that require access to phonological information, including both the perception and production of speech, and during active maintenance of phonemic information


-Greater activation for dense vs. sparse phonological neighborhoods was found bilaterally in the superior temporal sulcus during spoken word recognition (Okada & Hickok, 2006; but see also Graves et al., 2008), in line with the hypothesis that this region is involved in lexical access (a toy neighborhood-density count follows below)
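
To make "dense vs. sparse neighborhoods" concrete, here is a small sketch with a made-up mini-lexicon (letters stand in for phonemes): a word's phonological neighborhood density is the number of lexicon words differing from it by exactly one segment substitution, insertion, or deletion.

    # Toy neighborhood-density count; the mini-lexicon is invented.
    LEXICON = {"cat", "bat", "hat", "cut", "cart", "at", "dog", "fog"}

    def is_neighbor(a, b):
        if a == b:
            return False
        if len(a) == len(b):                  # one substitution
            return sum(x != y for x, y in zip(a, b)) == 1
        if abs(len(a) - len(b)) == 1:         # one insertion/deletion
            short, long_ = sorted((a, b), key=len)
            for i in range(len(long_)):
                if long_[:i] + long_[i + 1:] == short:
                    return True
        return False

    def density(word):
        return sum(is_neighbor(word, w) for w in LEXICON)

    print(density("cat"))   # dense: 5 neighbors (bat, hat, cut, at, cart)
    print(density("dog"))   # sparse: only "fog"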


Temporary anesthesia of the hemispheres

-Photos were presented to test whether the participant had trouble understanding the sound of a word or its meaning: the foil picture either sounded like the target or was semantically related to it
-the experiment was carried out with each hemisphere anesthetized in turn
-errors were more likely after left-hemisphere anesthesia
-20% of errors occurred with the semantically related foil (i.e., those errors were semantic, not phonemic)
-both hemispheres exhibit good capacity for understanding spoken words


Transcranial Magnetic Stimulation

-TMS was applied to the motor cortex areas that control the lips and the tongue
-Task: discriminate ba/pa (produced with the lips) from da/ta (produced with the tongue)
-TMS is usually described as something that impairs performance of a task, but it can also facilitate some tasks and thus speed up responses, as happened in this experiment
-when TMS was applied over the tongue area, the effect was specific to /t/ and /d/, which require tongue movement
-when it was applied over the lip area, /b/ and /p/ (labial consonants, which require the lips) were affected
-because stimulating the articulatory network affects the auditory perception of these consonants, there is a close interaction between the articulatory network and consonant perception


Segmental vs suprasegmental explained by dual-stream

• The processing of segmental information requires a time window of ~20-30 ms, whereas syllabic and prosodic information involves longer intervals (~150-300 ms).


• The dual-stream model proposes separate streams processing stimuli at each of these time scales, with integration of information at the lexical level (see the worked conversion below).
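
A quick worked conversion, using only the window durations quoted above: the modulation rate a temporal integration window tracks is roughly the reciprocal of its duration.

    # Convert the integration windows quoted above into the modulation
    # rates they correspond to (rate = 1 / window duration).
    for label, (lo, hi) in [("segmental, short window", (20, 30)),
                            ("syllabic/prosodic, long window", (150, 300))]:
        print(f"{label}: {lo}-{hi} ms -> {1000 / hi:.0f}-{1000 / lo:.0f} Hz")
    # ~20-30 ms corresponds to ~33-50 Hz (gamma-range sampling), while
    # ~150-300 ms corresponds to ~3-7 Hz (theta range, near syllable rate).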

Prosodic features

Friederici and collaborators contrasted normal speech with delexicalized speech (synthesized speech in which intonation is preserved but the words are filtered out). Activation was stronger for normal speech in the left hemisphere (LH) and for delexicalized speech in the right hemisphere (RH).


-The right hemisphere contributes primarily to the perception of prosodic information


--When you contrast activation associated with rhymes and tones, the areas that respond to tones are in the right hemisphere, suggesting that tones are processed predominantly by the right hemisphere (which makes sense, since pitch is prosodic information)


-You can vary the consonants by changing the formants
-In parallel, the researchers changed other acoustic features to create the perception that the phonemes were produced by different speakers
-results: areas that respond to speaker variation were observed in the right hemisphere but not the left, although both hemispheres respond to variation in the consonant
-both hemispheres are important for consonant identification, but only the right hemisphere is important for recognizing who is speaking
-the right hemisphere processes information relevant to prosody (suprasegmental features of speech); a toy formant-synthesis sketch follows below
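
A hedged sketch of the stimulus logic in this last card (all frequencies are invented, and vowel-like sounds are used for simplicity even though the study used consonants): building sounds from formants shows how changing the formant frequencies changes the phoneme percept, while changing the fundamental frequency (pitch) leaves the phoneme intact but suggests a different speaker.

    import numpy as np

    SR = 16000  # sample rate in Hz

    def vowel(f0, formants, dur=0.3):
        """Crude vowel-like tone: harmonics of f0, boosted near the formants."""
        t = np.arange(int(SR * dur)) / SR
        wave = np.zeros_like(t)
        for k in range(1, 40):
            h = k * f0
            if h > SR / 2:
                break
            # Weight each harmonic by its proximity to the nearest formant.
            gain = max(np.exp(-((h - f) / 120.0) ** 2) for f in formants)
            wave += gain * np.sin(2 * np.pi * h * t)
        return wave / np.abs(wave).max()

    # Varying the formants changes the phoneme (roughly /a/ vs /i/ here);
    # varying f0 leaves the phoneme intact but suggests another speaker.
    same_speaker_a  = vowel(f0=120, formants=(700, 1200))   # low voice, /a/-like
    same_speaker_i  = vowel(f0=120, formants=(300, 2300))   # low voice, /i/-like
    other_speaker_a = vowel(f0=220, formants=(700, 1200))   # higher voice, /a/-like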