68 Cards in this Set

  • Front
  • Back

Phonemic restoration

Sounds missing from a speech signal can be restored by the brain and may appear to be heard. This happens automatically and cannot be consciously suppressed.

Language consists of two components

Lexicon: the part of long-term memory that carries information about words, their components, and their meanings.


Grammar: rules for combining the items in the lexicon.

Words are packages of...

Form


- sound (phonological code or phonetic)


- spelling, the way they look (orthographic code)



The meaning of words (semantic coding system)

Structure of words

Phonetic feature (place and manner of articulation)


Phoneme (the phonetic features combined)


Bigrams (pairs of phonemes) or trigrams (triplets)


Syllables


Morpheme (speech sounds combined into the smallest unit of language that can convey meaning)


Word (one or more morphemes combined)

Free vs. bound morphemes

Free morphemes can appear as words by themselves (“cat”; such words are monomorphemic).


Bound morphemes cannot appear as words by themselves (the “-s” in “dogs”; words containing them are polymorphemic).

Bound morphemes:

Inflectional


Does not change grammatical category


Dog -> dogs



Derivational


Changes grammatical category


Bake -> baker

Base word

Carries the most significant aspects of semantic content and cannot be reduced into smaller constituents

Associations arise from words regularly occurring together


Semantic relations arise from shared contexts and higher-level relations


Lexical decision task

One of the most often used tasks in psychological research


Participants see stimuli on a screen and have to decide whether the stimuli are words they know or not.

HAL and LSA

Two models of semantic memory that place an emphasis on pure association. They hold that a word’s meaning is determined by the words it appears with: if two words appear together more often than they appear with other words, their meanings are highly related. Both models have been supported experimentally.


Advantage of these theories:


Since they look at co-occurrence they avoid some of the problems associated with the feature-based approach to word meaning.
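To make the co-occurrence idea concrete, here is a minimal Python sketch in the spirit of HAL: it counts how often words occur near each other in a tiny toy corpus and compares words by the similarity of their co-occurrence vectors. The corpus, window size, and similarity measure are illustrative assumptions; the real models use large corpora, distance-weighted windows, and (for LSA) dimensionality reduction.

```python
# Minimal sketch of the co-occurrence idea behind HAL/LSA (illustrative only).
from collections import Counter
from math import sqrt

corpus = "the dog chased the cat the cat chased the mouse".split()
WINDOW = 2  # hypothetical window size

# Count how often each pair of words occurs within WINDOW positions.
cooc = {w: Counter() for w in set(corpus)}
for i, w in enumerate(corpus):
    for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
        if i != j:
            cooc[w][corpus[j]] += 1

def cosine(a, b):
    """Similarity of two co-occurrence vectors: related words score higher."""
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "dog" and "cat" appear in similar contexts, so their vectors are similar.
print(cosine(cooc["dog"], cooc["cat"]))
print(cosine(cooc["dog"], cooc["chased"]))
```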

Lexical semantics

Core features. How is the meaning of a word represented in the mind? In terms of necessary or core characteristics.

Fixed vs. Fuzzy Meanings

Fixed meanings: there is a basic meaning for each word. Checklist theory: sufficient and necessary features.


Fuzzy meanings: word meanings are fluid and can vary according to context or conditions.


Family resemblance syndrome: things which could be thought to be connected by one essential common feature may in fact be connected by a series of overlapping similarities, where no one feature is common to all of the things.


Word meanings have prototypes: when people give an example of a bird, they are likely to say “pigeon”.

Problems for the “definition” hypothesis of lexical semantics.

- some words do not have consistent features. E.g., “game”


- meanings are not equally good across different contexts (e.g., “red hair” is a worse example of “red” than “fire engine red”)

Semantic network approach

Whatever comes to mind when someone says the word. The goal of semantic network theory is to explain how word meanings are encoded in the mental lexicon and to explain certain patterns of behavior that people exhibit when responding to words. We store all kinds of meanings for words, and we not only store them but also connect them to each other.

Semantic network theory

- Each concept in semantic memory is represented by a node, a point or location in “semantic space”


- nodes are linked together by pathways: directional associations between concepts. Every concept is related to every other concept.


- spreading activation: activating one concept activates related concepts at the same time


- activation continues to spread through the network, but the level of activation decreases with each “step”


- spreading activation is automatic: fast and outside conscious control


- limited amount of activation prevents uncontrolled spread of activation


- Word meaning = pattern of activity in the network


- can account for word associations


- can account for semantic priming


The meaning of a word is captured by the pattern of activated nodes and the links that connect them.

Spreading activation

Activity at one node causes activation at other nodes via links.


- spreading activation is automatic: fast and outside conscious control.


- spreading activation decreases the further it travels in the network (as shown by mediated priming)


Those two properties help explain how people respond during priming tasks
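A minimal sketch of spreading activation, assuming an invented three-node network and a decay factor of 0.5 per link; it reproduces the qualitative pattern mediated priming relies on (“lion” activates “stripes” only weakly, via “tiger”).

```python
# Minimal sketch of spreading activation with per-link decay (network and
# decay factor are assumptions, not from the source).
network = {
    "lion":    ["tiger", "mane", "africa"],
    "tiger":   ["lion", "stripes"],
    "stripes": ["tiger", "zebra"],
}

def spread(start, decay=0.5, steps=2):
    """Activate `start`, then pass a decayed share of activation along links."""
    activation = {start: 1.0}
    frontier = {start: 1.0}
    for _ in range(steps):
        nxt = {}
        for node, act in frontier.items():
            for neigh in network.get(node, []):
                passed = act * decay   # activation shrinks with each step
                if passed > activation.get(neigh, 0.0):
                    nxt[neigh] = passed
        for node, act in nxt.items():
            activation[node] = max(activation.get(node, 0.0), act)
        frontier = nxt
    return activation

# "stripes" is only mediately related to "lion" (via "tiger"), so it ends up
# with weaker activation -- the pattern mediated priming studies rely on.
print(spread("lion"))
```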

Priming

When presenting one stimulus at time 1 helps people respond to another stimulus at time 2.


Evidence for semantic priming:


- lexical decision experiments


- naming experiments

Connectivity

Reflects how many words are associated with a specific target word, and how many connections are shared between that set of words.


Low connectivity & high connectivity

Mediated priming studies

Is evidence for: “spreading activation is thought to diminish substantially beyond one or two links in the network”


“Lion” primes “stripes” via the mediating concept “tiger”.


Nodes directly connected to the prime word are strongly activated, but less directly connected nodes are activated more weakly, with activation diminishing with increasing distance in the network.

Symbol grounding problem

Some scientists are not comfortable with the idea that meaning depends on co-occurrence and association. They argue that these are merely mappings between symbols, and that for symbols to have meaning they need to be grounded in some other set of representations. Argument for this:


- Chinese room: you can respond to symbols even though those symbols have no meaning to you. Symbols need to be grounded in something other than more symbols to have meaning.


Proposed solutions:


- embodied semantics approach to meaning: words do not just activate patterns of abstract symbols but also perceptual experiences with real-world objects (indexical hypothesis). Evidence: responses to a word are sped up when people’s hands are shaped as if they were using the object. (Symbols are tied to representations outside the linguistic system.)


The disembodied semantics approach to meaning sees mental simulation as a separate process and as the result of a kind of spreading activation.

Lexical access

Process of identifying words and recovering word-related information from long-term memory.


1. Lexical access is fast


Evidence:


Shadowing task: repeating the words you hear (e.g., over headphones) as you hear them


Word monitoring: involves listening to utterances and responding as quickly as possible when a specific target word appears in the input. Press a button when you hear a particular word in a sentence


Gating task: involves listening to short snippets of the beginnings of words. Increments until you can identify it.


Errors are not random: they fit the syntactic and semantic context. When listeners replace or add words, the substitute word shares some aspect of meaning or grammar with the original.


2. Lexical access is incremental (step-by-step): the speech stream is segmented into words as it arrives. You don’t need to hear a word completely before you start accessing it.

Models of lexical access

First generation: Logogen & FOBS


Second generation: TRACE & COHORT


Third generation: distributed cohort model and the simple recurrent network approach



Common goal: explain how people take inputs from the auditory or the visual system and match those inputs to stored representations of word form

Important choices that differ between models

Flow of information between levels


Interactivity


Parallel processing or serial processing

Logogen

A device that accepts information from the sensory analysis mechanisms concerning the properties of linguistic stimuli, and from context-producing mechanisms.

LOGOGEN MODEL

Bottom-up processing system, working from simpler to more complex representations.


The model is a bottom-up driven system that takes spoken/visual input and uses it to activate previously stored word form representations.



2 key assumptions:


- information flow is strictly bottom-up. Auditory and visual processing units affect the activation of logogens, but not the other way around.


- there are no direct connections between and among the logogens themselves.


- semantic associates raise a logogen’s activation level.


- accounts for context/semantic priming effects.


- repeated access lowers the logogen’s threshold


- accounts for frequency effects
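A minimal sketch of a logogen as a threshold device, assuming invented evidence values and thresholds: sensory input and context both raise activation, the word fires when activation crosses the threshold, and each access lowers the threshold (the frequency effect).

```python
# Minimal sketch of a logogen as a threshold device (feature counts, resting
# thresholds, and the frequency adjustment are all assumed for illustration).
class Logogen:
    def __init__(self, word, threshold=3.0):
        self.word = word
        self.threshold = threshold
        self.activation = 0.0

    def receive(self, evidence):
        """Bottom-up sensory evidence and top-down context both raise activation."""
        self.activation += evidence
        return self.activation >= self.threshold  # fires once threshold is crossed

    def accessed(self):
        """Repeated access lowers the threshold, modeling frequency effects."""
        self.threshold = max(1.0, self.threshold - 0.5)

doctor = Logogen("doctor")
doctor.receive(1.0)            # partial sensory input: not enough yet
doctor.receive(1.0)            # semantic associate "nurse" adds context activation
fired = doctor.receive(1.0)    # more input pushes it over threshold
print(fired)                   # True: the word is recognized
doctor.accessed()              # a frequent word will fire sooner next time
```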

FOBS MODEL (not good)

Word form representations were activated by bottom-up input from the auditory system.


- auditory cues drive memory search


- the input is compared against the highest-frequency bin first


- if it matches, the process stops (self-terminating). If it doesn’t match, the input is compared to the next bin.


Bins are organized according to roots (morphemes). Your memory organizes words into bins, from the most frequent bin down to words that are unfamiliar. (dog, dogs, dogpile)


^ accounts for root vs. surface frequency effects.



The self-terminating assumption cannot be right, because otherwise you could not understand multiple words at the same time.



Example: “blackboard”. FOBS proposes that “black” is the root, because speech processing gives priority to information coming first, and in speech we hear the morpheme “black” before we hear the morpheme “board”.
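A minimal sketch of the frequency-ordered, self-terminating search that FOBS assumes (the bins and their contents are invented): the input is compared against the highest-frequency bin first, and the search stops at the first match.

```python
# Minimal sketch of frequency-ordered, self-terminating bin search
# (bins and frequencies are invented for illustration).
bins = [
    ["the", "dog", "house"],      # highest-frequency bin, searched first
    ["board", "black", "pile"],
    ["dogpile", "blackboard"],    # low-frequency bin, searched last
]

def lookup(word):
    """Compare the input against each bin in turn; stop at the first match."""
    for rank, entries in enumerate(bins):
        if word in entries:
            return rank            # self-terminating: search stops here
    return None                    # no match: treated as a non-word

print(lookup("dog"))        # found quickly in the high-frequency bin
print(lookup("dogpile"))    # found late, predicting slower responses
```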

TRACE (good)

- highly interactive instead of bottom-up


- predicts the word superiority effect (people are more accurate at reporting whether they saw the letter “k” in “work” than in “owrk” after briefly viewing either word)


- good at degraded input


-highly interactive system and competition


-bottom-up input, top-down feedback


- word recognition unfolds over time (cascaded activation)


Evidence of competition: “click on the candy” -> participants fixate both on the picture of a candy and of a candle.


- features map onto letters


If the letter /k/ is made active, then a word like “tilt” will get negative feedback because it does not contain that letter.


- three levels:


Auditory features


Phonemes


Words


- embedded words are also made active. Only the TRACE model does this; the Cohort model does not.


- evidence of top-down effects: phonemic restoration, e.g., when a cough replaces part of a word.


- lateral inhibition within layers.
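A minimal sketch of TRACE-style interactive activation, with invented words, phoneme strings, and weights: heard phonemes excite the words that contain them, and active words laterally inhibit their competitors, so “candy” and “candle” stay in competition while “tilt” is suppressed.

```python
# Minimal sketch of TRACE-style interactive activation (words, phoneme
# strings, and weights are invented for illustration).
words = {"candy": "kandi", "candle": "kandl", "tilt": "tilt"}
activation = {w: 0.0 for w in words}

def cycle(phoneme, excite=0.1, inhibit=0.05):
    # Bottom-up: the heard phoneme excites every word that contains it.
    for w, phonemes in words.items():
        if phoneme in phonemes:
            activation[w] += excite
    # Lateral inhibition within the word layer: active words suppress rivals.
    snapshot = dict(activation)
    for w in words:
        competition = sum(snapshot[o] for o in words if o != w)
        activation[w] = max(0.0, activation[w] - inhibit * competition)

for phoneme in "kand":            # the input unfolds over time
    cycle(phoneme)
    print({w: round(a, 3) for w, a in activation.items()})
# "candy" and "candle" remain in competition while "tilt" is suppressed,
# matching the "click on the candy" eye-tracking evidence.
```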

COHORT MODEL

- not interactive: higher levels do not interact with lower levels.


- developed specifically for spoken words.


- 3 phases:


Activation


Selection


Integration



Advantage of the model: it predicts when a word can be recognized. The recognition point depends on reducing the set of activated words that do not match the context. Evidence for this: cross-modal experiments.


The model is far from perfect: it cannot handle mispronunciations, and it assumes that you know where words begin and end.

Both models of lexical access are highly incremental, which means that you start processing before the speaker is done talking.
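A minimal sketch of the cohort idea, assuming a toy lexicon and a cleanly segmented input: the candidate set shrinks with each incoming segment, and the word is recognized at the point where only one candidate remains.

```python
# Minimal sketch of the cohort model: the candidate set shrinks as the spoken
# input unfolds, and the word is recognized at its uniqueness point
# (the toy lexicon is assumed for illustration).
lexicon = ["captain", "captive", "capture", "cap", "dog"]

def recognize(spoken):
    cohort = list(lexicon)
    for i, segment in enumerate(spoken, start=1):
        # Onset match is critical: drop candidates that mismatch the input so far.
        cohort = [w for w in cohort if len(w) >= i and w[i - 1] == segment]
        print(f"after '{spoken[:i]}': {cohort}")
        if len(cohort) == 1:
            return cohort[0]       # recognition point: one candidate remains
    return cohort

print(recognize("captu"))
```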

Cohort vs. TRACE

TRACE predicts increased competition with increased cohort size. Cohort does not.


Cohort size does not affect non-word recognition speed.


This suggests that memory search is parallel and that activated candidates do not compete.

Cohort vs. TRACE, part 2

Activation of word forms:


TRACE: views word form activation as resulting from a process of competition and mutual inhibition. Words steal activation from each other.


Cohort: views word form activation as reflecting a massively parallel process without competition until the selection phase. Words do not steal activation from each other.


Effect of similarity between stimulus and stored word form:


TRACE: relies on global similarity, so a mispronunciation does not have a great impact. The overall stimulus must be close to the stored representation.


Cohort: relies more on onset match; therefore onset primes should be more effective than offset primes. Word onsets are critical: mismatches at the beginnings of words should have greater effects than at the ends.


“Bone” primes “bold” more than it primes “pone”

Syntax

Sentence structure; the cues that language provides to show how words in sentences relate to one another (position, function words, syntactic morphemes); it is greatly helped by prosody.

Syntactic parsing

Using cues to interpret a sentence. We build a structure tree (a hierarchical structure) in our heads to understand the sentence we’re reading.


This goes wrong when we read ambiguous sentences: we need to undo the original structure and build a new one.

Common research methods for sentence processing

Eye tracking


Self-paced reading

Phrase structure: a sentence consists of a noun phrase and a verb phrase

S -> NP + VP
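A minimal sketch of how phrase-structure rules like S -> NP + VP generate sentences, using a toy grammar invented for illustration: each symbol is rewritten by one of its rules until only words remain.

```python
# Minimal sketch of phrase-structure rules (toy grammar, invented for
# illustration).
import random

grammar = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"], ["V"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["chased"], ["slept"]],
}

def expand(symbol):
    """Rewrite a symbol using the grammar until only words remain."""
    if symbol not in grammar:
        return [symbol]                       # terminal: an actual word
    rule = random.choice(grammar[symbol])     # pick one expansion
    return [word for part in rule for word in expand(part)]

print(" ".join(expand("S")))   # e.g. "the dog chased the cat"
```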


Ambiguity leads to

Longer reading times, lower comprehension accuracy, and different patterns of brain activity in comprehenders, compared with unambiguous sentences that say the same thing.

Globally vs. temporarily ambiguous sentences

Globally ambiguous sentences: if the entire sentence has two structures. Different ways of organizing the sentence are all consistent with the grammar of the language.


We painted the wall with cracks


Temporarily/locally ambiguous sentences: there is a point in the sentence where two structures are possible, but the full sentence has only one structure that is grammatically licensed.


The old man the boat


People slow down at the point where they must recognize the role the verb plays in the sentence.


Listeners and readers follow the immediacy principle and use an incremental processing strategy

Immediacy principle: people do as much interpretive work as they can based on partial information, making possibly incorrect assumptions, rather than waiting until they have all the information they need to be certain of making the correct decision. This explains why we often process ambiguous sentences the wrong way.

Syntactic parser

A mechanism that carries out processes that identify relationships between words in sentences


FRAZIER: two-stage models (e.g., the Garden-Path Model)

Garden Path Sentence: grammatically correct sentence that starts in such a way that a reader’s most likely interpretation will be incorrect.


Stage 1 - the incoming sequence of words is analyzed to determine what categories the words belong to, and an initial syntactic structure is built. The lexical processor identifies the categories that are represented.


Once the syntactic structure has been built, the actual words in the sentence can be assigned positions in the tree and the entire configuration can be sent to a thematic interpreter. Its job is to apply a set of rules to each element in the tree, based on its position in the tree and how it is connected to other words.


Stage 2 - assess the outcome against context, semantic plausibility, real-world knowledge, and interpretation. The standard meaning is computed by applying semantic rules to the structured input.


Revise if necessary.


The model assumes that building a structure starts as soon as the lexical processor begins to deliver information about word categories. The model can build only one structure at a time and uses only word-category information.


Advantage: it makes specific claims, so it is testable and potentially falsifiable.


Predicts that sentences whose correct interpretation conflicts with late closure are harder to process <- a prediction confirmed in numerous experiments.
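A minimal sketch of the serial commit-then-reanalyze behavior the model describes, using invented category preferences for “the old man the boat”: the parser first commits to each word’s preferred category, and only when that structure fails (no verb) does it revise.

```python
# Minimal sketch of serial parsing with reanalysis, in the spirit of the
# garden-path model (category frequencies are invented for illustration).
categories = {            # each word's possible categories, preferred first
    "the": ["det"],
    "old": ["adj", "noun"],
    "man": ["noun", "verb"],
    "boat": ["noun"],
}

def parse(sentence):
    words = sentence.split()
    # Stage 1: commit to each word's preferred category (serial, one structure).
    tags = [categories[w][0] for w in words]
    if "verb" in tags:
        return list(zip(words, tags))
    # Reanalysis: the initial structure fails (no verb), so revise a word
    # that has a less-preferred verb reading -- the garden-path effect.
    # (A full parser would also rebuild the tree, e.g. making "old" the head
    # noun; that rebuilding is what costs reading time.)
    for i, w in enumerate(words):
        if "verb" in categories[w]:
            tags[i] = "verb"
            return list(zip(words, tags))

print(parse("the old man the boat"))
```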

Assumptions of Garden-Path Theory

Incrementality: word-by-word parsing.


Serial processing: one structure at a time.


Simplicity: no unnecessary structure; build the least complex representation.

Heuristics of GP

Late closure: if possible, continue to work on the same phrase or clause as long as possible.


Minimal attachment: when more than one structure is possible, build the structure with the fewest nodes.


Main assertion: when more than one structure is possible, build the structure where the new elements relate to the main assertion of the sentence. When the main assertion heuristic does not apply, late closure dominates.

Constraint-based models

-sentences are represented as patterns of activation in a neural network


-parsers build or activate all possible structures simultaneously


-structures compete for activation


-interpretations are ranked


-most likely structure gets highest activation


-the parser uses many sources of information to compute likelihood, including syntactic, semantic, discourse, and frequency-based information


-syntactic ambiguity resolved by competition


-uses more than just word-category information; also uses information about specific words, e.g., subcategorization preference information. It lets information from the past influence the future


-one stage model: lexical, syntactic, and semantic processes are all viewed as taking place simultaneously


Evidence: experimental outcomes support the idea that frequent structures are easier to process than infrequent ones.


Criticism: there is no evidence that simple syntactic structures are never hard to process.
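A minimal sketch of constraint-based competition, applied to the Main Verb vs. Reduced Relative ambiguity discussed below; the constraint values and weights are invented (only the roughly 12:1 MV/RR frequency ratio comes from these notes). All structures are active at once and evidence from several sources is combined into graded activations.

```python
# Minimal sketch of constraint-based competition between two structures for
# "The defendant examined ..." (weights and constraint values are assumed).
constraints = {
    # constraint: (support for Main Verb, support for Reduced Relative)
    "structure frequency": (12.0, 1.0),   # MV ~12x more common in English
    "animacy of subject":  (1.0, 0.5),    # "defendant" is a plausible agent
    "thematic fit":        (0.5, 1.0),    # but also a plausible patient
}
weights = {"structure frequency": 0.2, "animacy of subject": 1.0, "thematic fit": 1.0}

def activations():
    """All structures are active at once; evidence is combined, not staged."""
    mv = sum(weights[c] * v[0] for c, v in constraints.items())
    rr = sum(weights[c] * v[1] for c, v in constraints.items())
    total = mv + rr
    return {"Main Verb": mv / total, "Reduced Relative": rr / total}

# The most supported structure gets the highest activation; close competition
# (activations near 0.5) predicts longer reading times at the ambiguity.
print(activations())
```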

Information sources for constraints

Story Context Effects


Subcategory Frequency Effects


Cross-Linguistic Frequency Data


Semantic Effects / Word Meaning


Prosody / Prosodic Cues


Visual Context Effects

Story Context Effects

When a sentence doesn’t meet semantic expectations, it can be quite hard to process. More explanatory/supporting sentences in the context can help.


Referential context account: if you have a choice between structures, build the structure that is most consistent with your current semantic assumption.


Subcategory Frequency Effects

Verbs sometimes need partners (arguments); sometimes verbs are more flexible and don’t need them.


John read


John read a story (direct object)


John read a story to his daughter (complement sentence)


The garden-path model is more focused on the direct-object reading; the constraint-based model’s prediction depends on what is most expected.

Semantic Effects / Word Meaning

The defendant examined by the lawyer turned out to be reliable


This parsing difficulty is triggered by lexical ambiguity: “examined” can be both a past tense and a past participle, so language users cannot use the word form as a cue to activate either the active or the passive interpretation.


Let’s assume that readers consider only two possible structures:


- Main Verb


- Reduced Relative


Frequency: MV structures are about 12 times more common than RR structures in English -> higher initial activation of MV interpretation


Animacy: “defendant” is animate and therefore suitable as an agent of the verb -> increased activation for the MV reading.


Thematic information: is “defendant” a good agent? -> increased MV activation.


Is “defendant” a good patient? -> increased RR activation.


Combinatorial constraint: requires combining the semantic properties of multiple words (noun and verb).


Discourse constraints: prior information may tell us which defendant the speaker means; a context with more than one defendant favors the modifying reading -> increased RR.


Grain Size Problem: in English overall, verbs are followed by a direct object more often than by a sentence complement, which favors the direct-object interpretation; but the preference depends on the grain size at which frequencies are counted, since individual verbs can differ from the overall pattern.


Prosody / Prosodic cues

When Roger leaves the house is/it’s dark


-non-linguistic prosody: provides cues to the speaker’s general state of mind. The tone and tempo of the speaker differ depending on how the speaker feels at that moment.


-linguistic prosody: provides cues to how the words are organized into phrases and clauses, for example distinguishing statements from questions.

Visual Context Effects

Visual context can supply referential support for the modifier interpretation; it can help determine how to interpret the sentence.


Put the apple on the towel in the box


Visual context immediately influences syntactic parsing; evidence against informational encapsulation.


This is also a pragmatic constraint: the listener assumes that the speaker gives just the right amount of information needed to perform the correct action.

Argument structure hypothesis

Verbs have argument structures. Some have 0 arguments; some have as many as 4.


Possible solutions:


Store only argument-related structural information.


Different structures are activated to the degree that they have occurred in the past.


If so:


Between 1 and 5 structures are stored for each verb, and other syntactic structures are generated “on the fly”.


If so, people should respond differently to arguments than adjuncts

Model: CONSTRUAL

Refinement of GP. Retains the idea that parsing occurs in discrete stages, but adopts the ideas that context can influence which structure the parser prefers and that structures can sometimes be built simultaneously.


- differs from constraint-based models in that there is a limited set of circumstances under which the parser will respond to contextual information or build syntactic structures in parallel.


- uses late closure & main assertion for definite decisions


Primary relations (preferred): correspond roughly to argument relations as defined above


Non-primary: to anything else


- For non-primary relations, multiple structures are built in parallel.


- incoming words are affiliated with the preceding context.


- all available information is used to evaluate the quality of the different structures.


- applies to relative clause attachment ambiguities


The daughter of the colonel who had a black dress


The daughter of the colonel who had a black moustache


- late closure favors attachment to the second noun, so “moustache” should be easier to process than “dress”; but as predicted by construal, both are equally easy

Model - RACE BASED PARSING

The parser builds multiple structures in parallel


Structures do not inhibit one another: they race against each other but do not take activation away from alternatives. They increase or decrease their activation based on the available cues in the input.


The first structure to rise above an activation threshold is selected and interpreted (taken to represent the input, and that structure is used as the basis for semantic interpretation).


If the “winner” produces a bad interpretation, an alternative structure is evaluated.
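A minimal sketch of the race: candidate structures accumulate evidence independently (no inhibition), and the first to cross an assumed threshold is selected; the evidence rates are invented for illustration.

```python
# Minimal sketch of race-based parsing: candidate structures accumulate
# evidence independently (no mutual inhibition) and the first past a
# threshold wins (evidence rates and the threshold are invented).
THRESHOLD = 1.0

def race(evidence_per_cycle):
    """evidence_per_cycle maps each structure to its per-cycle evidence gain."""
    activation = {s: 0.0 for s in evidence_per_cycle}
    for cycle in range(1, 100):
        for s, gain in evidence_per_cycle.items():
            activation[s] += gain          # no activation is stolen from rivals
        winners = [s for s, a in activation.items() if a >= THRESHOLD]
        if winners:
            return winners[0], cycle       # first structure over threshold wins
    return None, None

structure, when = race({"main-verb reading": 0.3, "reduced-relative reading": 0.2})
print(structure, "selected on cycle", when)
# If the winner yields a bad interpretation, the runner-up is evaluated next.
```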

MODEL - GOOD ENOUGH PARSING

Parsing and syntax may not always be necessary. Processing of sentences is relatively shallow: a reader will make an assumption and not interpret further until deeper processing of that information is necessary.


- especially when syntax is redundant with lexical information (e.g., mouse, cheese, eat)


- predicts parsing errors when lexical information contradicts structural information (“The mouse was eaten by the cheese”)


- comprehenders set a threshold for understanding.


- they build structures that do not follow the common rules but give enough information to understand the meaning.
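A minimal sketch of a good-enough heuristic, with an invented plausibility table: noun-verb-noun sequences are interpreted as agent-action-patient whenever plausibility allows, which reproduces the error on “The mouse was eaten by the cheese”.

```python
# Minimal sketch of a good-enough heuristic: pick the plausible agent,
# ignoring the syntax (the plausibility table is invented for illustration).
plausible_agent = {("mouse", "eat"): True, ("cheese", "eat"): False}

def shallow_interpret(noun1, verb, noun2, passive=False):
    # Heuristic route: choose the plausible agent, whatever the syntax says.
    if plausible_agent.get((noun1, verb)) and not plausible_agent.get((noun2, verb)):
        return {"agent": noun1, "patient": noun2}
    if plausible_agent.get((noun2, verb)):
        return {"agent": noun2, "patient": noun1}
    # Fall back on full syntactic parsing only when plausibility cannot decide.
    return {"agent": noun2 if passive else noun1,
            "patient": noun1 if passive else noun2}

# "The mouse was eaten by the cheese": the syntax says the cheese did the
# eating, but the shallow heuristic outputs the plausible (wrong) reading.
print(shallow_interpret("mouse", "eat", "cheese", passive=True))
```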
