Tip-of-the-Tongue
gives information concerning the retrieval process.
Cohort Model
shows that retrieval is partially based on word initial syllables.
Lexeme-based models
show that retrieval involves more than morphological parsing.
Semantic properties
the components of a word's meaning; the relation the word bears to some part of the universe.
[animate] [human] [female] [abstract]
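A minimal sketch of how such semantic properties can be encoded as binary features on lexical entries; the toy words and feature values below are illustrative assumptions, not a standard feature inventory.

    # Toy lexicon: each word is a bundle of binary semantic properties.
    LEXICON = {
        "girl":    {"animate": True,  "human": True,  "female": True,  "abstract": False},
        "mare":    {"animate": True,  "human": False, "female": True,  "abstract": False},
        "freedom": {"animate": False, "human": False, "female": False, "abstract": True},
    }

    def shared_properties(w1, w2):
        """Return the features on which two words agree."""
        return {f for f in LEXICON[w1] if LEXICON[w1][f] == LEXICON[w2][f]}

    print(shared_properties("girl", "mare"))   # agree on [animate], [female], [abstract]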
Denotation
the true essence of the meaning of a word; its core, literal reference, as distinguished from connotation.
Connotation
the creative aspect of word usage by connecting words with more distant semantic properties.
Homophones
sound alike, but are spelled differently and have different meanings
(to, too, two)
Homonyms
spelled and pronounced the same, but have different unrelated meanings
bank (financial institution), bank (edge of the river)
Antonyms
Gradable
(light/dark)
Complementary
(dead/alive)
Relational pairs
(mother/child)
Synonyms
X is a synonym of Y if:
i) X and Y are syntactically identical (part of speech)
I fell asleep on the couch/sofa.
I ran/sprinted past the school.
ii) substitution of Y for X in a declarative sentence doesn’t change its truth conditions
Lois was elated/ecstatic that Superman bought her flowers.
Superman was very/really proud of himself.
Metonyms
thing/concept called by the name of something associated in meaning
(Washington = US govt)
Whole/Part Word Relationships
one is a part of the other
(arm/elbow, house/roof)
Polysemy
words sound alike and are spelled alike, but have different related meanings
book (noun), book (verb)
Heteronym
words are written alike, but have different meanings and different pronunciations
dove (bird), dove (verb)
Semantic Overlap
Lexical items that are related semantically such that one item may be used to clarify or contrast another item.
clarification:
Here is a grain, or granule, of the substance.
contrast:
Here is a grain or, more exactly, granule.
Prototypes
Certain lexical entries better represent their category than others
What makes a lexical entry a prototype?
Retrieval of prototypes is generally faster than non-prototypes.
Rips, Shoben, and Smith (1973):
(1) A cow is an animal.
(2) A cow is a mammal.
(1) has a quicker RT than (2) even though, in terms of the hierarchy, (1) spans a greater semantic distance.
Semantic Hierarchy
Within the basic categories we build hierarchies. (more general words and more specific words)
Semantic Networks
How words are related to each other
Varying theories based on relationships between words and truth conditions
Research on Prototypes
Kail and Nippold (1984) showed that in a free recall task, typical members are recalled more quickly than atypical.
Rosch and Mervis (1975) showed that prototypes share more features with other items of the same category while minimizing feature overlap with related categories.
Ex: apple represents fruit better than tomato since the latter overlaps with vegetable.
Apple also minimizes feature overlap due to texture, sweetness, etc.
13 is a stronger example of an odd number than 23 or 501.
Mother is a better example of female than waitress.
Frequency Effect
most frequent words are retrieved first
Semantic Network Models
Collins and Quillian (1969) examined RTs in measuring recognition of truth conditions.
(1) A robin is a robin. (fastest RT, no connections to make; one-to-one correspondence)
(2) A robin is a bird. (next fastest, one connection to make)
(3) A robin is an animal. (slower still, two connections to make)
(4) A robin is a fish. (slowest, as there are no links; the statement is rejected)
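A minimal sketch, in Python, of the hierarchical-network idea: verification time grows with the number of ISA links traversed. The toy hierarchy and the link-counting rule are illustrative assumptions, not Collins and Quillian's actual implementation.

    # ISA links: each concept points to its immediate superordinate category.
    ISA = {"robin": "bird", "penguin": "bird", "bird": "animal", "fish": "animal"}

    def link_distance(concept, category):
        """Count ISA links from concept up to category; None means no path (reject)."""
        steps, node = 0, concept
        while node != category:
            if node not in ISA:
                return None          # no path: the statement is false
            node = ISA[node]
            steps += 1
        return steps

    for claim in [("robin", "robin"), ("robin", "bird"),
                  ("robin", "animal"), ("robin", "fish")]:
        print(claim, "->", link_distance(*claim))
    # 0 links (fastest), 1 link, 2 links, None (rejected)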
Prototypicality Effect
Rips et al. (1973) showed that (1) is verified more quickly than (2) even though, according to the semantic network, both are the same distance from the word bird.
(1) A robin is a bird.
(2) A penguin is a bird.
Attribution
Conrad (1972) and Wilkins (1971) showed that hierarchical effects do not work across the board.
A robin has a red breast. (attributed solely to robins)
A robin has wings. (attributed to all birds)
A robin has lungs. (attributed to all animals)
Lungs are attributed to the greatest number of entities (slowest RT).
Wings are attributed to a smaller number of entities (intermediate RT).
Red breasts are attributed only to robins (fastest RT).
Relatedness Effect
(1) A pine is a church.
(2) A pine is a flower.
Schaeffer & Wallace (1969, 1970) and Wilkins (1971) did studies to show that we do not reject all untrue statements equally. A faster RT was recorded for (1) than (2) even though both are equally untrue.
The nouns in (2) are more closely related than those in (1), which shows that the more related two items are, the more difficult it is to distinguish between them.
Studies have shown that morphological processing must be an interactive process requiring the access of lexical category, as well as semantic and grammatical content.
Miller and Selfridge (1950) observed subjects' ability to memorize lists of words. When the words were unrelated, the task could not be accomplished very well; when the list was structured into phrases, the task was accomplished much more easily.
Revision of the Semantic Network:
Spreading Activation Model
Collins and Loftus (1975) introduced the concept of ‘spreading activation.’
Words are organized as a web of interconnected nodes.
Each connection can represent:
Categorical relations
Degree of associations between words
Typicality effect
Different ‘nodes’ in the network are labeled with ‘weights’ and spread their activation to other related nodes until an intersection is found.
Verification RTs depend on the proximity of two concepts (nodes).
Many simple units are connected in complex ways; the model is too complex to test directly, although its basic premise has supported more recent work in lexical retrieval.
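A minimal spreading-activation sketch in Python, after Collins and Loftus (1975): activation flows outward from a source node, attenuated by connection weights, and verification is modeled as finding the strongest intersection of two spreads. The nodes, weights, decay rule, and threshold are all invented for illustration.

    NETWORK = {
        "red":    {"fire": 0.8, "rose": 0.7, "apple": 0.6},
        "fire":   {"red": 0.8, "truck": 0.5},
        "rose":   {"red": 0.7, "flower": 0.9},
        "apple":  {"red": 0.6, "fruit": 0.9},
        "flower": {"rose": 0.9, "plant": 0.8},
        "fruit":  {"apple": 0.9, "plant": 0.6},
        "plant":  {"flower": 0.8, "fruit": 0.6},
        "truck":  {"fire": 0.5},
    }

    def spread(source, threshold=0.2):
        """Spread activation from source; each hop multiplies by the edge weight."""
        activation, frontier = {source: 1.0}, [source]
        while frontier:
            nxt = []
            for node in frontier:
                for neighbor, weight in NETWORK[node].items():
                    a = activation[node] * weight
                    if a > threshold and a > activation.get(neighbor, 0.0):
                        activation[neighbor] = a
                        nxt.append(neighbor)
            frontier = nxt
        return activation

    # Closer concepts intersect at higher combined activation (faster verification).
    a, b = spread("rose"), spread("fruit")
    common = {n: a[n] * b[n] for n in a.keys() & b.keys()}
    print(max(common, key=common.get))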
Features and the Decompositional Theory
Katz and Fodor (1963) claimed that the meaning of a sentence could be derived by combining the semantic features of the words in the sentence.
This helped explain lexical ambiguity. They showed that two sentences can be syntactically identical yet differ in meaning, so context has to play an important role in extracting meaning.
(This ties in to a certain degree with theta grids and verb argument structures.)
Katz and Fodor claimed that:
Individual words are broken down into their semantic features.
Features interact with each other to construct meaning.
Syntactic cues alone cannot disambiguate syntactically identical sentences.
Selection restrictions govern how the features of words interact with each other, e.g., kick requires its subject argument to be animate, so a sentence violating that restriction is anomalous.
Priming
Form priming: prime and target are not related semantically but in phonological form (candle/handle)
Semantic priming: a prime sets the listener up for a word related semantically and then that word becomes easier to find in the lexicon.
Ex: “Yesterday I went to the flower shop and bought…”
Three pictures are flashed on a screen (a car, a flower, and a pen); the flower is chosen.
When you hear flower, shop, and buy, you are primed for a small selection of words that are related semantically:
a noun, flower or flower/plant related.
These primes narrow the selection.
Priming based on binary features
[+/- sem] semantic transparency/opacity
blueberry primes for blue (vs. strawberry does not prime for straw)
[+/- morph] morphological composition
happiness primes for happy
[+/- phon] phonological form
network primes for net
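A small sketch of the three binary priming features as hand-coded annotations on prime–target pairs, with a toy additive prediction of priming strength. The feature values and weights are illustrative judgments, not experimental parameters.

    # (prime, target, [+/-sem], [+/-morph], [+/-phon]), hand-coded illustrations
    PAIRS = [
        ("blueberry",  "blue",   True,  True,  True),
        ("strawberry", "straw",  False, True,  True),
        ("happiness",  "happy",  True,  True,  True),
        ("network",    "net",    False, False, True),
        ("candle",     "handle", False, False, True),   # form priming only
    ]

    def priming_strength(sem, morph, phon, weights=(0.5, 0.3, 0.2)):
        """Toy additive model: each shared dimension adds facilitation."""
        return sum(w for w, f in zip(weights, (sem, morph, phon)) if f)

    for prime, target, sem, morph, phon in PAIRS:
        print(f"{prime} -> {target}: {priming_strength(sem, morph, phon):.1f}")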
Short Term Memory
The ability to hold relatively limited amounts of information ready to be used.
Keeps information active and readily accessible.
A finite amount of temporary, limited storage space
Where words are stored while sentence processing is taking place. Miller (1956) showed that humans can remember 5-9 chunks of information at a time.
Is used up more quickly if the processing system is doing too many tasks at once or if one overly demanding task is being performed
Working Memory
There is a limited amount of capacity for processing information available while performing a task.
Characteristics of Working Memory:
The left and the right hemispheres of the brain store information separately
Referred to as the search engine of the brain
Operates over a matter of seconds
Provides temporary storage and manipulation of information
Focuses attention
Four crucial components (in Baddeley's model, the central executive plus three storage systems):
Phonological loop – sounds
Visuo-spatial sketchpad – visually and spatially acquired information
Episodic buffer – holding tank for all information on a fairly short-term basis
Phonological Loop
Two main components:
1. A short-term phonological store that incoming phonological information enters
2. An articulatory rehearsal component that can reactivate the memory traces held in the phonological store
Visual Spatial Sketchpad
Stores information accumulated visually.
Shapes, colors, location, speed, etc.
Distinction between visual and spatial: brain damage to one area does not affect the other.
Episodic Buffer (Baddeley, 2000)
In 2000, Baddeley extended the model by adding the episodic buffer, which stores representations or ‘episodes’ that integrate information from the two slave systems (the phonological loop and the visuo-spatial sketchpad).
Perhaps it is also the storage component for information not covered by the slave systems (e.g., semantic, musical, etc.).
Combines all information into a single episodic representation
Integrates information between visual, spatial, and verbal components chronologically, i.e., a story line
Thought to be linked to long-term memory and semantic information.
Amnesiacs who have lost long term memory can still store and retrieve information in short term memory
Long Term Memory
Stores unlimited amounts of information; recall is based on the importance of the information.
memory is good at integrating and synthesizing information, but less able to keep smaller bits of information distinct from each other.
real-world knowledge is used in processing information, inferences are drawn from that knowledge.
Language competence
All knowledge that is not active
Parts of Long-Term Memory
Episodic memory: the ability to recall personal experiences and events as images; details about past experiences
Semantic memory: the ability to recall general knowledge and word meanings, built on connections between sources of recurring information which has been learned
Procedural memory:  the ability to remember strategies in task performance as sequential events or as sets of stimulus-responses.
Search Models (Forster, 1976)
Forster’s model is used in lexical retrieval of visual and aural stimuli.
Serial comparison
Word retrieval is compared to finding words which are organized in types of lists called ‘files.’
Orthographic (reading)
Phonological (listening)
Syntactic/semantic (language production)
Perceptual input has no direct access to lexical entries; rather, stimuli are compared with what their representations look/sound like in these lists.
A global perceptual representation of the stimulus/target is constructed.
Frequency plays a significant role in this retrieval process (serial position).
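A minimal serial-search sketch in the spirit of Forster's model: an access file ordered by frequency is scanned entry by entry, so frequent words are found after fewer comparisons. The words and frequency counts are invented.

    # One access file (e.g., the orthographic file), most frequent entries first.
    ORTHOGRAPHIC_FILE = sorted(
        [("the", 70000), ("house", 5000), ("canoe", 40), ("lexeme", 2)],
        key=lambda entry: -entry[1])

    def serial_search(stimulus):
        """Walk the file in order; return the word and how many comparisons it took."""
        for position, (word, _freq) in enumerate(ORTHOGRAPHIC_FILE, start=1):
            if word == stimulus:
                return word, position
        return None, len(ORTHOGRAPHIC_FILE)   # exhaustive search: non-word

    print(serial_search("the"))     # found after 1 comparison (fast RT)
    print(serial_search("canoe"))   # found after 3 comparisons (slower RT)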
TRACE Model
A simulation model of speech perception and spoken word recognition
Lexical Segmentation and Word Recognition
Interactive Activation
Many simple processing units interconnected
Excitatory and inhibitory interactions of a large number of simple processing units
Each unit is connected to others which are continuously activated, updated, re-activated
deals with short segments of real speech
suggests a mechanism for coping with the fact that the cues to the identity of phonemes vary as a function of context.
Shows
How lexical knowledge aids in perception
Accounts for position effects in perception (the prominence of word initial segments)
How lexical knowledge can affect the disambiguation of co-articulation across word boundaries
Predicts
That top-down context permeates the recognition process (not universally supported)
TRACE: Units
Three levels of units:
phonological feature units (the input level)
phoneme units
word units
The input units are ‘activated’
This activation spreads until only one output unit is left activated.
TRACE: Connectivity Pattern
All connections between units are bidirectional (information is constantly flowing in both directions)
Facilitatory connections between units on adjacent levels
Units: feature to phoneme; phoneme to word
Inhibitory connections within levels
Level: feature, phoneme, word levels
TRACE: Excitation
Bottom-Up
Input received
Presented to feature level
Features excite appropriate phoneme units (e.g., the [+voice] feature activates all [+voice] phonemes).
Excited phoneme excites every word unit to which it is connected.
Top Down
Word units excite phonemes.
TRACE: Word Activation
The word (or phoneme) which is activated first is recognized as a viable item for retrieval.
Words are identified and segmented simultaneously.
At recognition, word boundaries are identified
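A drastically simplified interactive-activation loop in the spirit of TRACE: bottom-up excitation from heard phonemes to words, top-down feedback from words to their phonemes, and within-level inhibition among competing words. The two-word lexicon, phoneme spellings, and all rate constants are invented; real TRACE also aligns copies of each unit with time slices, which is omitted here.

    WORDS = {"cat": ["k", "a", "t"], "cap": ["k", "a", "p"]}
    PHONEMES = sorted({p for spelling in WORDS.values() for p in spelling})

    phon_act = {p: 0.0 for p in PHONEMES}
    word_act = {w: 0.0 for w in WORDS}

    def step(heard, excite=0.2, feedback=0.1, inhibit=0.15, decay=0.05):
        phon_act[heard] += excite                         # bottom-up input
        for w, spelling in WORDS.items():                 # phoneme -> word (facilitatory)
            word_act[w] += excite * sum(phon_act[p] for p in spelling)
        for w, spelling in WORDS.items():                 # word -> phoneme (top-down)
            for p in spelling:
                phon_act[p] += feedback * word_act[w]
        snapshot = dict(word_act)                         # within-level inhibition
        for w in word_act:
            rivals = sum(a for x, a in snapshot.items() if x != w)
            word_act[w] = max(0.0, word_act[w] - inhibit * rivals)
        for p in phon_act:                                # passive decay
            phon_act[p] = max(0.0, phon_act[p] - decay)

    for ph in ["k", "a", "t"]:                # hear /k/, /a/, /t/
        step(ph)
    print(max(word_act, key=word_act.get))    # 'cat' wins over its rival 'cap'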
Simple Recurrent Network (SRN) Elman
In natural language, the interpretation of successive individual items, such as words strung together in a sentence, is often determined by context.
The word 'ball' is interpreted differently in "The countess threw the ball" and in "The pitcher threw the ball" (Servan-Schreiber et al., 1991).
The SRN provides a model for goal-directed linguistic behavior and planning.
It shows how behaviors are coordinated over long sequences of input-output pairings.
back-propagation network
assumes a feed-forward construction with input, hidden, and output units
adds a ‘context’ layer, containing the same number of units as the hidden layer, that receives a single special type of projection (the copy-back projection) from that layer
sending units from the last input pattern processed can be copied onto the receiving units
these become ‘context’ in processing subsequent input units
Process of SRN
Input patterns are transmitted through the hidden unit layer to the output.
Input units → hidden units → output units
The activation of the hidden unit layer is copied back to the context layer and paired with the current input.
Hidden units can influence the processing of subsequent inputs.
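A minimal Elman-style forward pass in Python/numpy showing the copy-back step. The layer sizes, random weights, and toy one-hot input sequence are arbitrary, and training by back-propagation is omitted so the context mechanism stays visible.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 4, 8, 4
    W_in  = rng.normal(0, 0.5, (n_hidden, n_in))      # input -> hidden
    W_ctx = rng.normal(0, 0.5, (n_hidden, n_hidden))  # context -> hidden
    W_out = rng.normal(0, 0.5, (n_out, n_hidden))     # hidden -> output

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    context = np.zeros(n_hidden)            # context layer starts empty
    for t, x in enumerate(np.eye(n_in)):    # a toy one-hot input sequence
        hidden = sigmoid(W_in @ x + W_ctx @ context)
        output = sigmoid(W_out @ hidden)
        context = hidden.copy()             # copy-back: this hidden state is the
                                            # 'context' paired with the next input
        print(f"t={t} output={np.round(output, 2)}")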
Recurrent Neural Network (RNN)
J. Elman
An RNN is a neural network where connections between units form a directed cycle, allowing it to exhibit dynamic temporal behavior.
Unlike feed-forward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs.
The best-known results are seen in tasks such as unsegmented, connected handwriting recognition.
Neighborhood Activation Model
Luce (1990)
Similar to the Cohort Model in assuming bottom-up processing of items stored in memory.
Stimulus activates a neighborhood of acoustic-phonetic patterns in short-term memory.
These patterns activate a system of word decision units that monitor
1. activation levels of acoustic-phonetic patterns to which the units are tuned
2. lexical frequency
3. activity level of the entire system during the process
Auditory word recognition
A stimulus word is identified in the context of phonetically similar words activated in memory.
Stimulus input activates a set of acoustic‐phonetic patterns in memory that must be discriminated and chosen among.
These acoustic‐phonetic patterns receive activation levels proportional to their similarities to the stimulus input.
The activation levels may then be adjusted by information, such as word frequency.
Word decision units monitor the activation levels of their acoustic‐phonetic patterns, and the activity of all other word decision units in the system.
NAM Claims
Increasing the number of acoustic‐phonetic patterns activated in memory by the stimulus input will slow processing and reduce identification accuracy.
The effects of word frequency are directly tied to the number of similar words activated in memory
Word frequency is not crucial to the activation levels of the acoustic‐phonetic patterns.
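A toy computation of NAM's word-decision value using a Luce-style choice rule: the target's frequency-weighted activation divided by the summed activation of its whole neighborhood. The neighborhood, similarity scores, and frequencies below are invented.

    NEIGHBORHOOD = {
        # word: (similarity to the stimulus, lexical frequency), illustrative values
        "cat": (1.00, 120.0),
        "cap": (0.80, 40.0),
        "cut": (0.70, 90.0),
        "bat": (0.75, 60.0),
    }

    def identification_probability(target):
        scores = {w: sim * freq for w, (sim, freq) in NEIGHBORHOOD.items()}
        return scores[target] / sum(scores.values())

    print(round(identification_probability("cat"), 3))
    # A denser or higher-frequency neighborhood raises the denominator,
    # predicting slower, less accurate identification, as NAM claims.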
Principles and Parameters
(Chomsky, 1995)
There exist certain principles universal to all languages
There exist specific parameters which are language specific and specify what is and is not grammatical
Projection Principle
Lexical items must be syntactically represented at all levels.
This principle mediates between lexical items and how they are mapped to a syntactic structure.
Theta Theory
Theta theory addresses the specific semantic relationship between a verb and the items (or lack thereof) it selects.
Nouns related to a verb are referred to as arguments.
Verbs assign thematic roles (theta roles) to each of their arguments.
Theta roles are assigned in D-structure.
Theta Roles
Major theta roles include:
Agent: the initiator of the action of the verb
Experiencer: the experiencer of some type of state expressed by the verb
Theme: the entity which receives the action of the verb
Instrument: the entity by which the action of the verb is carried out
Goal: the direction towards which the action of the verb moves
Source: the direction from which the action originates
Location: the location where the action of the verb takes place
Benefactive: the entity benefitting from the action of the verb
Lexical Categories
Lexical category: a word class (noun, verb, adjective, adverb, or preposition)
The head of each phrase is the lexical entry around which the phrase is built.
Heads select what they will combine with: complement selection (c-selection).
Phrases: meaningful groupings of words built up from the head (NP, PP, AP, VP).
We can generalize that the lexical categories V (verb), N (noun), A (modifier, e.g., adjective, adverb), and P (preposition):
a. Subcategorize for, or select, their complements.
b. Precede their complements in the phrase.
c. May co-occur with other constituents.
Theta Criterion
Each argument is assigned to one and only one theta role.
Each theta role is assigned to one and only one argument.
One-to-one relationship between NPs and their assigned theta role.
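A small sketch of checking the Theta Criterion mechanically. The theta grid for 'give' and the argument mappings are hand-built illustrations, not a grammar fragment from the source.

    THETA_GRID = {"give": ["Agent", "Theme", "Goal"]}

    def satisfies_theta_criterion(verb, assignment):
        """assignment maps each argument NP to the single theta role it bears."""
        required = THETA_GRID[verb]
        roles = list(assignment.values())
        missing    = [r for r in required if r not in roles]
        duplicated = [r for r in set(roles) if roles.count(r) > 1]
        extra      = [r for r in roles if r not in required]
        return not (missing or duplicated or extra)

    # 'Mary gave the book to John.'
    print(satisfies_theta_criterion("give",
          {"Mary": "Agent", "the book": "Theme", "John": "Goal"}))   # True
    # *'Mary gave the book.' (the Goal role is unassigned)
    print(satisfies_theta_criterion("give",
          {"Mary": "Agent", "the book": "Theme"}))                   # False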
Claims of X Bar Theory
Maximal Projection (XP)
node under which all elements of the constituent are mapped
The head (X0)
determines the type of phrase
subcategorizes for all and only its sisters.
subcategorized complements/adjuncts are always phrases.
heads and their maximal projections share features, allowing heads to subcategorize for the heads of their sisters
Specifiers (Spec) are optional
Spec is not subcategorized for
specifiers may be words or phrases
modify all elements over which they have C-command (constituent command)
Complements:
elements required for grammaticality (sisters)
Functional Categories
Based on grammatical relationships.
Functional phrases are needed to express grammatical relationships between words as well as serve as landing sites for transformations/movement.
In the previous trees we have not seen the need for functional heads.
Determiners (Det)
Quantifiers (Q)
Complementizers (C, heading CP)
Inflection (I, heading IP)
Tense (T, heading TP)
Case Filter (Case Theory)
Case is typically assigned by lexical and functional heads, i.e., V head, I head.
Verbs and prepositions assign Accusative case to their complements.
If a clause is finite [+Tense], then Nominative case is assigned to the subject NP by I.
Lexical Verbs
bring meaning to an utterance
Auxiliary Verb
Show grammatical function (tense/aspect)
Modal Verbs
mood (conditional/subjunctive/future)
Tense
Temporal operators express the relationship between action and time.
present
action or state which occurs:
at the time of the speech act
Carlton carves a canoe.
on a regular basis
Carlton dances with Martha every Saturday night.
in the near future
I arrive from Paris tomorrow afternoon.
past
action or state that occurs before the speech act.
Carlton carved a canoe out of an oak tree.
Future
In English, future is not a grammatical category (tense). Certain modal auxiliaries such as will and shall point to the future, but they do not constitute tense.
Aspect
Aspect operators indicate the temporal state or context of a verb
how time passes
how an action unfolds within a timeframe.
can combine with different tenses to give finer distinctions.
Aspect distinguishes between actions which are successfully completed once (perfect) and those which are not (imperfect), as well as those which occur over a period of time (progressive).
Modal Operators
Modal operators situate an action or state of being within a context other than temporal.
can, may, could, might
will, would, should, is to, was to
must, ought to, need, has to, had to
Parsing
Parsing is the process of computing the syntactic structure of a sentence to gain its meaning.
Parser: the element in the brain which constructs and interprets phrases, creates with them complex structures, and can cause transformations or movement.
Understanding parsing mechanisms is crucial for understanding not only typical human speech but also that of impaired speakers/listeners.
Canonical sentence strategy – we rely on the canonical sentence structure as the main cue for parsing. (English is SVO)
Syntax/Semantic Interface
Slobin (1966a) worked with reversible and irreversible passives.
1. Reversible
Ex: The ghost was chased by the robot.
The robot was chased by the ghost.

‘ghost’ and ‘robot’ share semantic properties which make this process ‘work.’

2. Irreversible
Ex: The flowers were watered by the robot.
*The robot was watered by the flowers.
This showed that more than simply syntactic information is used in parsing.
Methods of Parsing
Several methods were developed to explain how parsing takes place.
Autonomous
Only syntactic information, e.g., simple word order, is used to construct a syntactic representation.
Interactive
Several sources of information are used to parse a sentence (semantic information of a word, context, etc.)
One-stage
Syntactic and semantic information are used together to construct the syntactic structure of a sentence.
Two-stage
1. Syntactic information alone is used (autonomous model).
2. Semantic information is used.
Clause boundary recognition
Fodor and Bever (1965)
In her hopes of marrying Anna was surely impractical.
Your hope of marrying Anna was surely impractical.
A click was inserted in the middle of ‘An-na.’ In the first sentence, subjects reported hearing the click before ‘Anna’; in the second sentence, after it.
Reber and Anderson (1970) ran a similar experiment in which there were no clicks, but subjects were told there were.
Subjects claimed that they thought they heard the clicks at phrase boundaries. Subsequent studies have shown that parsing is incremental: subjects do not wait to hear an entire clause before deciding where to place boundaries. They hear pieces of information, anticipate what may be coming, and place boundaries at the smallest increments of word clusters.
Anticipation is based on the verb.
What would you anticipate following the verb ‘eat’ in a sentence such as “Mary will eat…”?
So we see that parsing is a process involving syntactic and semantic properties. Mehler (1963) found that when subjects forgot sentences, they tended to reconstruct them.
Verbal Structure
We have looked at 4 verb types in terms of structural requirements for well-formedness:
Transitive VP --> NP (XP)
Intransitive VP --> nothing
Ditransitive VP --> NP PP
Copula VP --> (NP) (PP) (AP)
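These four frames can be encoded as subcategorization entries and checked mechanically; a minimal sketch, with the frame inventory taken from the list above and the parenthesized categories treated as optional:

    # (category, required?) slots for each verb type, from the rules above.
    FRAMES = {
        "transitive":   [("NP", True), ("XP", False)],
        "intransitive": [],
        "ditransitive": [("NP", True), ("PP", True)],
        "copula":       [("NP", False), ("PP", False), ("AP", False)],
    }

    def well_formed(frame, complements):
        """Match the complement list, in order, against the frame's slots."""
        i = 0
        for category, required in FRAMES[frame]:
            if i < len(complements) and complements[i] == category:
                i += 1
            elif required:
                return False              # a required complement is missing
        return i == len(complements)      # no leftover, unselected complements

    print(well_formed("ditransitive", ["NP", "PP"]))  # True:  gave [the book] [to Sue]
    print(well_formed("ditransitive", ["NP"]))        # False: *gave the book
    print(well_formed("intransitive", ["NP"]))        # False: *slept the bed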
Unaccusatives
the verb "fell"
Unergatives
The agent carries out the action of the verb; however, the verb's requirements are such that no other arguments are permitted.
Cry, laugh, run, etc.
Case Marking in English
Nominative: I, you, he, she, it, we, you, they
Accusative: me, you, him, her, it, us, you, them
Genitive: possessive
Case Assignment
Case is assigned at surface structure.
NPs will move to satisfy the Case filter.
Subject is assigned Nominative case
He melted the ice. (Nom.)
An NP within an NP showing possession is assigned Genitive case
Mary’s ice melted. (Gen.)
Object and everything else takes Accusative case.
Mary melted the ice. (Acc.)
The ice was melted in the sunlight. (‘the sunlight’ receives Acc. within the PP; ‘the ice’ has moved to subject position, where it receives Nom.)
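A closing sketch of the case-assignment rules above as a lookup over structural positions, plus a Case Filter check that every overt NP receives case. The position labels are invented for illustration:

    CASE_RULES = {
        "subject_of_finite_clause": "Nominative",   # assigned by finite I
        "object_of_verb":           "Accusative",   # assigned by V
        "object_of_preposition":    "Accusative",   # assigned by P
        "possessor_in_np":          "Genitive",
    }

    def passes_case_filter(nps):
        """nps maps each overt NP to its structural position; all must get case."""
        return all(position in CASE_RULES for position in nps.values())

    # 'Mary melted the ice in the sunlight.'
    print(passes_case_filter({
        "Mary":         "subject_of_finite_clause",
        "the ice":      "object_of_verb",
        "the sunlight": "object_of_preposition",
    }))   # True: every NP is assigned case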