Empiricists
Knowledge comes from an individual’s own experience
•Depends on experience and learning
•Recognize individual differences in genetics but emphasize that human nature is malleable
•People are the way they are, and have the capabilities they have because of previous learning
•Learning takes place through mental association of ideas (Locke)
•Environment plays a role in determining abilities
Nativists
Role of native ability is emphasized over role of learning in the acquisition of abilities...
•Example: short term memory is due to innate structures of the human mind present at birth and is not learned by experience
•Emphasized genetics and individual differences
Functionalists
(William James, Dewey, Thorndike): Why does the mind work the way it does?
•Assumed the way the mind works has a great deal to do with its function
•The most important thing the mind does is to let the individual adapt to the environment
•Functionalism drew heavily on Darwin’s theory
•Tried to extend biological conceptions of adaptation to psychological phenomena
•Focused on why the mind works the way it does and on its purposes (e.g., habits)
•Believed in studying mental phenomena in real life situations
Structuralists
(Wundt): defined the simplest unit of mind through introspection
•What are the elemental components of the mind?
•Wundt, 1879, founded the first institute for research in experimental psychology
•Structuralists believed that any conscious thought/idea was a result of a combination of senses defined by 4 properties: mode (visual, auditory, tactile), quality (color, shape), intensity, & duration
•Believed in a lab setting to study the true nature of the mind
Behaviorists
(Pavlov, Thorndike, Watson, Skinner, Tolman): rejected study of introspection, observable behavior systematically studied
•Focused on the scientific study of behavior and banished unobservable mental states/subjective processes (e.g., believing, hoping)
•Watson believed the scientific study of mental phenomena was not possible
•Watson= little Albert, classical conditioning
•Skinner= operant conditioning, reward
•Skinner believed images/thoughts were proper subjects of study but he objected to treating mental events/activities as fundamentally different from behavioral events/activities
•Skinner believed images and thoughts were more or less verbal labels for bodily processes, objected to hypothesizing the existence of mental representations (internal depictions of info)
•Mental events are triggered by external environmental stimuli and give rise to behaviors
•Tolman demonstrated that animals have expectations and internal representations that guide behavior
•Tolman= worked with animals, interested in latent learning & the mental maps animals created
Gestaltists
(Koffka, Kohler, Wertheimer)
•Psychological phenomena could not be reduced to simple elements but had to be studied in entirety
•Studied how people impose structure & order on their experiences
•Studied people’s subjective experience of stimuli
•The mind imposes its own structure & organization on stimuli, organizes perceptions into wholes rather than discrete parts, the wholes tend to simplify stimuli
•The whole is more than the sum of parts
What were key foci of Piaget? Galton? Chomsky?
I. Piaget: focused on genetic epistemology: how development influences cognition
•Children in different stages of cognitive development use different mental structures to perceive/think about the world

II. Galton: focused on individual differences
•Measured intellectual ability
•Studied ability of mental imagery
•Invented tests & questionnaires to assess mental abilities

III. Chomsky: focused on linguistics (the study of language), made clear that people routinely process enormously complex info
•Chomsky’s work showed that behaviorism could not adequately explain language
•Revolutionized linguistics and the importance of studying how people acquire, understand, & produce language
What perspectives within psychology and events outside of psychology contributed to the cognitive revolution & the development of the interdisciplinary field of cognitive science?
•The cognitive revolution was the rejection of the behaviorist assumption that mental states were beyond the realm of scientific study
•Revolutionaries believed no complete explanation of a person’s functioning could exist that did not refer to the person’s mental representations of the world
•Perspective within psychology: challenging the fundamentals of radical behaviorism, which included the concept that mental representations were not needed to explain behavior
•The cognitive revolution was an outgrowth of:
•Rejection of Behaviorism
•Human factors research, necessitated by WWII and communications engineering, and an increased reliance on computers & technology
•Linguistics paradigm shift: Chomsky argued that language is innate and cannot be explained by reinforcement
•Computers & artificial intelligence
Information Processing Approach
•Cognition is assumed to occur serially (in discrete stages)
•Analogy between human cognition & computerized processing of info
•Cognition can be thought of as info passing through a system (aka mind)
•People’s cognitive abilities can be thought of as systems of inter-related capacities
•People, like computers, perform cognitive feats by applying a few mental operations to symbols
•Some assumptions:
•People are general-purpose symbol manipulators
•Info is processed in stages with multiple stores
•Info is dealt with serially
•Info processing is rooted in structuralism because its followers attempt to identify the basic capacities and processes we use in cognition
•Uses experimental & quasi experimental techniques
•Problems: overly simplistic, too reductionistic in the effort to control things, not always compatible with brain data
Connectionist Approach (also known as parallel distributed processing (PDP) or neural networks)
•Competitive with info processing
•Each unit is connected to other units in a large network
•There is a pattern of excitation/inhibition in a network of connections among simple units that operate in parallel (many cognitive processes happen at the same time), no central processor
•Knowledge is not stored in various storehouses but within connections between units
•Feldman & Ballard argue this is more consistent with the way the brain functions compared to the info processing approach
•Connectionist models are more concerned with how cog processes can actually be carried out by the brain
•PDP: computer programs model the brain’s activity
•PDP: Use simple units (likened to our body’s ‘neurons’) which are connected in parallel to other units
•Units represent sentences, letters, words, etc
•Knowledge occurs in connections between units
•One focus is to break down higher level information (sentences) to lower level info (straight/curve lines) or alternatively, integrate lower info into higher level info
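A minimal sketch of the core PDP idea, assuming NumPy is available: a handful of simple units update in parallel, knowledge lives entirely in the connection weights (the unit names and weight values below are invented for illustration), and there is no central processor.

```python
# Toy connectionist network: three simple units, no central processor.
# Unit names and weight values are invented for illustration only.
import numpy as np

units = ["A", "B", "C"]
# Positive weights are excitatory connections, negative weights inhibitory.
W = np.array([
    [0.0,  0.5, -0.3],   # inputs arriving at A (from A, B, C)
    [0.5,  0.0, -0.3],   # inputs arriving at B
    [-0.3, -0.3, 0.0],   # inputs arriving at C
])

a = np.array([1.0, 0.0, 0.0])                    # external input activates unit A
for _ in range(10):
    a = np.clip(a + 0.2 * (W @ a), 0.0, 1.0)     # every unit updates at once

print(dict(zip(units, a.round(2))))              # A excites B; A and B suppress C
```

Nothing is stored "in" any single unit; changing what this toy network "knows" means changing its weights.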
Evolutionary Approach
•How a cognitive process has been shaped by environmental pressure over a long time period
•Idea that humans have specialized areas of competence produced by our evolutionary heritage
•What cognitive operations and approaches are most effective?
•Operates via natural selection, which maintains mechanisms well suited to the organism’s environmental conditions
Ecological Approach
•Overlaps the most with the evolutionary approach
•Cognition is shaped by culture & context in which activities occur
•How noncognitive functions influence cognition in the real world
•Emphasis on the context in which cognitive processes occur, & how social/motivational factors influence
Naturalistic Observation
•Observer watches people in familiar, everyday contexts
•Advantages: real world occurrence, has ecological validity, relatively easy to do, typically doesn’t require a lot of resources to carry out, & does not require other people to formally volunteer for a study
•Disadvantages: lack of experimental control, cannot isolate the cause of different behaviors, observer may bring biases to the study and limit/distort the recordings made
•Naturalistic Observation—ecological validity at its best!
•Key limitations are lack of experimental control & the constraints imposed by researchers’ initial plans
Clinical Interviews
ask participants open-ended questions, follow up with more questions

•Depending on the participant’s responses, the interviewer follows up with one of many possible lines of questioning to try and focus on participant’s own thinking/experience
•Advantage: gives interviewer a little more influence over the setting in which the observations are conducted
Controlled Observation
•Advantage: gives interviewer a little more influence over the setting in which the observations are conducted
•Standardize setting
•Manipulate specific conditions to see how participants are affected
Experiments
Manipulate independent variables (IVs) and observe how dependent variables (DVs) change

•Advantage: having experimental control means that experimenter can assign participants to different conditions to minimize preexisting differences between them
•Isolates causal factors and allows us to draw causal conclusions
•A key concern is whether the results generalize
•Disadvantages: May fail to fully capture real world phenomena in the research design, lab setting may prevent participants from behaving normally
Why are experiments preferred?
Random assignment is important for making claims about causality

•Between subjects design: different participants are assigned to different conditions and the researcher looks for differences in performances between the two groups
•Within subjects design: the same participants are in more than one condition
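A small illustration of random assignment for a between-subjects design; the participant IDs and condition labels are invented. A within-subjects version would put every participant in both conditions, typically counterbalancing order.

```python
# Random assignment sketch for a between-subjects design.
# Participant IDs and condition labels are hypothetical.
import random

random.seed(1)
participants = [f"P{i:02d}" for i in range(1, 21)]
random.shuffle(participants)          # randomization minimizes preexisting group differences
half = len(participants) // 2
groups = {"experimental": participants[:half], "control": participants[half:]}
print(groups)

# Within-subjects variant: everyone does both conditions, order counterbalanced.
orders = [("experimental", "control"), ("control", "experimental")]
schedule = {p: orders[i % 2] for i, p in enumerate(sorted(participants))}
```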
Key Elements in Signal Detection Theory
•Signal detection theory can be used to detect important stimuli in war, transportation, & communications technologies.
•Signal detection theory allows researchers to tease apart the sensitivity that the participant has in discriminating between the noise and signal-plus-noise distributions (reflected by changes in d prime) and any bias that the individual may bring into the decision-making situation (reflected by changes in beta).
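A short sketch of the standard equal-variance Gaussian calculations behind d prime and beta; the hit and false-alarm rates below are invented for illustration.

```python
# Signal detection sketch: sensitivity (d') and bias (beta) computed from a
# hit rate and a false-alarm rate (both values invented here).
from statistics import NormalDist

hit_rate, fa_rate = 0.85, 0.20
z = NormalDist().inv_cdf              # inverse of the standard normal CDF

d_prime = z(hit_rate) - z(fa_rate)    # separation of the noise and signal+noise distributions
beta = NormalDist().pdf(z(hit_rate)) / NormalDist().pdf(z(fa_rate))  # likelihood ratio at the criterion

print(f"d' = {d_prime:.2f}, beta = {beta:.2f}")
```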
Purpose of Chronometric Studies
•Chronometric studies use subtractive methods when creating their models/dependent variables. The key assumption, called pure insertion (and deletion), states that a cognitive stage can be added to or removed from a task without affecting the other stages.
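A worked toy example of the subtractive logic; all reaction times below are invented. If pure insertion holds, subtracting the RT of a task that lacks a stage estimates that stage's duration.

```python
# Donders-style subtraction under the pure insertion assumption.
# All reaction times (ms) are made-up illustrative values.
rt_simple = 250    # detect a stimulus and respond
rt_go_nogo = 330   # also discriminate which stimulus it was
rt_choice = 410    # also select between two responses

stimulus_discrimination = rt_go_nogo - rt_simple   # estimated duration of the added stage
response_selection = rt_choice - rt_go_nogo        # estimated duration of the added stage
print(stimulus_discrimination, response_selection)
```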
Advantages of Cascade Models
•Cascade models are better for approximating cognition because they assume that information flows continuously between stages, so mental processes can overlap in time across many stages.
•Additive factors logic is limited because it holds only if you assume a discrete serial stage model of processing info (the output of one stage is not passed on to the next stage until the first stage is complete).
Hindbrain
*develops from neural tube!

•Medulla oblongata: regulates life support functions such as blood pressure, heart rate, respiration, sneezing, coughing, vomiting; transmits info from spinal cord to brain
•Pons: acts as a neural relay center, facilitating crossover of info between the left side of the body & the right side of the brain, and vice versa. Involved in balance & in processing visual & auditory info. Yoder question: the pons is also involved in arousal & consciousness, sleep, & automatic body functions
•Cerebellum: coordinates muscular activity, balance, motor behavior, & coordination. Implicated in the ability to shift attention between visual & auditory stimuli
Midbrain
•Inferior (auditory) & superior (visual) colliculi: involved in relaying info b/w other brain regions
•Reticular formation: keeps us awake & alert; involved in the sudden arousal needed in response to a threatening stimulus
Forebrain
•Thalamus: relays info, especially to cerebral cortex
•Hypothalamus: controls pituitary gland by releasing hormones, controls homeostatic behaviors... eating, drinking, temp control, sleeping, sexual behaviors, emotional reactions
•Hippocampus: formation of long term memory
•Amygdala: emotional behavior/learning
•Basal ganglia: motor behavior
Cerebrum
Largest structure in the brain... Its outer layer, the cerebral cortex, is divided into 4 lobes. The cortex exchanges info with the thalamus.
Central sulcus
divides the frontal and parietal lobes
Parietal lobe
processing of sensory info (pain, temperature, pressure, touch) from the body. Somatosensory cortex is located here, and is contained in the postcentral gyrus.
Temporal lobe
auditory info, face recognition. Damage to this can result in memory disruption.
Frontal lobe: has 3 separate regions:
•Motor cortex: directs fine motor movement, located in the precentral gyrus
•Premotor cortex: planning movement
•Prefrontal cortex: involved in executive functioning... planning, making decisions, strategies, inhibiting inappropriate behaviors, & using working memory to process info
•The prefrontal cortex shows the longest period of maturation (it is one of the last brain regions to mature) and is also among the first to decline during aging.
R & L hemispheres
•Right hemisphere: abstraction, more holistic, recognize patterns, art & creative thinking, more visual & interpretation, intuition/faith, face recognition, music, visual imagery, imagination. Smaller than left hemi in general.
•Left hemisphere: processing things serially, logic, reasoning, rationalization, problem solving, language, math
•Males are more lateralized than females
CT scans
•Beam of X-rays passed through the body from diff angles; different densities of organs absorb X-rays differently, allowing visualization of structures
•Used to pinpoint areas of brain damage & make inferences about the relative age of the injury
•Limit: exposure to radiation
MRI
*like CT, provides info about neuroanatomy
•Person is surrounded by strong magnetic fields that align the nuclei of hydrogen atoms; radio waves are then directed in, and the resulting signals are collated into pictures of brain structure
•Strengths: does not require exposure to radiation, often permits clearer pictures than CT
•Limits: tunnel-like machine, so people with claustrophobia are not good candidates for this technique; people with pacemakers or metal in their bodies are not candidates because the magnetic fields from the scan interfere with such devices
fMRI
*relies on blood’s magnetic properties
•Brain regions showing activity have a change in the ratio of oxygenated to deoxygenated blood
•Strengths: noninvasive, non-radioactive
•Because blood has magnetic properties, fMRI lets us see the brain almost "in action"
•Many versions of magnetic resonance imaging exist, depending on your goals and programming
•Specialized examples: diffusion tensor imaging (DTI), diffusion weighted imaging, magnetization transfer imaging, fluid attenuated inversion recovery, etc.
•DTI measures the displacement of water molecules, allowing anatomical connectivity to be mapped
PET scan
*involves a radioactively labeled compound; measures blood flow to diff regions of the brain, allows electronic reconstruction of a picture of the brain, and shows which areas are more active at a particular time
•A PET variation uses fluorodeoxyglucose and measures metabolic changes
•PET allows detection of local changes in energy demands: active areas require glucose and oxygen, which increases blood flow to them
•Limits: requires expensive equipment that is not widely available; the time course of brain activity is hard to pinpoint
SPECT
*measures cerebral blood flow
•Similar to a PET scan but does not involve expensive equipment of PET
•SPECT uses radiation (like CT and PET)
•SPECT (single photon emission computed tomography) is cheaper than PET but also uses radiation
•In both PET and SPECT, activity is averaged over time
EEG
*detects different states of consciousness
•Provides a continuous measure of brain activity
•Metal electrodes positioned all over the scalp
•EEGs detect the brain’s electrical responses, which are used to infer states of consciousness
MEG
*measures changes in magnetic fields generated by electrical activities of neurons
•More precise localization of brain activity than EEG
•MEG uses the magnetic properties of neural activity to localize brain activity more precisely
ERP
measures an area of the brain’s response to a specific event in real time
Distal stimuli
Real world objects or things that are being perceived. To process the information about these stimuli, we must first receive the information through one or more sensory systems
*Ex. Trees, shrubs, books
Proximal stimuli
The reception of information and its registration by a sense organ; how our sensory organs receive and register information
*Ex. Light waves reflecting off the trees to the back of the eye forming the retinal image
Percept
Meaningful interpretation of proximal stimuli
*Ex. Interpreting that the stimulus is a tree, shrub, person, etc.
Bottom-up Processing
Perceiver starts with small amounts of information from the environment and combines the info in various ways to form a percept.
Form a percept from only the information of the distal stimulus.
These processes are relatively unaffected by previous learning.
Involves automatic, reflexive processing that takes place even when the observer is passively regarding the information.
Building information from bits of info from the environment that build to specified output; building a database from nothing
*Ex. Edges, shapes
Top-down Processing
The perceiver’s expectations, theories, or concepts guide the selection and combination of the information in the pattern recognition process
These expectations guide where you look, what you look at, and how you put the information together
Ex. Knowing you are in your dorm because of your knowledge of how close the trees and other things are to your window.
Template Matching Theory (Bottom-up)
We have the stimuli represented in our memory circuits somewhere
An unknown incoming pattern is compared to all of the templates and identified by the template that best matches it
Every object, event, or other stimulus we encounter and want to derive meaning from is compared to some previously stored pattern, or template
The process of perception involves comparing incoming information to the templates we have stored, and looking for a match
If a number of templates match or come close, we need to engage in further processing to sort out which template is appropriate
Support
- Certain cells respond to certain objects
- We sometimes detect patterns from templates
Problems
- For this to work, we would have to have millions of templates
- How would we be able to develop new templates?
- People recognize many patterns as more or less the same thing, even when the patterns greatly differ
Ex. Recognizing a variety of different handwritings as saying the same thing
- In everyday life, much of the stimulus info we perceive is far from regular
Possible solutions/explanations
- Templates may be processed simultaneously
- Preprocessing allows us to recognize new objects
- Context helps us narrow template searches
- There are technological examples of template matching, but they work only with clean stimuli for which we know ahead of time which templates may be relevant
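A minimal sketch of template matching on tiny made-up binary "images": the unknown input is compared cell-by-cell to every stored template and identified by the best match.

```python
# Template matching sketch: classify an unknown pattern by its closest
# stored template. The 3x3 letter templates are invented toy patterns.
import numpy as np

templates = {
    "L": np.array([[1, 0, 0],
                   [1, 0, 0],
                   [1, 1, 1]]),
    "T": np.array([[1, 1, 1],
                   [0, 1, 0],
                   [0, 1, 0]]),
}

unknown = np.array([[1, 0, 0],
                    [1, 0, 0],
                    [1, 1, 0]])          # a slightly degraded "L"

# Count matching cells for each template, then pick the best match.
scores = {name: int((unknown == t).sum()) for name, t in templates.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)                # the "L" template wins, 8/9 cells
```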
Feature Analysis Theory (Bottom-up)
We break down stimuli into their components, using our recognition of those parts to infer what the whole represents
Recognition of a whole object depends on the recognition of its features
Break down sensory info into parts to extract info
- Create feature lists
Visual stimuli: broken into lines and curves
Auditory stimuli: broken into phonemes and voice onset times (VOTs)
Identify the pattern by reconstructing it from components stored in LTM
Decision stage: the set of extracted features is compared with features in LTM
Support
- There are certain cells that respond to specific features such as borders
- Neisser visual search task: similarities between target letters and nontarget letters make the search much harder (e.g., Z and Q), but when the letters have different features the target is more easily detectable (e.g., A and B)
- Reduces memory load with finite set of features
- Fits with physiological data—specific retinal & cortical cells respond differently to particular features (other cells respond to objects)
- Fits behavioral data on human confusion & errors and stimulus degradations
- Data driven by features of the sensory system
Problems
- Hard to define what is meant by “feature” (e.g., face?)
- How do we know which ones to use? (remember we have to decide we know what it is)
- Do different stimuli have different sets of features since vertical lines differ greatly based on object?
- How do we put features together so fast?
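A sketch of the decision step in feature analysis, using an invented feature vocabulary: the features extracted from the input are compared with the feature lists stored in LTM, and the best-overlapping entry is chosen.

```python
# Feature analysis sketch: compare the detected feature list with feature
# lists stored in LTM. The feature names and letter entries are invented.
stored_features = {
    "A": {"left_oblique", "right_oblique", "horizontal_bar"},
    "E": {"vertical", "horizontal_top", "horizontal_mid", "horizontal_bottom"},
    "F": {"vertical", "horizontal_top", "horizontal_mid"},
}

detected = {"vertical", "horizontal_top", "horizontal_mid"}   # features extracted from the input

def overlap(a, b):
    # Jaccard-style score: shared features relative to all features involved.
    return len(a & b) / len(a | b)

scores = {letter: overlap(detected, feats) for letter, feats in stored_features.items()}
print(max(scores, key=scores.get), scores)   # "F" matches best; "E" is the closest competitor
```

Note how letters sharing many features (E and F here) end up confusable, which is the pattern-of-errors argument the theory leans on.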
Structural Theory (Biederman)
*extended version of feature analysis; segments objects into simple geometric components called geons
- There are 36 of these and they can represent thousands of objects we can recognize when they are combined
- Not only do we pay attention to what the geons are but also the arrangement of the geons
- Comparable to phonemes
- Support: when there are incomplete drawings, we can identify the object if the intact part of the picture includes object vertices (segments that allow the identification of component geons)
Prototype Matching Theory
An idealized representation of some class of objects/events
Explain perception in terms of matching an input to a stored representation of information
Instead of being a whole pattern that must be matched exactly or closely, the stored representation is a prototype: an idealized representation of some class of objects or events
Register information & compare it to LTM. An approximate match (not exact) is expected.
Objects may share only some features.
The more features & spatial relations objects share with info in memory, the greater the chance of a match
Prototypes are created simply by exposure to information
The more features a particular object shares with a prototype, the higher the probability of a match
Our brains are hardwired to pick up patterns even when we don’t know we are doing it
Support
- Posner and Keele: participants learned to classify dot distortions into groups based on the original pattern from which each distortion was derived. Participants correctly classified about 87% of the old stimuli, 67% of the new, and 85% of the prototypes. See page 77 for more details
- Cabeza et al found that participants were more likely to recognize prototype faces they had never actually seen before than to recognize other, less prototypical new faces
Problems to ponder
- How do we form and use prototypes?
- How do we know what prototype to match sensory input with?
- How do we recognize prototypes we have never experienced?
- What kind of processing is involved in matching?
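A sketch in the spirit of the Posner and Keele dot-pattern studies (the coordinates, noise levels, and category labels are invented): a prototype is abstracted simply by averaging experienced distortions, and a new pattern is classified by its distance to each category's prototype.

```python
# Prototype matching sketch: the stored representation is the average of
# experienced exemplars, and classification goes to the nearest prototype.
# All coordinates and noise levels are invented.
import numpy as np

rng = np.random.default_rng(0)
proto_A = rng.uniform(0, 100, size=(9, 2))          # "true" 9-dot source patterns
proto_B = rng.uniform(0, 100, size=(9, 2))

def distortions(proto, n, noise=8.0):
    return [proto + rng.normal(0, noise, proto.shape) for _ in range(n)]

# Prototypes are abstracted just from exposure to distorted exemplars.
learned_A = np.mean(distortions(proto_A, 20), axis=0)
learned_B = np.mean(distortions(proto_B, 20), axis=0)

new_item = proto_A + rng.normal(0, 8.0, proto_A.shape)   # a never-seen distortion of A
dists = {name: np.linalg.norm(new_item - p)
         for name, p in [("A", learned_A), ("B", learned_B)]}
print(min(dists, key=dists.get), {k: round(v, 1) for k, v in dists.items()})
```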
Constructivist perspective
Says we actively add to or distort info in the proximal stimulus to get a percept
Direct perception (Gibson; top-down and bottom-up?)
Says we directly acquire info from the environment
Gibson believed that the perceiver does very little work, leaving little need to construct representations and draw inferences
Proposed that perception consists of the direct acquisition of info from the environment
Certain aspects of stimuli remain invariant despite changes over time or in our physical relationship to them
Different biological organisms have different perceptual experiences because different organisms have different environments, different relationships to their environments, or both
Certain things are invariant regardless of time or our relationship to them
Patterns of motion are very informative to perceiver
Explains how people adjust to the environment- we perceive shapes & objects but we also perceive affordances, acts that fit with or are permitted by objects
Context influences our pattern recognition & narrows choices: it sets up expectations!
Accuracy & length of time needed to recognize objects varies with context
Problems:
- Although Gibson is considered “hip,” his proposals are not well defined: affordances are vague, and it is hard to know what is invariant & what is not
Connectionist Models
Connectionist models can be used to recognize patterns
All sensory info processed simultaneously
Parts of stimulus & whole stimulus processed simultaneously
Knowledge or meaning in connections
Activation of any stimulus may activate or inhibit others
Can be used to explain the word superiority effect
Input is processed at several different levels, whether in terms of features, letters, phonemes, or words
Different levels of processing feed into one another; each level is assumed to form a representation of the information at a different level of abstraction
Once a node is activated, that activation spreads along that node’s excitatory connections to other nodes
Parallel processing-simultaneous
Knowledge or meaning resides in the connections between units
Activation can be inhibitory or excitatory
Letter detection is influenced by context
- Function words, as opposed to content words, are more likely to disappear
- McClelland & Rumelhart’s connection model of word perception
Each level of processing forms representations at a different level of abstraction (features < letters < words)
Activation spreads along excitatory or inhibitory connections between nodes, although only a limited amount of activation is possible
PET scans discriminate real from fake words (Peterson et al., 1990)
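A toy, two-level sketch of the interactive-activation idea behind the word superiority effect. The vocabulary, weights, and update rule are invented simplifications, not McClelland & Rumelhart's actual model; the point is that top-down feedback from word nodes boosts a degraded letter.

```python
# Toy interactive-activation sketch: letter nodes feed word nodes (bottom-up)
# and word nodes feed excitation back to their letters (top-down).
# Vocabulary, weights, and update rule are invented simplifications.
letters_in = {"W": 1.0, "O": 1.0, "R": 1.0, "K": 0.4}     # "K" is a degraded input
words = {"WORK": set("WORK"), "WORD": set("WORD"), "FORK": set("FORK")}

all_letters = set().union(*words.values())
letter_act = {l: letters_in.get(l, 0.0) for l in all_letters}
word_act = {w: 0.0 for w in words}

for _ in range(5):
    for w, ls in words.items():                           # bottom-up pass
        word_act[w] = sum(letter_act[l] for l in ls) / 4
    for w, ls in words.items():                           # top-down feedback
        for l in ls:
            letter_act[l] = min(1.0, letter_act[l] + 0.1 * word_act[w])

print(round(letter_act["K"], 2))   # ends above its raw input of 0.4: word context helps
```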
Divided Attention
Cognitive resources allocated to two or more tasks simultaneously... with practice, a task can become automatic and it's easier to divide one's attention
- William James described automatic tasks as habitual processes
Selective Attention
Focusing cognitive resources on one task to the exclusion of others.
How does Broadbent's filter theory of attention work?
- since the information we can attend to is limited, a filter lets some information through based on physical characteristics, but blocks out others.
- Attention moves through a bottleneck and is sorted by physical cues.
- early selection, sensory based model
- Unattended messages are not processed for meaning at all.
- Failure to filter is associated with poorer working memory.
- It is still possible to pay attention to two messages at once, for example, if the two messages contain little information or are presented slowly. If a message contains a great deal of info, it takes up more mental capacity and it is unlikely that more than one can be attended to at a time.
In dichotic listening tasks, what occurs and what do people typically comprehend? Review Cherry’s contributions.
-Participants in dichotic listening tasks are played 2 different messages at the same time and are asked to shadow (repeat aloud) one of them. Cherry showed that people can, with few errors, shadow a message spoken at a normal to rapid rate. Participants accurately report whether the unattended message contained noise or speech and can identify the gender of the voice. They did not notice a change from English to German or when lists of words were continually repeated. With backwards speech, participants at most noticed the speech was vaguely odd.
Be familiar with Wood and Cowan (1995) on backwards speech.
- In a dichotic listening task, participants were played backward speech 5 min into the task for 30s. Previous studies had shown that ~50% of participants noticed the switch (the book doesn’t give the % for their study). Those who noticed the change made more shadowing errors during the 30s period. Their attentional shift to the unattended message was unintentional and occurred without awareness.
Conway et al (2001) study which identified a role for working memory.
- Conway believed these participants had their attention captured by the backwards speech and had a lower working memory span (less able to focus). They are less able to control distractions. 20% of participants with higher working memory spans noticed their names in an unattended channel, compared to 65% of those with lower working memory.
Describe Treisman’s model and review the implications of MacKay and Pashler’s more recent follow-up studies.
- Treisman’s attenuation theory: relies on physical, linguistic, and semantic analyses; what info is attended to depends on meaning. We do as little processing as is necessary (e.g., if messages differ in physical characteristics, then we process them only to that level).
- instead of the unattended message being completely blocked out, the volume is turned down.
- cocktail party effect: if selection were based on physical cues alone, this could not happen. It depends on content, priming, and things that have meaning for us (our name, etc.)
- priming (responding to one stimulus in a manner based on prior exposure to another) explains why, when two messages “switch ears” in a dichotic listening task, participants also switch ears: they were primed to detect the words coming next in the message they were following
- early selection: some words have lower thresholds and require little mental effort by the hearer to be recognized (like your name) and the filter adjusts to what you need
- MacKay showed that the presence of a word in the unattended message (e.g., “river”) helped clarify an ambiguous sentence in the attended message (e.g., one about a “bank”). At least some meaningful aspects of the unattended message are attended to. It is easier to attend to info when we are expecting it.
- Pashler showed that MacKay’s effect is greatly reduced when the message in the unattended channel contains a series of words instead of just one. If the unattended message consists of just one word, it temporarily disrupts the attention paid to the attended message.
- Unlike filter theory, information is still available in the unattended message, not blocked.
Review research exploring early selection, late selection and feature integration theories of attention.
- early selection: Broadbent’s filter theory, Treisman’s attenuation theory, cocktail party effect
-see other slide for details
- late selection: Deutsch and Deutsch theory; Norman
- all messages are processed to the level of meaning, but to different degrees. Many of these are explained by attentional lapses. Info in the unattended message receives some processing.
- at least recognition of familiar objects and stimuli are processed
- the bottleneck is located later in processing, after certain aspects of meaning have been extracted.
- Galotti notes that it seems unlikely that unattended messages are processed for meaning to the same degree as attended messages.
- feature integration: also Treisman’s model; we perceive objects by first registering features of objects like color and shape (the divided-attention or preattentive stage) and then, in the focused attention stage, gluing the features together into a unified object; this second stage is serial.
visual search tasks (treisman and gelade)
- detecting a circle, or something blue, or any other single feature is easy
- searching for objects defined by a combination of features (a blue circle in a mix of shapes and colors) is not automatic and requires attention to accomplish
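A sketch of the reaction-time pattern feature integration theory predicts for these two kinds of search; the intercepts and slopes are illustrative numbers, not fitted data.

```python
# Feature vs. conjunction search sketch: feature search is parallel
# (flat RT across set sizes), conjunction search is serial (RT grows
# with set size). All timing constants are illustrative assumptions.
def predicted_rt(set_size, search):
    base = 450                              # ms; perceptual + response stages
    if search == "feature":                 # pop-out: target found preattentively
        return base
    slope = 25                              # ms per item inspected
    return base + slope * set_size / 2      # on average, half the items are checked

for n in (5, 15, 30):
    print(n, predicted_rt(n, "feature"), predicted_rt(n, "conjunction"))
```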
What is Kahneman's view of attention?
- attention, mental effort, and task differentiation theory: the availability of resources depends on arousal. Arousal is influenced by task difficulty, dispositions, and intentions
- difficult tasks engage a higher arousal level
- we pay more attention to things we are interested in or have judged important.
- Some tasks are data limited, meaning that performance depends entirely on the quality of the incoming data, not on mental effort or concentration
- spotlight analogy of attention... determines where we direct our attention
What variables impact resource allocation?
task difficulty, arousal level, enduring dispositions, momentary intentions, our interests and evaluation of demands on capacity
Describe Neisser’s schema model:
- argued that we don’t filter, attenuate, or forget unwanted material, instead unattended info is simply left out of our cog. processing
- We focus on what is important to us and attend to those things.
-change blindness
- Simons and Chabris inattentional blindness experiment: 46% of people don’t notice a gorilla/umbrella woman walking through a game of basketball passing when participants are instructed to count the # of passes. This shows that unexpected events can be overlooked and receive no processing b/c we only perceive events we attend to.
What has Posner and others work found about the biological substrates of visual attention?
- participants had to focus on a central point with 2 boxes on either side. An arrow appears or a box brightens on each trial. In order to shift attention, 3 steps must occur:
1.) Disengage: occurs in posterior parietal lobe
2.) move: occurs in superior colliculi
3.) Enhance: occurs in the pulvinar of the thalamus
- ADHD is associated with the enhance step.
- Posner also believes that there are three attentional networks: alerting network (frontal/parietal), orienting network that selects sensory input (frontal/parietal), and executive control (prefrontal cortex).
Describe automatic and control processes:
*automatic processing (1) occurs w/o intention, (2) occurs w/o conscious awareness, and (3) does not interfere with other mental activity. It also operates in parallel.
- controlled processing: operate serially, require attention, have capacity limits, under conscious control, used in non-routine tasks
- stroop task: not reading the words is hard (reading is automatic), so naming the color in which the words are printed is more difficult (controlled).
Be able to describe the paradigm and findings of Schneider and Shiffrin, to further elaborate.
- in a visual search, a letter in a group of #s pops out, but a letter in a group of letters is more difficult (no pop-out = not automatic; we must search for it). They varied the frame time, frame size (# of items on a card), and the # of targets to search for. DV = detection accuracy.
- In the consistent mapping condition (i.e., search for a # in a group of letters), only the length of time the frames were displayed influenced accuracy. People needed just 80ms to reach 95% accuracy. This is automatic.
- In the varied mapping condition (search for a letter in a group of letters), people needed 400ms to reach 95% accuracy, and accuracy depended on all 3 variables. This is controlled.
What key points might be added from Posner & Raichle? What does Galotti suggest about why these criteria are criticized?
- I think this is supposed to be Posner and Snyder (1975), who proposed the three criteria of automatic processing, because Posner and Raichle studied the brain areas that underlie the neural networks of attention. It is not explicit, but I think the point is that with practice, a controlled processing task can become automatic. She uses the example of playing videogames being hard for her, but easy for much younger kids with practice. A task that is controlled for one person can be automatic for another.
Treisman describes feature integration as another explanation of how attention occurs. What is the illusory conjunction?
- see different slide for FIT theory and its two stages (divided and focused attention).
- When attention is diverted or overloaded, participants make integration errors b/w features and objects (e.g., there is a blue Cadillac and a red Honda, and you see a red Cadillac). The erroneous combining of features from two stimuli is called an illusory conjunction. Putting features together requires attention and integration of information.
What can we learn from Spelke, Hirst & Neisser?
Examined dual task performance... how difficult is doing two or more tasks at once, and on what factors does this ability depend?
•Two Cornell students were tested over the course of 17 weeks, were asked to write down words dictated while they read short stories
- reading comprehension was periodically tested
•After 6 weeks: reading rates approached normal speeds, and reading comprehension scores while writing down words were comparable to comprehension scores while only reading stories
- The participants could also categorize the dictated words by meaning and could assemble relationships between words (without affecting reading speed/comprehension)
•Hypothesis: participants alternated their attention between the two tasks, although the authors argued that since reading speeds were comparable between conditions, the participants would have had to switch back and forth with no measurable lag

Follow-up study (Hirst, Spelke, Reaves, Caharack, and Neisser): same experimental setup, but some participants read short stories while others read encyclopedia articles
•Short stories = more redundant material, so requires less attention
•Encyclopedia articles = less redundant material, so requires more attention
After participants reached normal reading speeds and reading comprehension during dictation, their tasks were switched (short stories to encyclopedias, encyclopedias to short stories)
**Six out of seven participants performed comparably with new reading material, so participants were probably not rapidly switching attention between the two tasks (reading and writing words)
How can we optimize our effort?
By maximizing our level of arousal/state of alertness
-High arousal = more cognitive resources available to devote to various tasks
-But… level of arousal also depends on difficulty of task (easy tasks = less arousal)
- Arousal “allocation policy” is affected by:
Individual’s enduring dispositions (preference for some tasks over others)
Momentary intentions (motivation to do something before something else)
Evaluation of the demands on one’s capacity (the knowledge that a task you need to do right now will require a certain amount of your attention)
Cell phone Research
Dual-task performance - Strayer and Johnston: “Driving” study, participants used a joystick to move a cursor on a computer to keep it positioned over a moving target, and responded to the flashing of either a red or green light (meaning “brake” or “ignore,” respectively).
- Participants first did this task by itself, then did it either listening to the radio or talking on the cell phone
- Task by itself/with radio: no red lights missed, normal reaction time
- Task with cell: missed red lights, slow reaction time
Second study (same experimenters)
- Participants talked on a cell phone and either “shadowed” lists of words or performed a word-generation task (ex. Think of a word that begins with the last letter of the word study).
- Shadowing words did not lead to a reliable decrease in performance, but generating words did
Conclusions: there’s a limit on the number of things we can actually do at once
- When individual tasks get more demanding, it becomes harder and harder to do them simultaneously
Describe research on attentional capture.
-Theeuwes, Kramer, Hahn, and Irwin: Participants watched a screen with six gray circles with small figure 8’s inside. After 1000 milliseconds, all but one of the gray circles changed to red, and all the 8’s changed to letters. Only one circle remained gray. Participants were instructed to move their eyes to the only gray circle and decide as quickly as possible if the letter in it was a normal C or a reverse C.
- A seventh red circle would appear 50% of the time on the screen (without warning), and even though it wasn’t in the instructions, the participants tended to look at it, delaying their reaction time to make a decision on the actual task
Follow-up study: when participants were told where to anticipate the gray target circle in a specific location on the screen, they didn’t have the “attention capture” effect when the new, irrelevant stimulus appeared
•This tells us that top-down processes intentionally controlled by a participant can override the passive and reflexive attentional capture.
What do we know about inhibition in attention?
What is meant by negative priming?
Negative priming slows down responses to targets that were recently ignored (i.e., that recently served as distractors). This can happen across physical, linguistic, semantic, and even behavioral cues. So, if we have been cued to avoid dogs, our representations will not automatically engage with them even after this changes and engagement is encouraged. It is hard to imagine its utility, except that it slows down an automatic response.
How has this been used to study inhibition?
Why does inhibition matter anyway?
What do we learn from Tipper’s work?
Tipper suggests, partly based on primate work and disease, that negative priming illustrates an inhibitory mechanism at work. When inhibition fails to kick in, that may signal a breakdown in cognitive processes, such as is seen with schizophrenia and dementias. That is, there is no negative priming in people who suffer from these diseases. Failing to show negative priming may be a sign of neurological deterioration.
Attention Hypothesis of Automatization
Attention is needed during the practice phase of a task and determines what gets learned during practice
•Attention also determines what will be remembered from the practice
•“Learning is a side effect of attending: People will learn about the things they attend to and they will not learn much about the things they do not attend to.” (Logan et al.)
- Logan et al. experiment: college student participants were shown a series of two-word displays and asked to detect particular target words (e.g., words that named metals) as fast as possible
Some participants had words consistently paired together, others had shuffled words... the participants with paired words had an advantage over shuffled word participants, but only when participants were “forced” to pay attention to both words
•Ex: one word is colored green, then participant is asked to categorize only the green word
- Because the color captured attention, participants found it easier to ignore the distractor word
Psychological Refractory Period
the slowed response time to a second stimulus at short intervals between the first and second stimulus (shorter intervals = longer response time)
The PRP bottleneck is located at the stage in which a response is selected or chosen (rather than at the stage of perceiving the stimulus or the stage of executing the response).
- Stimulus 1 and stimulus 2 are perceived nearly simultaneously, but the second response can’t be produced until the first response has been completed (part B in diagram on p. 143)
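A sketch of the central-bottleneck arithmetic behind the PRP; the stage durations are invented. Because response selection for task 2 cannot begin until response selection for task 1 is finished, short SOAs inflate RT2, and the effect disappears at long SOAs.

```python
# Central-bottleneck sketch of the psychological refractory period.
# Stage durations (ms) are invented illustrative values.
def rt2(soa, perceive=100, select=150, respond=100):
    # Task 2's response-selection stage can start only after its own perception
    # AND after task 1 has cleared the bottleneck (perceive + select after
    # stimulus 1, i.e. perceive + select - soa after stimulus 2).
    selection_start = max(perceive, perceive + select - soa)
    return selection_start + select + respond

for soa in (0, 50, 100, 200, 400):
    print(soa, rt2(soa))     # short SOAs slow response 2; long SOAs do not
```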