148 Cards in this Set

  • Front
  • Back
  • 3rd side (hint)
Gestalt Principles
SPUCCRF
“The whole is more than the sum of its parts.”

Law of Prägnanz:
Individuals organize their experience in as simple, concise, symmetrical and complete a manner as possible.
We impose structure on what we see.
Group disparate elements in a visual scene into the most coherent and stable form.

Similarity
Proximity
Uniform connectedness
Closure or line termination
Collinearity (line orientation is close to a neighbor's)
Relatability or continuity (easy to connect 1 line to another)
Figure/ground organization (distinguish between a figure and background; factors distinguishing figure and background include: boundary belongs to the figure, surroundedness, size [figure is smaller], figure is higher contrast, convexity, and symmetry)

Superman's Pink Umbrella Can Cut Rainbows Fast
Binding Problem
How do we associate different features so that we perceive a single object?
Spatial co-occurrence is insufficient to answer the binding problem
Attention is required for some grouping processes
Visual Agnosia
Sight is unimpaired yet recognition fails
Viewpoint dependence
An object can be viewed from an infinite combination of possible angles and possible distances, each of which projects a slightly different 2D image on a plane and on the retina, varying size, orientation or both. All that is available from any one viewpoint is the two-dimensional projection. Even so we can recognize an object as 3D. Why and how?
Exemplar variation
There are many different instances of each object category. Any object category consists of many possible examples, yet we readily recognize dining chairs, beach chairs, office chairs, and rocking chairs as all being chairs.
Expertise hypothesis
A specialized neural system develops that allows expert visual discrimination and is required to judge subtle differences within a particular visual category. It is possible that the specialized neural system in the fusiform gyrus is responsible for any recognition process for which we have expertise.
Contrasting view:

Many—if not most—visual representations are spatially distributed throughout the ventral pathway. Perhaps the ventral temporal cortex often serves as an all-purpose recognition area for telling apart all different types of objects
Template Matching Models
A template is a pattern (representation in memory) that can be used to compare individual items (sensory input) to a standard (representation in memory).

Useful as long as the item to be recognized and the template to which the system compares it are almost identical and different from others.

Problems:

cannot accommodate variations in object size and orientation

cannot accommodate obstructions or transformations

Recognition often demands great flexibility
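
A minimal Python sketch (my illustration, not from the course material; the patterns are made up) of template matching as cell-by-cell comparison, showing how even a small distortion of the input breaks the match:

    # Template-matching sketch: compare a binary input grid to a stored
    # template, cell by cell (illustrative only).
    TEMPLATE_T = ["###",
                  ".#.",
                  ".#."]

    def match_score(image, template):
        # Fraction of cells on which image and template agree.
        cells = [(r, c) for r in range(len(template))
                 for c in range(len(template[0]))]
        agree = sum(image[r][c] == template[r][c] for r, c in cells)
        return agree / len(cells)

    upright = ["###", ".#.", ".#."]          # identical to the template
    shifted = [".##", "#.#", "#.#"]          # same letter, slightly distorted

    print(match_score(upright, TEMPLATE_T))  # 1.0 -> recognized
    print(match_score(shifted, TEMPLATE_T))  # ~0.22 -> match fails
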
Feature Models
Feature-matching models search for simple but characteristic features of an object; their presence signals a match.

May depend also on how difficult it is to see, and how closely the object matches the “canonical,” or traditional, picture of it.

Feature matching seems to be a mechanism for recognition that can be used by the brain to recognize categories of objects rather than individual entities.

Problems:

Could not distinguish objects with the same component features but arranged in a different spatial relationship
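
A toy Python sketch (hypothetical feature names, not from the source) of this limitation: a matcher that checks only which features are present cannot tell a T from a plus sign:

    # Feature-matching sketch: recognize by the SET of features present,
    # discarding their spatial arrangement (illustrative only).
    letter_t  = {"horizontal bar", "vertical bar"}   # bar on TOP of the stem
    plus_sign = {"horizontal bar", "vertical bar"}   # bars CROSSING at center

    def same_object(features_a, features_b):
        # Matches whenever the feature sets coincide, regardless of
        # where the features sit relative to one another.
        return features_a == features_b

    print(same_object(letter_t, plus_sign))  # True: T and + are confused
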
Evidence

Physiology (recordings from neurons)

Stabilized retinal images: the eye is constantly moving; when we capture the proximal stimulus with a camera while the eye is focused on something, we see different features with each movement

Visual search
Recognition by Components Theory (RBC)
The RBC model detects the geons and their spatial relations and attempts to match the assembled parts to a stored three-dimensional representation of a known object

Assumes that any three-dimensional object can be generally described according to its parts and the spatial relations among those parts.

Geons (parts): 36 geometric three-dimensional shapes, such as cylinders and cones, can be used to represent just about any object. Each geon is associated with a set of viewpoint-invariant, or nonaccidental, properties (present in the image regardless of the direction from which the object is viewed) that uniquely distinguish it from the other geons.

Spatial relations among geons must be defined: e.g. a cone might be “on top of” or “attached to the side of” a cylinder.

Problems:

Contradicted by the observation that many neurons fail to generalize across all possible views.

Unclear how it can be applied to our recognition of natural objects such as animals or plants.

Unclear how it can be applied to faces
Evidence:

Repetition Priming: We can recognize objects faster when the same geons have been previously activated. E.g. word recognition - lexical decision and geon priming experiment

Partial or degraded objects: Eliminating certain line segments makes recognition a much harder task than erasing other line segments. The crucial line segments appear to be the ones that clearly demarcate geons (non accidental properties).

Object complexity

Unusual orientations
Configural Models
Objects that share the same parts and a common structure are recognized according to the spatial relations among those parts and the extent to which those spatial relations deviate from the prototype, or “average,” object.

Explain how we recognize different individual examples of a category; especially successful in the domain of face recognition

In a configural model, specific faces are described by their deviations from the prototypical face, as defined by quantified average proportions in a population. All faces would have the same component parts in the same spatial arrangement, but their relative sizes and distances make each unique.

Only upright faces are processed in this special way. Inverted faces, like objects that are not faces, are processed in a piecemeal manner, whereas upright faces elicit more configural or holistic processing
Context Effects
When perception of an object is affected by its context/environment.

Context effects:
Determine how we see things (subjective contours, 13 vs. B).
Whether we perceive them at all (Star/parallelogram)
How well we can recall (objects out of context).

Feature and Group Processing:

Context—including our knowledge, beliefs, goals and expectations—leads to a number of different assumptions about visual features.

Brightness Illusion
Size Illusion
Grouping

Object Recognition:

Recognition of an object may be improved if it is seen in an expected context or a customary one, and impaired if the context is unexpected or inconsistent with previous experience.

Recognition is dependent on our previous experience with the world and the context of that experience.

Influence of context on recognition of simple objects may be based on allocation of attention, or strategies for remembering or responding to objects in scenes.

Context effects in object recognition reflect the information that is important for and integral to the representation of objects.

Word Superiority Effect
Face Superiority Effect
Ebbinghaus Illusion
An optical illusion of relative size perception. In the best-known version of the illusion, two circles of identical size are placed near to each other and one is surrounded by large circles while the other is surrounded by small circles; the first central circle (surrounded by large circles) then appears smaller than the second central circle (surrounded by small circles).

Discovered by Hermann Ebbinghaus (1850-1913)

Example of contrasting grouping context effect
Word Superiority Effect
The context of surrounding letters can manipulate the perception of a target letter. A whole word is recognized by the combined influence of all the letters, thus supporting the identification of each letter because of its context.

Context Effect for Object Recognition
Prosopagnosia
The inability to recognize different faces, caused by damage to the fusiform face area, a part of the temporal lobes.

This is a disorder of FACE recognition, but NORMAL (or near normal) object recognition.

Dissociation of one function, and not another, suggests perhaps that these two functions rely on different neural substrates (in other words, different parts of the brain).
Some prosopagnosics (e.g., a farmer, a bird-watcher) have other, non-face recognition disorders within their domains of expertise.

Specialized neural machinery for visual tasks that meet these two criteria:
1. Need to recognize individuals within a category e.g. People, birds, dogs, cows.
2. Category has to be a familiar one. If you don’t know anything about cows – not affected.

Possible Solution is two separate recognition systems:
One system breaks things down into parts and recognizes the parts, like feature detectors and geons
The other system pays attention to configurations of parts, which are important for faces.
Face Superiority Effect
Participants are better at learning the difference between two upright faces that differ only by the shape of a single element, such as the nose, than at learning the difference between two noses shown in isolation

Context Effect for Object Recognition
The context effect on nose identification disappears if the faces are inverted.

Parts of an upright face are not processed independently, but rather are recognized in the context of the whole face.

Our recognition of one part of an image is often dependent on our processing of other aspects of that image.
Brightness Illusion
Context effect for feature and group processing

We expect the brick wall to be “in reality” the same color throughout, so despite the evidence before us caused by changes in illumination across its surface, we believe it to be all the same color
Size Illusion
Context effect for feature and group processing

We assume objects maintain their “true” size across changes in apparent distance from the observer.
Grouping
Context effect for feature and group processing

Context effects produced by a group of items can make it difficult to see each item independently, but it allows us to see common attributes of many items at once. We can then perform operations on the group as a whole.

The context effects produced by a group can also be contrasting, making one odd item in a group look even more unusual than in fact it is. E.g. Ebbinghaus Size Illusion
Agnosia
The inability to recognize objects, resulting from brain damage
Change Blindness
Failure to detect changes in the physical aspects of a scene
Simons and Levin (1998)

An experimenter stopped pedestrians on a college campus to ask for directions. During each conversation, two people carrying a door walked between the experimenter and the pedestrian. As they did, the experimenter switched places with a second experimenter who had been concealed behind the door as it was being carried. This second experimenter then continued the conversation with the pedestrian. Only half the pedestrians reported noticing the change of speaker—even when they were explicitly asked, “Did you notice that I am not the same person who first approached you to ask for directions?”
Divided Attention
More than one source of input is attended to at the same time
Attentional Blink
A short period during which incoming information is not registered

The phenomenon of attentional blink also occurs for two objects (not just letters) that are presented in rapid succession.

The hallmark of attentional blink is the missed detection of a stimulus that is presented within a particular time frame after an earlier stimulus is presented. When stimuli are presented so quickly, attention to the first seems to preclude attention to the second—showing the failure to select items in time.
Repetition Blindness
The failure to detect the later appearance of a stimulus when the stimuli are presented in a rapid sequence has been termed repetition blindness

Repetition blindness can occur for words as well as for objects

It is believed that the failure to encode the second stimulus occurs because it is not individuated or selected as a distinct event when it rapidly follows the first. Instead, the second occurrence is assimilated to the first, and only one event is registered.

When we do not have much time, we do not form a second, separate representation of an object we have just processed and so are not aware of the repetition.
Kanwisher and colleagues (1997b) showed participants a sequence of nine serially presented displays with two or three consecutive pictures sandwiched in between visually “noisy” patterns called masks. A large masking field was also shown at the beginning and end of each trial.

Each image was shown for 100 milliseconds.

When the first and third picture in the series were identical, participants were markedly less likely to report seeing the third picture (the repeat).

This was also true when the first and third pictures depicted the same object even if the objects were of different sizes or were shown from different viewpoints.

When the first and third picture were different, participants had no difficulty reporting their identities
Dual Task Interference
The decrement in performance from dividing one’s attention between 2 sources of information

There is greater interference when the sources are both the same type of information than when the sources are different types of information

The failure to select information can occur even if the two sources of information are of two different types, or even if the information is presented in two different sensory modalities, although the interference is not as great as when the types of information are the same.
Hemispatial Neglect
A deficit of attention in which one entire half of a perceived visual, auditory or olfactory scene is simply ignored.

Often caused by a stroke that has interrupted the flow of blood to the right parietal lobe, a region of the brain that is thought to be critical in attention and selection

Do not attend to (that is, they fail to select) information on the side of space opposite the lesion

Not blind, but do not seem to orient toward information on the left side (opposite lesion) of the scene and attend to it

If left on their own, hemispatial neglect patients ignore the half of the scene opposite the lesion, but if their omission is pointed out they can reexamine the scene and obtain the information previously missed

This deficit is also present in mental imagery.

Very strong and salient information that appears on the neglected side of the input may successfully capture the patient’s attention in a bottom-up fashion

Top-down guidance may also be helpful: specifically instructing the patient to attend to the left may reduce the extent of the neglect, but such guidance must be frequently reiterated.

There may also be a failure to select information in time.

Spatial (left-right) and temporal attentional mechanisms interact in determining how much information is neglected.

Damage to the right hemisphere gives rise to neglect more often and with greater severity than does damage to the left hemisphere. Because the areas involved in processing language are generally in the left hemisphere, attentional and spatial processes may have shifted into the right hemisphere.
Disengaging Attention
1 of 3 mental operations in model of attention by Posner and colleagues.

For right hemisphere patients, when the cue directed attention to the non-neglected right side and the target appeared on the neglected left side, the patients had trouble disengaging attention from the good right side, and this deficit produced the dramatically slower target-detection times.

No “disengage” problem was apparent for targets on the non-neglected side when the preceding cue indicated the neglected side.
Response Bottleneck
The interference that arises when you try to select between multiple possible responses to even a single sensory stimulus

A bottleneck in attention can also occur when, even with a single sensory input, the outputs required are too great.

Coordinating multiple output responses is more difficult than simply making a single response. There is usually some associated cost or failure even when one is skilled.

The failures in motor output (e.g. slowing down of actions) when you try to do a number of things at once or in quick succession do not result from a limitation in the ability to program your muscles to move.
Focused Attention
Concentration on one source of input to the exclusion of any other
Change Deafness
Failure to detect changes in voices in an auditory scene
Moving Attention to a New Location
1 of 3 mental operations in model of attention by Posner and colleagues.

Patients with damage to the midbrain and suffering from a disorder called progressive supranuclear palsy seemed to have no difficulty with “disengage” or ”engage” operations but were slow in responding to cued targets in the direction in which they had difficulty orienting, suggesting a problem in moving attention to the cued location.
Engaging Attention in a New Location
1 of 3 mental operations in model of attention by Posner and colleagues.

Patients with lesions to the pulvinar, a part of the thalamus (a subcortical structure), were slow to detect both validly and invalidly cued targets that appeared on the side of space opposite their lesion, but performed well when the targets appeared on the intact side.

The thalamic patients cannot engage attention on the affected side.
Object Based Attention
When attention is directed toward an object, all the parts of that object are simultaneously selected for processing

Attention can be directed to a single object and all features of the object attended.

In object attention, more than one feature is simultaneously selected and the corresponding neural areas reflect this coactivation.
In one of the best-known studies participants saw in the center of a computer screen a rectangular box with a gap in one side and a line through the box.

When they were instructed to respond with two judgments about a single object—whether the box was big or small in relation to the frame of the screen and whether the gap was on the left or right side—accuracy of report was high.

Similar results were obtained when participants were asked to make the two judgments about the line itself

In a further condition, the two judgments the participants were asked to make concerned the box and the line, for example, the size of the box and the texture of the line. This time, although again no more than two judgments were required, accuracy fell significantly.

Both objects, box and line, were superimposed one on the other in the center of the screen, thus occupying the same spatial location; the results with one object (box or line) and two objects (box and line) therefore cannot be explained by preferential attention to a particular location in space.

Our perceptual system can handle two judgments quite well when attention is focused on a single object. When attention must be divided across two different objects, however, making two judgments becomes very difficult and performance suffers badly
Illusory Conjunctions
Incorrect combinations of features

Features are registered separately but they are not properly bound together.

When attention is overloaded or the features are not selected together, the isolated features remain unbound and may be incorrectly attached to other features

Supports FIT
Cherry's Selective Attention Studies
Using a technique called dichotic listening (the literal meaning is listening with “two ears”), Cherry played competing speech inputs through headphones to the two ears of his participants.

For example, the right ear might receive “the steamboat chugged into the harbor” while the left ear simultaneously received “the schoolyard was filled with children.”

Cherry instructed participants to ”shadow,” that is, to follow and repeat (attend to) as rapidly as possible one stream of speech input and to ignore the other.

Cherry found that participants had no memory of what was played in the unattended ear.

They did not even notice if the unattended message switched to another language or if the message was played backward.

They did notice whether the sex of the speaker was different or whether the speech became a pure tone.

Unattended inputs are filtered out, and attended signals are admitted on the basis of their physical characteristics. Changes in the physical aspects of an unattended stimulus were noticed; in the absence of such changes, the stimulus was filtered out.

Failure to detect repeated word lists in the unattended ear indicates that the unattended signals were not processed deeply and the participants did not have a representation of the words or their meaning.
Visual Search Task
Visual search is the common task of looking for something in a cluttered visual environment. The item that the observer is searching for is termed the target, while non-target items are termed distractors. Visual search can take place either with or without eye movements, and typically involves an active scan of the visual environment.

Includes feature/ disjunctive search and conjunctive search
Distractors
Non-target items or stimuli that clutter the visual environment of a visual search task; they are supposed to be ignored
Feature/ Disjunctive Search
Type of visual search task where the target differs from the distractors by a single feature
Conjunctive Search
Type of visual search task where the target differs from the distractors by a conjunction of features (multiple features)
Feature Integration Theory (FIT)
The perceptual system is divided into separate maps, each of which registers the presence of a different visual feature: color, edges, shapes.

Each map contains information about the location of the features it represents.

In feature/disjunctive search, a single map, containing all instances of the feature present in the display, is consulted. The target feature you are looking for pops out of this map, and target detection proceeds apace, irrespective of the number of distractors of another shape.

Individual feature processing is done in parallel: the whole display is processed simultaneously, and if the feature is present, we detect it.

In conjunctive search, the joint consultation of multiple maps for multiple features is necessary.

Attention is required to compare the content of the two maps and serves as a kind of glue to bind the unlinked features.

Conjunctive searching requires attention to the integration or combination of the features.

Attention to particular combination of features must be done sequentially to detect presence of a certain combination.

You can search faster for the presence of a feature than for its absence
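
A toy simulation (all constants hypothetical) of the reaction-time signature FIT predicts, as described above: flat search times for a single feature, linearly increasing times for a conjunction:

    # Toy model of FIT's reaction-time predictions (made-up constants).
    BASE_RT_MS = 400     # encoding + response time
    PER_ITEM_MS = 50     # cost of attending one item in serial search

    def disjunctive_rt(set_size):
        # Parallel "pop-out": one feature map is consulted, so RT does
        # not depend on the number of distractors.
        return BASE_RT_MS

    def conjunctive_rt(set_size):
        # Serial, self-terminating search inspects half the items on
        # average before finding the target.
        return BASE_RT_MS + PER_ITEM_MS * set_size / 2

    for n in (4, 8, 16, 32):
        print(n, disjunctive_rt(n), conjunctive_rt(n))
    # Disjunctive search stays flat; conjunctive search grows with set size.
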
Problems

According to FIT, disjunctive search is preattentive and does not engage attention, whereas conjunctive search does involve attention.

FIT predicts that hemispatial neglect patients would be able to perform disjunctive search well, even when the target appears on the neglected side. The findings suggest that this is not true.

Disjunctive search may require attention and the preattentive-attentive distinction between these forms of search may not hold.

Behavioral studies with neurologically unimpaired participants have found that some conjunctions are easier to detect than a purely serial search model predicts
Simultanagnosia
The inability to recognize two things at the same time.

Can be seen in patients with Balint’s syndrome, a neurological disorder that follows bilateral (that is, on both sides of the brain) damage to the parietal-occipital region.

Balint’s patients neglect entire objects, not just one side of an object as in hemispatial neglect. The disorder affects selection of entire objects irrespective of where the objects appear in the display and a whole object (a line drawing) may be neglected even if it occupies the same spatial position as another drawn object.

These patients are able to perceive only one object at a time; it is as if one object captures attention and precludes processing of any others.

The failure to select more than one object can be reduced if the objects are grouped perceptually.
Integrated Competition Theory of Attention
Developed by Desimone and Duncan (1995) and Duncan and colleagues (1997).

Attention is seen as a form of competition among different inputs that can take place between different representations at all stages of processing.

In a simple competition model, the input receiving the greatest proportion of resources would be the one that is most completely analyzed.

The competition between inputs can be biased by the influence of other cognitive systems.

Focusing on visual processing, Desimone and Duncan (1995) argue that attention is “an emergent property of many neural mechanisms working to resolve competition for visual processing and control of behavior”

Attention is an integral part of the perceptual or cognitive process itself. Competition occurs because it is impossible to process everything at once; attention acts as a bias that helps resolve competition between inputs.

The source of the bias can come either from the features in the external stimulus (exogenous) or from the relevance of a stimulus to one’s goals of the moment (endogenous).

The competition that takes place between possible inputs occurs in multiple different brain regions.

The theory holds that many different brain regions are involved in such competition, and because they are connected, the competition is integrated across them. The ultimate winner of the competition—the item that is ultimately attended—is determined by a consensus among all the different regions working together.

Attention is a gating mechanism that biases processing according to a combination of external salience and internal goals. The outcome of the competition is a winner that is selected for further, preferential processing.

Attending to a single object effectively reduces the amount of competition from other stimuli and biases processing toward other demands.

The advantage of this theory is that it underscores the idea that attention is a bias in processing, and that processing occurs through cooperative and competitive interactions among brain areas.
Failures of selection in space or in time can be explained by the idea of competition between stimuli.

In covert attentional cueing, the invalid trials can be thought of as cases in which there is competition between the location indicated by the invalid cue and the location where the target appears; in the valid condition, the location of the cue and the location of the target are one and the same, and hence cooperation rather than competition prevails.

Divided attention can be interpreted as the result of competition between different inputs or different tasks, as opposed to the noncompetitive case, in which the focus is exclusively on a single input or a single task.

The improvement, in the form of automaticity, that comes with greater practice with dual tasks may be thought of as a reduction in the competition between the two tasks.

Moreover, the performance of patients with hemispatial neglect can also be understood within this framework of competition. If the damage to the right side of the brain allows the intact hemisphere to produce a bias away from the left side and toward the right side, that bias increases the competitive strength of right-sided stimuli and reduces that of left-sided stimuli.

The failure to report T2 in the attentional blink task might arise from competition between T1 and T2. Reporting T2 when it is not preceded by T1 is not problematic, as there is no competition. However, the requirement to report T2 when it is preceded by T1 and is very similar to T1 in appearance establishes a competitive environment in which the chances of detecting T2 decrease.

Competition may also explain the failures of selection in time observed with patients with hemispatial neglect. When visual stimuli are presented on both sides, reporting of the stimulus on the neglected side improves depending on the timing of presentation of the two stimuli and their grouping. One might think of these two factors, time and grouping, as biases that can influence the outcome of the competition between stimuli on the right and on the left.

Behavioral experiments can be interpreted in terms of a competition between “stronger” and “weaker” stimuli, with strength defined by a combination of bottom-up and top-down influences.
Broadbent's Theory of Attention
Broadbent's model: Incoming stimuli, briefly held in a sensory register, undergo preattentive analysis by a selective filter on the basis of their physical characteristics. Those stimuli selected pass along a (very) limited capacity channel to a detection device where semantic analysis takes place. Those stimuli not selected ('filtered' out) are not analysed for meaning and do not reach consciousness. This is, therefore, an early selection theory, and an 'all or nothing' view of perception.
Treisman's Theory of Attention
Treisman's model: Incoming stimuli, briefly held in a sensory register, undergo preattentive analysis by an attenuation filter on the basis of crude physical characteristics (the information resulting from this analysis is available to conscious perception and for reporting by the subject, regardless of what happens to the message beyond this point). Those stimuli selected (attended to) pass along a limited capacity channel to a detection device (a pattern recognizer, comprising a number of 'dictionary' units) where semantic analysis takes place. Unattended stimuli are attenuated (the signal strength is lowered) before passing along the limited capacity channel to the detection device, where they are semantically processed if they meet certain criteria. This is, therefore, an early selection theory, and an attenuation model of attention.
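
A schematic Python sketch (my own toy rendering with made-up numbers, not the authors' formal models) contrasting Broadbent's all-or-nothing filter with Treisman's attenuator and its lowered-threshold dictionary units:

    # Toy contrast of the two early-selection models (illustrative only).
    def broadbent_filter(strength, attended):
        # All-or-nothing: unattended input is blocked completely.
        return strength if attended else 0.0

    def treisman_attenuator(strength, attended, attenuation=0.2):
        # Attenuation: unattended input is weakened, not blocked.
        return strength if attended else strength * attenuation

    def detected(strength, threshold):
        # Dictionary units: salient items (e.g., one's own name) have a
        # lowered threshold and can be recognized even when attenuated.
        return strength >= threshold

    weak = treisman_attenuator(1.0, attended=False)     # 0.2
    print(detected(broadbent_filter(1.0, False), 0.1))  # False: blocked entirely
    print(detected(weak, threshold=0.5))                # ordinary word: False
    print(detected(weak, threshold=0.1))                # own name: True
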
Early Attentional Selection
Before the bottleneck, information is processed perceptually; the bottleneck selects important information, which is then processed for semantic content after the bottleneck

The many sensory inputs capable of entering later phases of processing have to be screened to let only the most important information through.

At an early stage of processing, information comes into a very brief sensory store in which physical characteristics of the input are analyzed – Visual modality includes motion, color, shape, and spatial location. Auditory modality includes pitch, loudness and spatial location.

Only a small amount of information, selected on the basis of physical characteristics, passes through for further, semantic processing.

Attentional system contains a limited-capacity channel through which only a certain amount of information could pass.
Late Attentional Selection
Theory of late selection — Before the bottleneck, all information is processed perceptually to determine both physical characteristics and semantic content

Can account for the cocktail party effect (the finding that some information could be detected in the unattended channel even when there was no change in its physical features, especially if the information was salient and important to the participant.)
Inattentional blindness
Denotes the failure to see highly visible objects we may be looking at directly when our attention is elsewhere.

Has auditory and tactile counterparts
Neisser experiment

Observers watched two superimposed ball-passing games, one played by a team dressed in black and one by a team dressed in white, and counted the passes between members of one of the teams; observers failed to notice a woman walking through the basketball court with an open umbrella

Experiment has been replicated with a gorilla thumping its chest, and a moonwalking bear

Observers generally do not see what they are looking at directly when they are attending to something else
Inattentional blindness vs Inattentional amnesia
A gorilla thumping its chest is pretty hard to quickly forget.

Unseen stimuli are capable of priming, which can occur only if there is some memory of the stimulus, even if that memory is inaccessible
Moore and Egeth Studies
Shows evidence that aspects of visual processing take place before attention is allocated.

Shows that under conditions of inattention, basic perceptual processes, such as those responsible for grouping elements in the visual field into objects, are carried out and influence task response even though observers are unable to report seeing the percepts that result from those processes.

Moore and Egeth investigated the Müller-Lyer illusion.

Fins were formed by grouping background dots, i.e. dots forming the fins were closer together than background dots.

They demonstrated that subjects saw the illusion even when, because of inattention, the fins were not consciously perceived
What "Captures" Our Attention
Stimuli able to capture attention when attention is elsewhere are complex and meaningful, suggesting that attention is captured only after the meaning of a stimulus is analyzed

If attention operated before meaning had been analyzed, small changes to a stimulus's meaning would not cause an increase in inattentional blindness (IB)
For meaning to be analyzed before attention is captured, the stimuli we do not intend to see and that do not capture our attention must be fully processed by the brain.

Silverman demonstrated that there can be priming by more than 1 element in a multielement display even when these elements cannot be reported by the subject, suggesting that unattended or unseen elements are indeed processed and stored but says nothing about how many.

One answer to the question of how much of what is not seen is encoded into memory comes from an account of perceptual processing based on the assumption that perception is a limited capacity process and that processing is mandatory up to the point that this capacity is exhausted

The extent to which unattended objects are processed is a function of the difficulty of the perceptual task

High difficulty = only attended stimuli encoded

Low difficulty = Unattended stimuli encoded

Unclear how perceptual load should be estimated


Neuro-imaging Evidence

Are attended and unattended stimuli processed differently by the brain?
In one study, Scholte, Spekreijse, and Lamme found similar neural activity related to the segregation of unattended target stimuli from their backgrounds, an operation that is thought to occur early in the processing of visual input

This activation was found regardless of whether the stimuli were attended and seen or unattended and unseen, although there was increased activation for targets that were attended and seen

In another study, Rees and his colleagues used fMRI to picture brain activity while observers were engaged in a perceptual task and found no difference between the neural processing of meaningful and meaningless lexical stimuli when they were ignored, but when the same stimuli were attended to, a difference between the neural processing of meaningful and meaningless lexical stimuli occurred.

This suggests that unattended stimuli are not processed for meaning

In another study repeating Rees’ procedure, but including the subject’s own name among the ignored stimuli, many subjects saw their names, suggesting that meaning was analyzed

It remains unclear whether attended and unattended stimuli are processed differently by the brain
Do Visual Imagery and Visual Perception Share the Same Brain Areas?
Yes, they share the same brain areas:

Mental images appear to embody spatial layout and topography, as does visual perception.

Recent neuroimaging studies have also provided support for the involvement of spatially organized representations in visual imagery
For example, when subjects form a high-resolution, depictive mental image, primary and secondary visual areas of the occipital lobe (areas 17 and 18, also known as V1 and V2), which are spatially organized, are activated.

Additionally, when subjects perform imagery, larger images activate relatively more anterior parts of the visual areas of the brain than smaller images, a finding consistent with the known mapping of how visual information from the world is mediated by different areas of primary visual cortex.

Moreover, when repetitive transcranial magnetic stimulation is applied and disrupts the normal function of area 17, response times in both perceptual and imagery tasks increase, further supporting the involvement of primary visual areas in mental imagery
Neuro-imaging Data Shows Overlap of Imagery and Perception in Brain
Not only early visual areas but also more anterior cortical areas can be activated by imagined stimuli

For example, when subjects imagine previously seen motion stimuli (such as moving dots or rotating gratings), area MT/MST, which is motion sensitive during perception, is activated

Color perception and imagery also appear to involve some (but not all) overlapping cortical regions, and areas of the brain that are selectively activated during the perception of faces or places are also activated during imagery of the same stimuli

Higher-level areas involved in spatial perception, including a bilateral parietooccipital network, are activated during spatial mental imagery, and areas involved in navigation are activated during mental simulation of previously learned routes.

As is evident, there is considerable overlap in neural mechanisms implicated in imagery and in perception both at lower and at higher levels of the visual processing pathways.

Ganis et al. Data
Activation for the perception task and the imagery task is identical.
If you subtract one from the other, there is no activation
Damage to Visual Cortex Also Disrupts Visual Imagery
Many patients with cortical blindness (i.e., blindness due to damage to primary visual areas of the brain) or with scotomas (blind spots) due to destruction of the occipital lobe have an associated loss of imagery, and many patients with visual agnosia (a deficit in recognizing objects) have parallel imagery deficits.

In some of these latter cases, the imagery and perception deficits are both restricted to a particular domain; for example, there are patients who are unable both to perceive and to image only faces and colors, only facial emotions, only spatial relations, only object shapes and colors, or only living things.

The equivalence between imagery and perception is also noted in patients who, for example, fail both to report and to image information on the left side of space following damage to the right parietal lobe.

There are, however, also reports of patients who have a selective deficit in either imagery or perception.
Selective Damage to Imagery But Not Perception In Some Patients
This segregation of function is consistent with the functional imaging studies showing that roughly two thirds of visual areas of the brain are activated during both imagery and perception. That is, selective deficits in imagery or perception may be explained as arising from damage to the nonoverlapping regions.

Selective deficits are particularly informative and might suggest what constitutes the nonoverlapping regions.

Unfortunately, because the lesions in the neuropsychological patients are rather large, one cannot determine precise anatomical areas for these nonoverlapping regions, but insights into the behaviors selectively associated with imagery or perception have been obtained.

Patients with impaired imagery but intact perception are unable to draw or describe objects from memory, to dream, or to verify propositions based on memory. It has been suggested that in these cases, the process of imagery generation (which does not overlap with perception) may be selectively affected without any adverse consequences for recognition.
Imagery May Depend on Higher, Not Lower Visual Primary Cortical Areas
It has also been suggested that low- or intermediate-level processes may play a greater (but not exclusive) role in perception than they do in imagery.

If a patient has a perceptual deficit because of damage to these low- or intermediate-level processes, the patient will be unable to perceive the display, but imagery might well be spared because it relies less on these very processes
The Generation of Mental Images: Role of Left Hemisphere
In many studies, normal subjects show a left-hemisphere advantage for imagery generation; when asked to image half of an object, subjects are more likely to image the right half, reflecting greater left-hemisphere than right hemisphere participation in imagery generation.

Additionally, right handed subjects show a greater decrement in tapping with their right than left hand while performing a concurrent imagery task, reflecting the interference encountered by the left hemisphere while tapping and imaging simultaneously.

Studies in which information is presented selectively to one visual field (and thereby one hemisphere) have, however, yielded more variable results with normal subjects. Some studies support a left-hemisphere superiority, some support a right-hemisphere superiority, and some find no hemispheric differences at all.

Studies with split-brain patients also reveal a trend toward left hemisphere involvement, but also some variability. Across a set of these rather rare patients, imagery performance is better when the stimulus is presented to the left than to the right hemisphere, although this finding does not hold for every experiment and the results are somewhat variable even within a single subject.

Neuroimaging studies in normal subjects have also provided some support for a left-hemisphere basis for imagery generation.

In sum, there is a slight but not overwhelming preponderance of evidence favoring the left hemisphere as mediating the imagery generation process.

A conservative conclusion from these studies suggests that there may well be some degree of left-hemisphere specialization, but that many individuals have some capability for imagery generation by the right hemisphere.
Another suggestion is that both hemispheres are capable of imagery generation, albeit in different ways
Is Visual Imagery "Running Perception Backwards"?
Yes

In perception, an external stimulus delivered to the eye activates visual areas of the brain, and is mapped onto a long term representation that captures some of the critical and invariant properties of the stimulus

In mental imagery, long-term representations of the visual appearance of an object are used to activate earlier representations in a top-down fashion through the influence of preexisting knowledge.
This bidirectional flow of information is mediated by direct connections between higher-level visual areas (more anterior, dealing with more abstract information) and lower-level visual areas (more posterior, with representations closer to the input)
Imagery vs. Perception
Visual mental imagery is seeing in the absence of appropriate sensory input

Imagery is perception of remembered info

Imagery resembles perception

There is some limited capacity so images are not perfect.
Demand Characteristics
2nd Depictive vs. Propositional Debate

Depictive representation: the data reflected the processing of depictive representations

Propositional representation: focus on possible methodological problems with experiments like experimenter expectancy effects and task demands

No experimenter expectancy effects were found when the experiments suggesting depictive representations were rerun in designs intended to reveal such effects

Task demands: the instructions may cause subjects to mimic what they believe they would do if they were actually performing the task in real life

Experiments suggesting depictive representations were redone without any reference to imagery with similar results
Image Generation
Several mechanisms working together which may have other roles generate images

There are multiple ways an image can be generated

Images of objects including color, texture, and shape

One visual subsystem seems to be used to activate visual memory representations, priming the visual system so that one can more easily encode an expected object or part; this activation alone may be sufficient to form images in tasks that require only the global form of the image

Images of complex objects are built up on the basis of distinct parts, each of which is activated individually

The time needed to generate images increases with each additional part to be included

A second subsystem is used to position individual parts in the image

The brain processes location information separately from shape info

One subsystem activates individual stored perceptual units in both hemispheres, while another juxtaposes parts effectively only in the left hemisphere

The subsystems that arrange parts into images must use a stored representation of how the parts are arranged.

2 ways to do this:

Position on the basis of categorical representations of spatial relations

Arranged on the basis of metric spatial info

Left hemisphere better at categorical representations of spatial relations
Right hemisphere better at metric spatial relations

People can form images of different types

Images can be formed by activating visual memories of global memories or of individual parts and then arranging them, and by positioning and fixating attention selectively
Image Inspection
Imagery shares processing mechanisms with like modality perception

Imagery selectively interferes with like modality perception

Unilateral visual neglect

Explains why it is difficult to reorganize complex patterns in images

Perceptual mechanisms organize input into units and spatial relations among them, and reorganizing these units requires time

Images can be retained only with effort and apparently cannot be retained long enough to reorganize them

Conditions for reinterpreting ambiguous objects:

1. People understand how to reorganize the object
2. People are prevented from verbally encoding the figure while studying it
Image Maintenance
We can retain relatively little info in an image at once

The critical measure is the number of chunks or perceptual units that are present

Effort is require to keep an image in place

The total amount that can be kept in mind depends on the speed with which parts are refreshed and the speed with which they fade
Image Transformation
Time to judge whether differently oriented stimuli are the same increases as the disparity in orientation between members of a pair increases

The more rotation required, the longer it takes to mentally rotate something

At least part of transformation mechanism mimics perceptual processes

May be because purpose of imagery is to mimic what would happen in actual physical situations

Motor areas of brain activate when rotating a mental image

It is possible that one forms moving images by priming the visual system as if expecting to see the results of physically manipulating an object

If so, incremental nature of transformations is partly because our movements must traverse trajectories

Images can also be expanded and shrunk. Time increases linearly with disparity in size

Images can also be folded. Time increases linearly with number of sides to be shifted in image to fold cube

At least one mental transformation process is more effective in the right hemisphere than in the left hemisphere

Left hemisphere does play a role in image transformation
Mental Rotation
Time to judge whether differently oriented stimuli are the same increases as the disparity in orientation between members of a pair increases

The more rotation required, the longer it takes to mentally rotate something
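
A worked sketch (hypothetical intercept and slope; only the linear shape is the point) of this relation:

    # Linear mental-rotation model: RT = intercept + slope * angle.
    INTERCEPT_MS = 500       # encoding + comparison + response
    SLOPE_MS_PER_DEG = 20    # added time per degree of rotation

    def rotation_rt(angle_deg):
        return INTERCEPT_MS + SLOPE_MS_PER_DEG * angle_deg

    for angle in (0, 60, 120, 180):
        print(angle, rotation_rt(angle), "ms")
    # RT rises linearly with the orientation disparity between the pair.
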
Depictive Representations
Internal representation is pictorial, in the sense that it has properties that are similar to visual percepts (pictures).

Displays analog properties.

But they are certainly not exactly like pictures:
Can’t manipulate as well
Influenced by labels

Syntax:

The symbols belong to 2 form classes -

1. Points
2. Empty Space

The points can be arranged so tightly as to produce continuous variations or so sparsely as to be distinct

The rules for combining symbols require only that points be placedin spacial relation to each other.

Semantics:

Depictions resemble the represented object or objects

Each part of the representation must correspond to a visible part of the object or objects

The represented distances among the parts of the representation must correspond to the distances among the corresponding parts of the actual object.

All that is needed for a depiction is a fictional space in which distance can be defined vis a vis the interpretation of the info
Lots of evidence for analog, picture-like representations:

Mostly based on time to do tasks
Mental scanning
Mental rotation
Size judgments (zooming)
Image Scanning
Different Mechanisms? Debate 1

1. Does it take more time to shift attention greater distances across an imaged object? Yes

Depictive Reason: Distance will only affect processing times if a depictive representation is used.

Propositional Reason: People automatically construct propositional descriptions when asked to memorize the appearance of drawings that are characterized by linked hierarchies of propositions. The more links one has to travel through, the longer it takes to respond

2. Both distance and amount of material scanned over affect reaction times. Do times linearly increase with greater distance scanned over even when material scanned over is kept constant? Yes. Time to scan image increased linearly with greater distance scanned across

Depictive Reason: Images rely on depictive representations. The notion of depictive representation leads us to expect image representations to embody at least 2 dimensions.

Propositional Reason: Add distance nodes to the network that convey no information other than that a distance exists between one object and another

3. Size affects scanning time as well: the larger, the more time needed
Propositional Representations
Internal representation is propositional or symbolic description

Does not display analog properties

Syntax:

Symbols belong to form classes corresponding to relations, entities, properties, and logical relations

Rules of symbol combination require that all propositional representations have at least one relation

Specific relations have specific requirements concerning the number and type of signals that must be used

Semantics:

The meaning of individual symbols is arbitrarily assigned which requires existence of a lexicon

A propositional representation is defined to be unambiguous, unlike words or sentences in natural languages. A different propositional symbol is used for each of the senses of an ambiguous word

A propositional representation is abstract -

It can refer to nonpicturable entities such as sentimentality

It can refer to a class of objects not simply individual ones

It is not tied to any specific modality

Either true or false (only used by some theorists)
Island Experiment

Kosslyn, Ball, and Reiser
Experiment: Participants were given a map of an island with 7 objects to memorize. They learned to draw the locations of each object on the map. Each of the 21 pairs of objects was a different distance apart. Participants were to close their eyes, visualize the map, focus on a specific area, and scan to another location. Participants eventually scanned between all possible pairs of objects on the map. The time to scan from one object to the next was measured. Time to scan the image increased linearly with greater distance scanned across

Depictive Reason: Images rely on depictive representations. The notion of depictive representation leads us to expect image representations to embody at least 2 dimensions.

Propositional Reason: Add distance nodes to the network that convey no information other than that a distance exists between one object and another
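
An illustrative least-squares fit in Python (the data points are synthetic, generated from a toy linear rule; they are not Kosslyn, Ball, and Reiser's data) of what "time increased linearly with distance" amounts to:

    # Fit scan time against scan distance and read off slope and intercept.
    distances = [1, 2, 3, 4, 5, 6, 7]             # arbitrary map units
    times = [0.6 + 0.25 * d for d in distances]   # seconds (toy rule)

    n = len(distances)
    mean_d = sum(distances) / n
    mean_t = sum(times) / n
    slope = (sum((d - mean_d) * (t - mean_t) for d, t in zip(distances, times))
             / sum((d - mean_d) ** 2 for d in distances))
    intercept = mean_t - slope * mean_d
    print(slope, intercept)   # recovers 0.25 s per unit and a 0.6 s baseline
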
Behaviorism
Psychology is the “science of behavior.”

Emphasis on what can be directly observed:

Stimuli
Responses
Reinforcements / Rewards

Ignore the mind (unobservable).

Problems:
Can’t account for diversity of human behavior
Limiting science to the observable is a bad idea
Cognitivism
Infer what’s going on inside the box
information processing
Each stage:
1. receives information from the previous stage
2. transforms the information
3. sends information to the next stage
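
A toy Python sketch (my illustration, with three hypothetical stages) of this stage idea:

    # Information-processing pipeline: each stage receives the previous
    # stage's output, transforms it, and sends it onward.
    def sense(stimulus):              # stage 1: register raw input
        return list(stimulus)

    def encode(features):             # stage 2: transform into a code
        return "".join(features).upper()

    def respond(representation):      # stage 3: produce observable output
        return "saw: " + representation

    data = "tree"
    for stage in (sense, encode, respond):
        data = stage(data)
    print(data)                       # saw: TREE
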
Dependent Variables
what you measure/analyze

reaction time
accuracy
brain activity
Independent variables
what you manipulate

number of items to be memorized
amount of alcohol ingested
passage of time
Mental Chronometry
The study of the time course of mental processes.
Simple Reaction Time
Choice Reaction Time
Donder's Subtraction Method
1. Assumption of Pure Insertion
Assumes all stages remain the same when the new one is added
Adding the decision stage may influence another stage (like detection)
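
A worked example (made-up reaction times) of the subtraction logic:

    # Donders' subtraction method.
    simple_rt_ms = 220   # detect stimulus -> respond (no decision stage)
    choice_rt_ms = 350   # detect -> decide which stimulus -> respond

    decision_ms = choice_rt_ms - simple_rt_ms
    print(decision_ms)   # 130 ms attributed to the inserted decision stage
    # Valid only under pure insertion: adding the decision stage must leave
    # the detection and response stages unchanged.
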
Perception
The means by which information acquired from the environment via the sense organs is transformed into experiences of objects, events, sounds, tastes, etc.
Distal Stimulus
Real world stimulus
Proximal Stimulus
Image of real world stimulus on retina
Percept/ Representation
What we experience when we look at a stimulus
Nativist View of Perception
The nativist (a.k.a. rationalist) position--Much of our knowledge is based on innately given characteristics. From this perspective, sensation and perception should be “hard-wired.”
Empiricist View of Perception
The empiricist position--We are born as blank slates (tabula rasa). Thus, we must learn to sense and perceive.
Lack of correspondence
When percept does not correspond to distal stimulus
Paradoxical correspondence
When proximal stimulus does not correspond to distal stimulus, but the percept does.

Our brains are correcting for missing or misleading information.

E.g. Distal stimulus (the world) is 3D; Proximal stimulus (on retina) is 2D; Perceptual experience is 3D
Perceptual constancies
Our perception of an object’s features remains constant even when viewpoint (and proximal stimulus) changes

Perception of size doesn’t change with distance
Perception of shape doesn’t change with viewing angle
Perception of darkness/color doesn’t change with light
Stages in Perception
Distal (A tree) -> Proximal (Image on retina)-> Percept (we perceive).
Direct Perception
“Ask not what’s inside your head but what your head is inside of”

Environment provides all necessary cues

Our brains are pre-wired to pick up cues

Stimulus information is unambiguous

Direct perception claims perception is purely bottom-up
Constructivist theory
Perception uses data from the world and our prior knowledge and expectations.

Sensory information is often ambiguous.

Must rely on knowledge/expectations

Constructivism: Bottom-up and top-down processes
Bottom Up Processing
Processing that is driven by the external stimulus, rather than internal knowledge
Top Down Processing
Processing that is driven by knowledge & expectations
Is depth perception innate?
Visual cliff experiments.
Visual cliff: devised by Eleanor Gibson and Richard Walk to test depth perception
Glass surface, with checkerboard underneath at different heights
Visual illusion of a cliff
Baby can’t fall
Mom stands across the gap

6 month olds avoid “cliff”
Species such as lambs, cats, and rats, which walk on the first day after birth, avoid the cliff.

2-month-olds' heart rate goes down on the cliff side. They notice depth but are not afraid.
Depth Cues
Accommodation
When the eye is relaxed and the interior lens is least rounded, the lens has its maximum focal length for distant viewing. As the tension around the ring of muscle is increased and the supporting fibers are thereby loosened, the interior lens rounds out to its minimum focal length (near vision).
Retinal Disparity (Binocular Depth Cue)
The difference between the visual images that each eye perceives because of the different angles in which each eye views the world.
Convergence (Binocular Depth Cue)
This is a binocular oculomotor cue for distance/depth perception. When two eye balls focus on the same object, they converge. The convergence will stretch the extraocular muscles. Kinesthetic sensations from these extraocular muscles also help in depth/distance perception. The angle of convergence is smaller when the eye is fixating on far away objects. Convergence is effective for distances less than 10 meters.
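
A small geometric sketch (assuming a typical interocular distance of about 6.5 cm) of why convergence is informative only at near range:

    # Convergence angle between the two lines of sight at a given
    # fixation distance.
    import math

    EYE_SEPARATION_M = 0.065   # assumed distance between the eyes

    def convergence_angle_deg(distance_m):
        return math.degrees(2 * math.atan(EYE_SEPARATION_M / 2 / distance_m))

    for d in (0.25, 1.0, 10.0, 100.0):
        print(d, "m ->", round(convergence_angle_deg(d), 3), "deg")
    # About 14.8 deg at 25 cm but under 0.4 deg beyond 10 m, which is why
    # convergence is useful only at short distances.
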
Motion Cues
Motion parallax - When an observer moves, the apparent relative motion of several stationary objects against a background gives hints about their relative distance.

Kinetic depth perception - depth read from dynamically changing object size: objects in motion that grow smaller appear to recede into the distance; objects that grow larger appear to be approaching.
Transduction
Sensory principle I

Senses must convert physical stimulus energy (e.g., chemical molecules) into electrical changes in nerve receptor cells
Neural Coding
The stimulus input must be processed and coded for intensity (i.e., strong smell vs. weak smell, bright vs. dim light, loud vs. soft sound) and qualitative aspects (e.g., red vs. blue, foul vs. pleasant, A flat vs. B sharp).

Typically, much of this coding happens at post-receptor sites.

Color coding begins at level of the receptors.
Interactivity
In Time: Color Adaptation

In Space: Contrast Effects

Top-Down and Bottom-Up Interactions
Rods
operate under low illumination and are achromatic -- night time receptors.

allow us to see in dim light
cannot see fine spatial detail
cannot see different colors
detect motion/peripheral vision
Cones
operate under high illumination. Chromatic. Packed around fovea -- daytime receptors.

allow us to see in bright light
allow us to see fine spatial detail
allow us to see different colors
Threshold
The membrane potential must get above a threshold level for the neuron to fire.

Firing = generating an action potential
All or none
Action potential always has same strength. Either you get all of it (if above threshold) or none of it.
Propagation
Once past threshold, an active process (ion pumping) propagates the action potential down the axon
Refractory Period
Short period after firing before the neuron can fire again. Places an upper limit on the rate of neural firing (see the sketch below).
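A minimal leaky integrate-and-fire sketch tying the last three cards together (threshold, all-or-none firing, refractory period). All constants are illustrative, not physiological values:

def simulate(input_current, steps=50, threshold=1.0, leak=0.9,
             refractory_steps=3):
    potential, refractory, spikes = 0.0, 0, []
    for t in range(steps):
        if refractory > 0:             # still refractory: cannot fire yet
            refractory -= 1
            continue
        potential = potential * leak + input_current
        if potential >= threshold:     # all-or-none: every spike is identical
            spikes.append(t)
            potential = 0.0            # reset after firing
            refractory = refractory_steps
    return spikes

# Stronger input raises the firing rate, but the refractory period caps it:
print(len(simulate(0.3)), len(simulate(0.6)), len(simulate(5.0)))  # 7 10 13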
Retina Structure
Receptive Fields of Ganglion Cells
The receptive field of a particular ganglion cell is the portion of the visual field in which stimulation causes some change in that cell’s firing rate.
Visual Pathways
On-center, off-surround cells
Function:

Edge detection
A change in illumination at the edge of an object produces increased activation at the edge (sketched below).
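A toy illustration of this edge-detection function: a 1-D center-surround receptive field (excitatory center, inhibitory surround; the weights are invented for illustration) responds only where illumination changes:

kernel = [-0.5, 1.0, -0.5]            # inhibitory surround, excitatory center
luminance = [1, 1, 1, 1, 5, 5, 5, 5]  # a step edge from dark to bright

responses = []
for i in range(1, len(luminance) - 1):
    responses.append(sum(w * luminance[i + j - 1] for j, w in enumerate(kernel)))

print(responses)  # [0.0, 0.0, -2.0, 2.0, 0.0, 0.0] -- activity only at the edge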
Grandmother cells
Neurons for recognition of a specific person or thing, in this case, grandmother.
Simple cortical cells
Bar of light
Specific orientation
Specific retinal position
Complex cortical cells
Edges/motion
Lateral Inhibition
Lateral inhibition: activity in one region tends to inhibit responding in adjacent areas.

Takes place at various levels of the visual system.

If a ganglion cell is strongly activated, neighboring cells will be inhibited.

Causes contrast effects
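A minimal numeric sketch of lateral inhibition exaggerating contrast at an edge (the inhibition weight of 0.1 is arbitrary):

def inhibit(inputs, w=0.1):
    # Each cell's response = its own input minus a fraction of its neighbors'.
    out = []
    for i, x in enumerate(inputs):
        left = inputs[i - 1] if i > 0 else x
        right = inputs[i + 1] if i < len(inputs) - 1 else x
        out.append(x - w * (left + right))
    return out

stimulus = [10, 10, 10, 20, 20, 20]  # two uniform patches meeting at an edge
print(inhibit(stimulus))  # [8.0, 8.0, 7.0, 17.0, 16.0, 16.0]
# The cell just left of the edge is inhibited extra (a darker band); the one
# just right is inhibited less (a brighter band): contrast is exaggerated.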
Brightness Contrast Effects
Kind of interaction in space.

The perceived brightness of a stimulus is affected by how bright adjacent areas are.

Could give rise to some visual illusions.

Caused by lateral inhibition
Sensory Adaptation
Kind of interactivity in time

Repeated stimulation of a particular receptor leads to reduced responding.

One example: Stabilized images on the retina fade away.
Afterimages
Kind of interactivity in time

If we view colored stimuli for an extended period of time, we will see an afterimage in a complementary color.

Perception affected by what came before.

Color afterimages:
Every hue (i.e., color) has a complementary hue.
When the response to one hue is fatigued, its opponent rebounds, producing the complementary color.
Opponent Process Theory of Color Vision
Hering proposed that we have two types of color-opponent cells:
red-green opponent cells
blue-yellow opponent cells

Repeated stimulation fatigues one pole of an opponent pair, reducing its response; this is why afterimages appear in the complementary color.
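A schematic sketch of opponent coding, recombining hypothetical L/M/S cone responses into red-green and blue-yellow signals (the weightings are illustrative, not a quantitative model):

def opponent_channels(L, M, S):
    red_green = L - M                # positive = reddish, negative = greenish
    blue_yellow = S - (L + M) / 2    # positive = bluish, negative = yellowish
    return red_green, blue_yellow

print(opponent_channels(0.9, 0.3, 0.1))  # reddish input: red_green > 0
print(opponent_channels(0.3, 0.9, 0.1))  # greenish input: red_green < 0
print(opponent_channels(0.2, 0.2, 0.9))  # bluish input: blue_yellow > 0
# Fatiguing one pole of a pair biases the signal toward its opponent,
# which is the opponent-process account of complementary afterimages.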
Pattern Recognition
Pattern recognition is the process of matching organized sensory input to stored representations in memory.

Not just vision – sound patterns – tastes, etc.
Pandemonium Model
Selfridge’s hierarchical feature model: image demons record the input; feature demons detect simple features; cognitive demons “shout” in proportion to how many of their letter’s features are present; a decision demon picks the loudest (see the sketch below).
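A toy version of the model just described: feature demons report what is present, each cognitive demon "shouts" in proportion to how many of its features match, and the decision demon picks the loudest (the letter feature sets below are invented for illustration):

LETTER_FEATURES = {
    "A": {"horizontal bar", "left diagonal", "right diagonal"},
    "H": {"horizontal bar", "left vertical", "right vertical"},
    "T": {"horizontal bar", "center vertical"},
}

def decision_demon(detected):
    # Each cognitive demon's "shout" = number of its features detected.
    shouts = {letter: len(feats & detected)
              for letter, feats in LETTER_FEATURES.items()}
    return max(shouts, key=shouts.get), shouts

print(decision_demon({"horizontal bar", "center vertical"}))
# ('T', {'A': 1, 'H': 1, 'T': 2})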
Palmer study of novel objects
We use gestalt principles for novel stimuli

When shown separate parts of the picture and asked whether they were present, participants easily missed parts that did not accord with gestalt principles (e.g., they might report not having seen a configuration of lines that is not easily relatable or continued within the picture).
Interactive Activation Model
McClelland & Rumelhart’s model of letter/word recognition: feature, letter, and word units exchange excitatory and inhibitory activation both bottom-up and top-down, so word-level knowledge can sharpen letter perception.
Repetition Priming
Repetition priming: words that you have just seen are easier/faster to recognize.

This might work if it takes time for activation to DECAY: if you’ve seen a word recently, its activation may still be somewhat higher than that of a word you haven’t seen for a while, so it needs less input to become active (see the sketch below).
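A minimal sketch of that decay account (threshold, decay rate, and input strength are arbitrary):

THRESHOLD = 1.0
DECAY = 0.8  # residual activation kept per time step

def steps_to_recognize(start_activation, input_per_step=0.3):
    # Count time steps of input needed to push a word unit over threshold.
    a, steps = start_activation, 0
    while a < THRESHOLD:
        a = a * DECAY + input_per_step
        steps += 1
    return steps

print(steps_to_recognize(0.0))  # unseen word: 5 steps
print(steps_to_recognize(0.9))  # word seen moments ago: 1 step (primed)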
Phonemic Restoration Effect
Top-down effects in other senses.

Top down effect in auditory/language processing.

Phonemic restoration effect.

Replace part of a word (one phoneme) in a sentence with a cough.

“The state governors met with their respective legislatures convening in the capital city.”
Tip-of-the-Tongue States (TOT)
Sometimes have difficulty raising items from preconscious to conscious

TOT phenomena seem to occur in every language. People even use similar expressions to describe it.

Paradigms used to generate TOT states:
Show pictures of (semi) famous people (such as actors or politicians) and have participants name them.
Ask general knowledge questions to generate TOTs.
Give people dictionary definitions of obscure words and ask them what the word is.
Marcel's Priming Study
Facilitative Priming
Target stimuli (e.g., BUTTER) are processed faster if preceded by a related word (e.g., BREAD)

Negative Priming Effect
A target stimulus (e.g., PINE) is processed more slowly if preceded by a word related to the target’s alternate meaning (e.g., PALM, in its hand-related sense)
Blindsight
Blindsight is another way in which stimuli we are not aware of can affect our behavior.

A person cannot consciously see a certain portion of their visual field but, in some instances, still behaves as if they can see it.

Being aware of perceiving something is distinguishable from perceiving something.
Controlled vs. Automatic Processing
Automatic processing: Requires no conscious control

Controlled processing: Requires conscious control
Explanations for Automatization
Integrated components theory (Anderson):
Practice leads to integration; less and less attention is needed

Instance theory (Logan):
Specific answers are retrieved from memory, skipping the procedure; thus less attention is needed
Stroop Task
Interference effects between two tasks, one relatively automated and one that’s less automated.

We have difficulty selectively attending to a less automated task that competes with a more automated task - The more automated task wins.

Reading words vs. naming colors.
Visuospatial neglect
Neglect Theory 1: Disengage Deficit

Posner:
Left neglect = difficulty disengaging attention from the right side of space. Remember, attention is limited and selective: if you can’t disengage from the right side, you can’t engage the left side.

Neglect Theory 2: Unbalanced Competition

Hemispheres inhibit each other
Processing stimuli on one side inhibits processing of stimuli on the other side.

Damage to one hemisphere causes other hemisphere to dominate. Stimuli on right side of space get processed by left hemisphere & stimuli on left get processed by right. If right hemisphere is damaged, then it can’t compete and stimuli on left side get neglected

Treatment: finding ways to bring the patient's attention toward the left, done incrementally by going just a few degrees past midline and progressing from there.
Sensory Synesthesia
People have unusual, and usually involuntary, associations between different sensory modalities or representations.

People who report such experiences are known as synesthetes.

Different types of synesthesia depending on the combination of senses/representations.

One common type is color-grapheme synesthesia, in which letters or numbers are perceived as inherently colored.

In another type, numbers, days of the week and months of the year evoke different personalities.
Visual Memory Capacity
We can remember much more visual information than non-visual information.

Roger Shepard (1967)

Showed subjects pictures of 612 objects.

Then asked to judge whether they had seen those objects or not.

Shepard (1967): 612 pictures
Standing et al. (1970): 2,560 pictures
Standing (1973): 10,000 pictures
Cohen, Horowitz, and Wolfe (2009)
Visual Memory beats Auditory

Participants listened to just 64 sound clips.
Complex auditory scenes (people talking in pool hall).
Isolated sounds (a dog barking, birds chirping).
Music.

Tested on 64 (32 new/32 old) sound clips.
Auditory memory was worse than visual.
Only 78% correct vs. 98% in Shepard’s study (612 images).
When is Visual Memory Not Good?
For unimportant and unattended details
When stimuli lack meaning
When foils are similar (like the pennies)

Inattentional Blindness
Change Blindness
Good Recognition Involves:
Attention to details
Meaningfulness and relevancy of details
Distinctive alternatives
Richer Code
We have great memory for visual information because the visual code is superior: more vivid and detailed.

But in reality ...

memory for photos = memory for line drawings = memory for embellished line drawings

All 3 better than memory for verbal description
Dual Code Hypothesis
Paivio (1971)

Dual Code Hypothesis
When you see a visual object, you automatically create a verbal code in addition to visual code, and visual memory is formed based on both codes.

The reverse is not true; we use dual code only for visual things. Thus, visual memory can be exceptionally good.

Visual memory is good when BOTH codes are readily available.
Concrete words: easy to create verbal code

Visual memory is bad when either code is not readily available.
Abstract words: no visual code available
Similar foils (discriminating similar images): visual codes are not distinctive from one another
Unimportant / unattended details: not much verbal code available
Stimuli lacking meaning: hard to create verbal code
Evidence

Concrete vs. abstract words

Concrete words: apple, car, elephant, church; Can be coded both verbally and non-verbally
Abstract words: deed, virtue, thought, peace; Can be coded verbally but not non-verbally
Concrete words are remembered better

Brooks's Study (1968)
Brooks's Study (1968)
One group saw a block diagram of a letter
Memorized it
Were asked to mentally travel around the letter and indicate, for each corner, whether it was at the extreme top or bottom

Second group saw a sentence
Memorized it
Were asked to classify each word as a noun or not ("yes"/"no")
A verbal task

Participants were then asked to respond in one of two ways
Say “Yes” or “No”
Or Point to the answer “Yes or No”

For the image task, RT was slower when pointing.
For the verbal (symbolic) task, RT was slower for the spoken response.
Different pattern = different processing for different codes

Visual Codes are Processed Differently than Symbolic Codes:
Sequence matters more for words, not so much for unrelated images
Thus, each type of code is affected by different manipulations
Visual information interferes with spatial information
Verbal labels interfere with spoken words
Cognitive Maps
Cognitive Maps are internal representations of our physical environment centered on spatial relationships.

Tolman:
Maze learning in rats.
Run through a maze to get a reward.
Do rats just learn specific Stimulus-Response links (e.g., turn right here, turn left there) or are they forming a cognitive map?
Some internal mental representation of the space or terrain.
Tolman trained rats to enter circular area and then go to the arm directly across from the entrance.

Hintzman: Direction Judgement
Accurate pointing to cities. Able to keep track of where things are when orientation changes.

Jonides: Distance estimates

Weaknesses
We use heuristics: quick and dirty “rule of thumb”
Often works, but not always.
Heuristics often help us remember things
Sometimes they impede our memory
Heuristics
Quick and dirty “rule of thumb”

Often works, but not always.
Heuristics often help us remember things
Sometimes they impede our memory

Symmetry
Right Angle Bias (rectilinearization)
Rotational bias
Alignment
Relative Position
Subjective Clusters - Conceptual knowledge (semantic memory) affects imagined representations of distances. Similar things judged to be closer than they are.
Density of landmarks - The number of landmarks you know/remember increases distance estimates.
Wollen, Weber, & Lowry study of images of word pairs
Interaction Helps

Bizarreness Does not
Segal and Fusella
Asked participants to imagine

Visual image (e.g., flower, car)
Or
Auditory image (e.g., phone ringing)

While imagining, they were told to indicate when they heard or saw something

Set of green lines
Auditory tone

Interference/ Facilitation of shared resources between perception and imagery:

Auditory imagery + auditory signal - interfere; more false alarms
Visual imagery + visual signal - interfere; more false alarms
Auditory imagery + visual signal - facilitate; fewer false alarms
Visual imagery + auditory signal - facilitate; fewer false alarms
Farah
Subjects were told to imagine a letter H or T.
They were then shown a faint H, a faint T, or a blank screen and judged whether a letter was on the screen.
Detection was more accurate when the letter presented matched the imagined one, suggesting imagery can prime perception.
Evidence that Images Are Not Like Pictures
Part-Whole example

Reed asked participants to imagine a 6-point Star of David
Two methods by which people imagine the star
Think of it as 2 overlapping triangles
Think of it as 6 small triangles on edges with hexagon in center
People faster to “imagine” if they reported thinking of it as 2 triangles compared to 6
Suggests possibly a more propositional representation being used to build an “image”
Reed found that fewer than 50% of participants could “see” the parallelogram in their images
It is much easier to find the parallelogram in an actual picture than in your imagery

Image Rotation

Slezak found that when people drew the first picture from memory and then physically rotated the drawing, they typically had enough information to draw and recognize the objects
But they could not use imagery alone to rotate the picture and see the rotated objects as a cat/rabbit/squirrel

Color Mixing (Pylyshyn)

Imagine two blobs of color moving closer together
In your mind’s eye, what is the color where they overlap?
If light, it should be white
If pigment, it should be green
If filters, it should be black.
This is governed by explicit propositional knowledge, not your ability to imagine
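A small arithmetic illustration of why the answer depends on the medium, using RGB triples (additive mixing models lights, multiplication models stacked filters; real pigment mixing depends on spectral reflectance and is not captured by simple RGB arithmetic):

blue = (0.0, 0.0, 1.0)
yellow = (1.0, 1.0, 0.0)

additive = tuple(min(1.0, b + y) for b, y in zip(blue, yellow))  # lights
filters = tuple(b * y for b, y in zip(blue, yellow))             # stacked filters

print(additive)  # (1.0, 1.0, 1.0): blue + yellow light -> white
print(filters)   # (0.0, 0.0, 0.0): blue filter over yellow filter -> black
# Pigments reflect overlapping mid-spectrum (green) bands, so mixing them
# leaves green -- knowledge you apply propositionally, not by imaging.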

Carmichael, Hogan, & Walter (1932)

Participants were shown simple figures with one of two verbal labels
Carmichael, Hogan, & Walter asked people to view pictures and draw them from memory.
However, different people were given different labels for the same pictures
Images drawn from memory were influenced by label.
Individual Differences in Imagery
Do people differ in how well they can form and manipulate images? That is, are your images better, more detailed, have more information than, my images?

Seem to be multiple kinds of abilities
How vivid (or detailed or exact) are your images

How much stuff can you keep track of and manipulate

Individual difference in vividness is classically measured via questionnaires (Marks, 1973)

The better the image, the greater the visual cortex involvement. (On Marks’s questionnaire, a lower vividness score means a more vivid image.)

Proficiency at classic “spatial ability tests”
is not related to self-reported quality/vividness of images

Capacity (how many objects you can remember) and resolution go hand in hand

People who have better memory for exact locations and shades of color also remember more objects.
Lesions
Blindsight - visual pathway impaired; occipital lobe damaged

Visuospatial neglect - damage to right parietal lobe
Ipsilateral
One side affects the same side
Contralateral
One side affects the opposite side