278 Cards in this Set
Keith Oatley
|
emotions are judgments providing appraisal, focus, and readiness for action. Analytical thinking takes too long; emotions point attention to what is important. An important cognitive function.
|
|
what are emotions? (2 theories)
|
judgments and bodily reactions
|
|
emotions as judgments
|
Anger at someone who cuts you off is a judgment about how they have treated you and what that means to you (your safety, getting goals met, etc.). Oatley: goal accomplishment. Emotions provide a summary appraisal of the problem-solving situation that contributes two things to subsequent thinking: focus and readiness for action. Empathy provides analogical (analogy-based) understanding ("I feel your pain").
|
|
emotions as bodily reactions
|
William James (1884), then Damasio: the body sends signals to the brain called somatic markers.
|
|
Morris
|
2002: neural functioning shows that emotions depend on interaction between bodily signals and cognitive appraisals (two-way; emotions as judgments and as bodily reactions combined).
|
|
affect
|
(pronounced with emphasis on the first syllable) emotion, mood, and sometimes motivation
|
|
Eliasmith
|
Approaching cognitive science from the strengths of symbolicism, connectionism, and dynamicism. Oversight of dynamical systems theory is due to an over-reliance on the "mind as computer" metaphor.
|
|
Descartes
|
Cogito ergo sum ("I think, therefore I am")
|
|
Descartes Error
|
Emotion and reason are not separate but dependent upon one another.
|
|
Plato's view on emotions
|
distraction or impediment to effective thought
|
|
emerging view in Cog Sci on emotions
|
emotions are an inherent part of rational decision making
|
|
nature of emotions
|
goal satisfaction; emotions provide a summary appraisal (judgment) of your problem-solving situation
|
|
understanding emotional states is required to understand
|
motivation
|
|
empathy
|
metaphor: mapping someone else's emotions onto yourself
|
|
somatic markers
|
emotions stored as memories that guide decision making? "Body signals to the brain" (Jose)
|
|
emotions vs moods
|
emotions: responses to a particular situation; moods: longer lasting and not directed at a particular situation
|
|
what do emotions add to connectionist networks?
|
an efficient way to guide processing
|
|
Fazio
|
People are faster to answer when the prime concept fits emotionally with the evaluative word, e.g. "cockroach" + "disgusting" = lower RTs to "bad" than "chocolate" + "disgusting". Emotional evaluations are closely tied with the representation of concepts and objects, propositions, rules, analogs, and images.
|
|
how are emotions represented in the brain
|
An emotion is a pattern of activation in a population of neurons with connections to both inferential and sensory brain areas
|
|
ITERA
|
(Intuitive Thinking in Environmental Risk Appraisal) Network of determinants of responsibility such as agency, controllability of cause, motive of agent, and knowledge of negative consequences. Given inputs, the network predicts emotional output: sad if no one is at fault, anger if someone is at fault, shame if the self is at fault. Uses local units that are not like real neurons; neglects the division of the brain into functional areas.
|
|
gray matter
|
neuron cell bodies; regions of the brain involved in muscle control, sensory perception, memory, emotions, speech...
|
|
white matter
|
neuronal tissue containing axons. Structures at the core of the brain such as the thalamus and hypothalamus. Involved in: the relay of sensory information from the rest of the body to the cortex; the regulation of unconscious functions such as body temperature, heart rate, and blood pressure; the expression of emotions, the release of hormones from the pituitary gland, and the regulation of food and water intake.
|
|
dorsal
|
top
|
|
anterior
|
front
|
|
rostral
|
front
|
|
ventral
|
bottom, underside
|
|
caudal
|
posterior, or back side
|
|
lateral
|
side
|
|
medial
|
middle
|
|
midsagittal
|
cut lengthwise from top to bottom
|
|
coronal
|
cut widthwise, top to bottom
|
|
cerebral cortex
|
six layers, 3-5 mm depth
|
|
levels of organization in the cns
|
1 m CNS, 10 cm systems, 1 cm maps, 1 mm networks, 100 μm neurons, 1 Å molecules
|
|
levels of analysis in neuroscience
|
1. molecular, 2. cellular, 3. systems, 4. behavioral, 5. cognitive
|
|
molecular neuroscience
|
Study of the brain at the most elementary level (messengers that allow neurons to communicate with one another, etc)
|
|
cellular neuroscience
|
Focus on how all the molecules work together to give the neuron its special properties. Questions: how many neurons are there, what kinds, how do they work, etc.
|
|
system neuroscience
|
focus on a circuit level
|
|
behavioral neuroscience
|
How do neural systems work together to produce integrated behaviors? What neural systems account for gender-specific behaviors? Where in the brain do dreams come from?
|
|
cognitive neuroscience
|
Focus on the greatest challenge of neuroscience, i.e. how the activity of the brain creates the mind: neural mechanisms of self-awareness, mental imagery, language...
|
|
Neurotransmitters
|
molecules that enable one neuron to influence another
|
|
how does caffeine work
|
Adenosine inhibits firing rate, causing drowsiness; caffeine blocks adenosine, increasing firing rate (also dopamine).
|
|
how does alcohol affect you
|
Glutamate excites neurons; alcohol inhibits glutamate, disrupting mental functioning (also dopamine).
|
|
Parkinson's Disease
|
Movement disorder that impairs motor skills and speech. Cause: lack of dopamine. Symptoms: muscle rigidity, tremor, bradykinesia (slowing of physical movement), akinesia (loss of physical movement). But also cognitive disturbances: slowed reaction time, dementia (hallucinations, delusions, paranoia), short-term memory loss (procedural memory impaired).
|
|
local vs global computation of neurotransmitters
|
local: excitatory or inhibitory effects at the neural level; global: system-wide impact, like hormones. Interactions between the neurotransmitter control of hormone release and the hormonal regulation of neurotransmitter release.
|
|
limitation in connectionist model
|
firing of neurons is dependent upon more than just the firing of other neurons; hormones also have an impact.
|
|
EEG
|
electrodes attached to scalp, good temporal resolution, low spatial resolution
|
|
single cell recording
|
intracellular patch clamp; the cell body is ~20 micrometers
|
|
firing rate
|
number of spikes over a period of time
|
|
spike train
|
pattern of spikes over a period of time
|
|
Rate coding
|
averaging over time (e.g. every 100 ms)
|
|
Temporal coding
|
no averaging over time
|
|
Population coding
|
averaging over space
|
|
Synchrony coding
|
patterns across space
|
|
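The firing-rate and coding cards above can be made concrete with a short sketch; the spike times, 100 ms bin size, and function names here are illustrative assumptions, not a standard API.

```python
# Sketch of firing rate, rate coding, and population coding on made-up
# spike data. Spike times are in milliseconds; all numbers are illustrative.

def firing_rate(spike_times_ms, duration_ms):
    """Number of spikes over a period of time, in spikes/second."""
    return len(spike_times_ms) / (duration_ms / 1000.0)

def rate_code(spike_times_ms, duration_ms, bin_ms=100):
    """Rate coding: average spike counts over time bins (e.g. every 100 ms)."""
    n_bins = duration_ms // bin_ms
    counts = [0] * n_bins
    for t in spike_times_ms:
        counts[int(t // bin_ms)] += 1
    return counts

def population_code(spike_trains, duration_ms):
    """Population coding: average firing rate over space (many neurons)."""
    rates = [firing_rate(train, duration_ms) for train in spike_trains]
    return sum(rates) / len(rates)

train = [5, 40, 90, 120, 180, 260]   # one neuron's spike train over 300 ms
print(firing_rate(train, 300))        # 20.0 spikes/s
print(rate_code(train, 300))          # [3, 2, 1]
print(population_code([train, [10, 150]], 300))
```

Temporal and synchrony coding would instead keep the raw spike times and compare patterns across trains rather than averaging them away.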
Outer ear
|
pinna, auditory canal
|
|
Inner ear
|
cochlea
|
|
Middle ear
|
tympanic membrane, ossicles (bones)
|
|
cognitive science
|
the interdisciplinary study of mind and intelligence, embracing philosophy, psychology, artificial intelligence, neuroscience, linguistics, and anthropology.
|
|
what are the goals of cognitive science
|
Understanding perceiving, thinking, remembering, language, learning, sensing and acting, and other mental phenomena.
|
|
what are the questions of cognitive science
|
e.g.: What's the difference between the "mind" and the "brain"? How can the mind be studied systematically? Is it possible for machines to think, learn, or reason?
|
|
examples of research in cog sci
|
Observing children, programming computers to do complex problem solving, studying the principles of neural circuitry in the brain, etc
|
|
aim of cog sci
|
describing and explaining everyday human tasks, such as problem solving, decision making, and learning.
|
|
mental representations
|
rules: "if I want to pass the class I need a C"; concepts: "bird" for an easy class, "geek" for an engineering student...; images: face recognition, navigation landmarks, etc.; analogies: allow us to deal with new situations by adapting a similar familiar situation.
|
|
mental procedures
|
operate on mental representations to produce thought and action
|
|
methods of cog sci
|
Experimental psychology: human subjects. Artificial intelligence: computational models. Linguistics: grammar identification (Chomsky). Neuroscience: empirical observations + (computational) theory. Anthropology: ethnography, i.e. living and interacting with members of a culture. Philosophy: the power of thought!
|
|
metaphors
|
the use of language to understand and experience one kind of thing in terms of another.
|
|
dualism
|
philosophical view that the mind consists of two separate substances, soul and body.
|
|
Cog Sci history 1
|
Ancient Greek philosophers used metaphor to understand the mind. Descartes: mind and body separate (dualism), and knowledge gained by thinking alone (rationalism). Empiricism (Locke, Hume, Aristotle): knowledge gained by experience. Kant combined rationalism and empiricism (knowledge comes from both thought and experience). Wundt: experimental psychology to study the operation of the mind. Later, behaviorism took over: look only at observable stimuli and responses; denies the mind.
|
|
Cog Sci history 2
|
1956: Miller, memory limited to 7 items. Minsky, McCarthy, Newell, and Simon: early AI. Chomsky: mental grammars. The name "cognitive science" coined by Longuet-Higgins. Lighthill report: AI winter. Connectionist theories of mental representation and processing.
|
|
AI
|
Artificial Intelligence: "making a machine behave in ways that would be called intelligent if a human were so behaving" (McCarthy)
|
|
intersection vs. union
|
disciplines working together to validate results, but not merging into one.
|
|
study of the brain in cog sci
|
the understanding of neurobiological processes and phenomena
|
|
study of behavior in cog sci
|
the experimental methods and findings from the study of psychology, language, and the sociocultural environment
|
|
study of computation in cog sci
|
the powers and limits of various representations, coupled with studies of computational mechanisms
|
|
problem of cog sci being interdisciplinary
|
Breadth vs. depth; importance of having a strong foundation in a core discipline.
|
|
cog sci central hypothesis
|
Thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures (Thagard).
|
|
cognition
|
(from Latin cognoscere, "to know")
|
|
mind
|
refers to the collective aspects of intellect and consciousness which are manifest in some combination of thought, perception, emotion, will, and imagination. My definition: ethereal entity that accounts for all phenomena that we cannot empirically explain (yet!).
|
|
brain
|
control center of the central nervous system. Extremely complex structure, with ~100 billion neurons, each of which is connected to ~10,000 others.
|
|
computation
|
physical process with states that represent states of another system, and with transitions between states that amount to operations on the representations: information processing.
|
|
Von-Neumann
|
Von Neumann architecture: physical implementation of the Turing machine; input, output, CPU, memory.
|
|
representation in a cog sci context
|
is about the way people store and process information, i.e. knowledge representation. A structure or activity that stands for something.
|
|
representation in a cognitive psychology context
|
internal symbols that describe external reality, e.g. symbolic AI.
|
|
CRUM hypothesis
|
thinking is performed by computations operating on representations
|
|
analogy
|
mental process that makes connections between relations in two sets of objects.
|
|
CRUM analogy
|
–Program • data structures + algorithms = running programs –Mind • mental representations + computational procedures= thinking
|
|
cognitive theory
|
Set of representational structures and set of processes that operate on these structures.
|
|
computational model
|
Interprets structures and processes by analogy with computer programs
|
|
3 main stages in process (computer models in cog sci)
|
discovery, modification, evaluation
|
|
criteria for evaluating approaches to mental representations
|
(1) Representational power. (2) Computational power: (a) problem solving (planning, decision, explanation), (b) learning, (c) language. (3) Psychological plausibility. (4) Neurological plausibility. (5) Practical applicability: (a) education, (b) design, (c) intelligent systems, (d) mental illness.
|
|
Echolocation
|
Ability to localize targets based on the acoustical information contained in the reflections of emitted sound pulses. Bat biosonar is the most accurate airborne sonar system known.
|
|
spatial and temporal cues: range
|
The delay of the echo encodes the range (distance) of the bat to the reflector.
|
|
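The range card above is a one-line computation: sound travels out and back, so distance is half the round trip. A minimal sketch, assuming sound travels at about 343 m/s in air; the delay value is illustrative.

```python
# Range from echo delay: distance = speed_of_sound * delay / 2,
# because the emitted pulse travels to the reflector and back.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def target_range(echo_delay_s):
    """Distance (m) to the reflector encoded by the echo delay."""
    return SPEED_OF_SOUND * echo_delay_s / 2.0

print(target_range(0.01))  # a 10 ms delay puts the target ~1.7 m away
```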
spatial and temporal cues: azimuth angle
|
Encoded by differences in the arrival time, phase and amplitude between the two ears
|
|
spatial and temporal cues: elevation angle
|
Encoded by interference patterns of sound waves reflected within the structure of the pinna. Target elevation estimation relies on pinnae morphology.
|
|
amplitude and frequency cues: shape
|
The amplitude of the echo can provide information about the shape and size of the reflector.
|
|
amplitude and frequency cues: frequency patterns
|
The wing movements of insects modulate the frequency of the emitted call producing energy patterns along the frequency spectrum which can be used for detecting specific fluttering prey.
|
|
motion cues: relative speed
|
Moving targets such as flying insects produce a Doppler-shift in the echo’s carrier frequency which provides the relative velocity between the bat and the prey (target).
|
|
logic
|
(from Greek logos, "the word") Formal science that investigates the structure of statements and arguments through inference.
|
|
inference
|
process of deriving a conclusion based on existing knowledge
|
|
formal logic
|
is the study of inference with purely formal content, where that content is made explicit. The first rules of formal logic were Aristotle's syllogisms.
|
|
syllogisms
|
Logical argument in which one proposition (the conclusion) is inferred from two others (the premises). Premises: all rabbits like carrots; Peter is a rabbit. Conclusion: Peter likes carrots.
|
|
syllogisms: deductive inference
|
The conclusion follows from the premises: all rabbits like carrots; Peter is a rabbit; therefore Peter likes carrots.
|
|
syllogisms: inductive inference
|
The premises may predict a high probability of the conclusion, but do not ensure that the conclusion is true (i.e. they introduce uncertainty): teenagers are given many speeding tickets; therefore all teenagers speed.
|
|
FOL
|
First order logic: FOL is a system of deduction extending propositional logic by the ability to express relations between individuals (e.g. people, numbers, and "things") more generally. Good at true/false but can't handle uncertainty or time.
|
|
&
|
FOL symbol for "and"
|
|
v
|
FOL symbol for "or"
|
|
→
|
FOL symbol for "if, then"
|
|
if
|
FOL antecedent
|
|
then
|
FOL consequent
|
|
~
|
FOL symbol for "not" (negation)
|
|
modus ponens
|
if p, then q; p; therefore q.
|
|
modus tollens
|
if p, then q; not q; therefore not p.
|
|
denial of the antecedent (inverse error)
|
(invalid) If p, then q; not p; therefore not q. If it rains, the sidewalk will be wet. It didn't rain, so the sidewalk must be dry. Actually, the sidewalk could be wet for another reason.
|
|
affirming the consequent (converse error)
|
(invalid) If p, then q; q; therefore p. If it rains, the sidewalk will be wet. The sidewalk is wet, so it must have rained. But the sidewalk could be wet for another reason.
|
|
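The four argument forms above (modus ponens, modus tollens, and the two fallacies) can be checked mechanically with a truth table. A minimal sketch; the helper names are mine, not standard notation.

```python
from itertools import product

def valid(argument):
    """An argument form is valid iff no truth assignment makes
    all premises true while the conclusion is false."""
    for p, q in product([True, False], repeat=2):
        premises, conclusion = argument(p, q)
        if all(premises) and not conclusion:
            return False
    return True

implies = lambda a, b: (not a) or b  # material "if a then b"

modus_ponens      = lambda p, q: ([implies(p, q), p], q)
modus_tollens     = lambda p, q: ([implies(p, q), not q], not p)
deny_antecedent   = lambda p, q: ([implies(p, q), not p], not q)  # inverse error
affirm_consequent = lambda p, q: ([implies(p, q), q], p)          # converse error

print(valid(modus_ponens))       # True
print(valid(modus_tollens))      # True
print(valid(deny_antecedent))    # False: p=False, q=True is a counterexample
print(valid(affirm_consequent))  # False: p=False, q=True is a counterexample
```

The counterexample rows are exactly the "sidewalk wet for another reason" cases on the cards.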
representational power of analogies: analogy therapy
|
find out what analogies students are already using and correct if necessary
|
|
representational power of analogies: use multiple analogies
|
when one analogy breaks down, another can be added to provide understanding of what has been incompletely presented.
|
|
representational power of analogies: use deep analogies
|
instead of superficial feature comparison, the most powerful analogies use systematic causal relations that provide clear relevance to the students goals.
|
|
representational power of analogies: describe mismatches
|
Any analogy or metaphor is incomplete or misleading in some respects. Some instructors feel they are too misleading to be useful, but the solution is to look at where they break down.
|
|
representational power of analogies: clear mappings
|
with a good analogy, the students should be able to figure out for themselves the mappings between source and target, but some guidance may be helpful.
|
|
representational power of analogies: familiar sources
|
there is no point if the source and target are both unfamiliar.
|
|
analogies: source
|
familiar component of analogy
|
|
analogies: target
|
new, unfamiliar component of analogy
|
|
Finke’s unifying principles of mental imagery
|
Implicit encoding: imagery is useful for retrieving information about objects that was not explicitly encoded. Perceptual equivalence: similar mechanisms in the visual system are activated when objects or events are imagined as when they are perceived. Spatial equivalence: spatial relations between objects are preserved. Structural equivalence: image structure corresponds to the perceived object. Transformational equivalence: imagined and physical transformations exhibit similar dynamic characteristics and laws of motion.
|
|
Glasgow & Papadias' scheme on computational imagery
|
3 representations for 3 kinds of processing. Deep representation: stored in long-term memory; hierarchical, descriptive representation with all relevant information about the image. Spatial representation: symbolic representation of image components; stored in short-term memory. Visual representation: shape, relative distance, relative size...; stored in short-term memory.
|
|
Long-term memory
|
Memory stored as meaning. Lasts from 30 s to a lifetime.
|
|
Working memory
|
Structures and processes used for temporarily storing and manipulating information. • Lasts <= 30 seconds
|
|
What is a symbolic array
|
Multidimensional, hierarchical representation of images in which spatial relations are made explicit. A glass of water as |air|glass|water|glass|air|.
|
|
spatial representations vs. propositional knowledge
|
spatial representations are more efficient (array data structures): name all the countries north of France either by logical calculation (P, Q, etc.) or by looking at a map.
|
|
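The two cards above can be combined into one hedged sketch of a symbolic array: a grid whose cells hold symbols, so "north of" becomes a cheap index comparison instead of a chain of logical deductions. The mini-map below is a toy layout, not geographically faithful.

```python
# A 2-D symbolic array: rows are ordered north-to-south, so spatial
# relations are implicit in the indices. Toy layout, illustrative only.
europe = [
    ["uk",     "netherlands"],  # northern row
    ["france", "germany"],
    ["spain",  "italy"],        # southern row
]

def north_of(grid, country):
    """Return every symbol stored in a row above the given country's row."""
    row = next(r for r, cells in enumerate(grid) if country in cells)
    return [c for cells in grid[:row] for c in cells]

print(north_of(europe, "france"))  # ['uk', 'netherlands']

# The 1-D case from the cards: a glass of water as a symbolic array.
glass = ["air", "glass", "water", "glass", "air"]
```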
rationalism in terms of concepts
|
(Plato, Descartes, Leibniz...) Knowledge innately given, or gained by thinking and reasoning. The most important concepts are already in the mind!
|
|
empiricism in terms of concepts
|
(Aristotle, Locke, Hume...) Concepts are learned through sensory experience, e.g. the concept of dog through experiencing several examples of dogs.
|
|
define script
|
Frame-like structures representing a stereotyped sequence of events in a particular context. Example: eating in a restaurant (you know what to expect). Problem: script match. "John visited his favorite restaurant on the way to the concert. He was pleased by the bill because he liked Mozart." Which script to call, restaurant or concert?
|
|
Script Components:
|
–Entry conditions –Results –Props –Roles –Scenes
|
|
Script Entry conditions
|
Descriptors of the world that must be true to call the script: restaurant open, customer hungry.
|
|
Script Results
|
Facts that are true when the script ends: customer is full, owner has more money.
|
|
Script props
|
"Things" that support script content: tables, waiters, menus...
|
|
Script roles
|
Actions performed: waiter takes orders, delivers food, brings the bill...
|
|
Script Scenes
|
Temporal aspects of the script: entering, ordering, eating...
|
|
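The script-component cards above can be sketched as one simple structured object; the slot names follow the cards, while the specific facts and strings are illustrative.

```python
# A frame-style sketch of the restaurant script: named slots for entry
# conditions, results, props, roles, and scenes. Contents are illustrative.
restaurant_script = {
    "entry_conditions": ["restaurant open", "customer hungry"],
    "results": ["customer full", "owner has more money"],
    "props": ["tables", "menus"],
    "roles": {"waiter": ["takes orders", "delivers food", "brings bill"]},
    "scenes": ["entering", "ordering", "eating", "paying"],
}

def can_call(script, world_facts):
    """A script is callable only when all of its entry conditions hold."""
    return all(cond in world_facts for cond in script["entry_conditions"])

print(can_call(restaurant_script, {"restaurant open", "customer hungry"}))  # True
print(can_call(restaurant_script, {"restaurant open"}))                     # False
```

The script-match problem from the earlier card is exactly the question of which such structure to retrieve when a story mentions cues from several scripts at once.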
long term memory
|
stored from 30 seconds to eternity, e.g. your mother's name
|
|
short term memory
|
held in the hippocampus; memory that is not kept longer than 30 seconds without reinforcement. Words as you read them are an example.
|
|
deductive inference
|
The conclusion follows from the premises: all rabbits like carrots; Peter is a rabbit; therefore Peter likes carrots.
|
|
inductive inference
|
The premises may predict a high probability of the conclusion, but do not ensure that the conclusion is true (i.e. they introduce uncertainty): teenagers are given many speeding tickets; therefore all teenagers speed.
|
|
modus ponens
|
Affirming the antecedent: if p, then q; p; therefore q. If it rains, the sidewalk will be wet. It rained, so the sidewalk must be wet.
|
|
modus tollens
|
Denying the consequent: if p, then q; not q; therefore not p. If it rains, the sidewalk will be wet. The sidewalk isn't wet, so it didn't rain.
|
|
denial of the antecedent (inverse error)
|
(invalid) If p, then q; not p; therefore not q. If it rains, the sidewalk will be wet. It didn't rain, so the sidewalk must be dry. Actually, the sidewalk could be wet for another reason.
|
|
affirming the consequent (converse error)
|
(invalid) If p, then q; q; therefore p. If it rains, the sidewalk will be wet. The sidewalk is wet, so it must have rained. But the sidewalk could be wet for another reason.
|
|
five computational procedures for images
|
1. Inspect: looking at something (fork to the left of the plate, knife to the right; is the fork to the left or right of the knife?). 2. Find: where do you keep your flashlight at home? Some people visualize their home, then search. 3. Zoom: does a frog have a tail? Some people visualize a frog and zoom in to check. 4. Rotate: to see a letter e on its back, we imagine an e, then turn it in our minds. 5. Transform: imagine a B, turn it on its back, and put a V under it; remove the line and you have a heart.
|
|
prototype
|
A set of typical conditions, so that the prototype for dog is [furry, has 4 legs, barks, etc.]. On the classical view (before prototypes), categorization is a matter of checking whether defining conditions apply; with prototypes, it is a looser process of seeing whether the typical conditions match the subject's characteristics.
|
|
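The classical-vs-prototype contrast above can be sketched in a few lines; the feature set, the example animal, and the 0.5 typicality threshold are illustrative assumptions.

```python
# Classical view: every defining condition must hold.
# Prototype view: enough typical conditions must match.
DOG_PROTOTYPE = {"furry", "has 4 legs", "barks", "wags tail"}

def classical_match(features):
    """Strict definition check: all conditions must apply."""
    return DOG_PROTOTYPE <= features

def prototype_match(features, threshold=0.5):
    """Looser typicality check: enough overlap with the prototype."""
    overlap = len(DOG_PROTOTYPE & features) / len(DOG_PROTOTYPE)
    return overlap >= threshold

basenji = {"furry", "has 4 legs", "wags tail"}  # a breed that rarely barks
print(classical_match(basenji))  # False: fails the strict definition
print(prototype_match(basenji))  # True: 3 of 4 typical features match
```

The atypical-but-clear member (a dog that doesn't bark) is exactly the kind of case that motivated Rosch's move away from the classical view.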
true or false: In order to understand an analogy and be able to make inferences, one needs sufficient understanding of the target domain.
|
False. One needs an understanding of the source to apply the analogy.
|
|
What are two sources of evidence for the existence of the basic level categorization?
|
they are morphologically simple, children acquire them first, and there is fast recognition at that level. (A source of evidence for superordinate categories would be that it isn't possible to make a single mental image.)
|
|
analogies are similar to what form of reasoning and why
|
induction: one estimates that if two concepts have multiple things in common, they may have more in common. It's based on incomplete information.
|
|
Representational power of rules: what are 3 different kinds of information that rules can represent?
|
Generalities about the world: IF X is a non-native English speaker THEN X has an accent. How to do things in the world: IF you come to work early THEN you can park. Linguistic regularities: IF a sentence has a plural subject, THEN it has a plural verb. Rules of inference, e.g. modus ponens as a single rule: IF you have an if-then rule, and the first part is true, THEN the then-part will be true too.
|
|
multiple inheritance problem in frames
|
Does Opus live at the North Pole or in the funnies? Is he a penguin or a cartoon character? Solution: add a new class to remove the ambiguity. The subclass has two parents: one an animal, the other a cartoon character.
|
|
what are three problems with the classical view of categories? Give an example of each. How did Rosch's prototype theory address each of these problems?
|
Some evidence for basic-level categories is that they are morphologically simple, children acquire them first, and there is fast recognition at that level. A source of evidence for superordinate categories would be that it isn't possible to make a single mental image.
|
|
induction
|
all dachshunds are small; all dachshunds are dogs; therefore all dogs are small. Invalid.
|
|
abduction
|
all dogs are hairy; all cats are hairy; therefore all cats are dogs. Invalid.
|
|
Wason selection task
|
If a card has an A on one side, it has a 4 on the other. Four cards are shown: A, B, 4, 7. How many and which cards must be turned to disprove the rule? People do much better when concrete content is used ("in a bar", "not in a bar", "21", "18"). People use pragmatic reasoning schemas.
|
|
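A small sketch of the selection task's logic: only cards whose hidden side could falsify the rule need to be turned. The helper name is mine; the card faces are the ones from the card above.

```python
# For the rule "if A on one side, then 4 on the other", a card must be
# turned iff its visible face could falsify the rule:
#   - an A might hide something other than a 4 (antecedent true),
#   - a number other than 4 might hide an A (consequent false).
# B and 4 can never falsify it, which is why people's urge to turn 4 is wrong.
def cards_to_turn(visible):
    must_turn = []
    for face in visible:
        if face == "A":                       # antecedent true: check the back
            must_turn.append(face)
        elif face.isdigit() and face != "4":  # consequent false: check the back
            must_turn.append(face)
    return must_turn

print(cards_to_turn(["A", "B", "4", "7"]))  # ['A', '7']
```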
mental models
|
mental representations that people use instead of formal logic. Johnson-Laird: why we don't do well on Wason's selection task when symbols instead of concrete situations are used.
|
|
Universal / affirmative
|
All X are Y
|
|
Particular / affirmative
|
Some X are Y
|
|
Universal / negative
|
No X are Y
|
|
Particular / negative
|
Some X are not Y
|
|
combinatorial explosion
|
the effect of functions that grow very rapidly as a result of combinatorial considerations.
|
|
heuristics
|
any algorithm that gives up finding the optimal solution for an improvement in run time; or a function that estimates the cost of the cheapest path from one node to another.
|
|
SOAR
|
Newell, 1990: all cognitive acts are some form of search task.
|
|
chunking
|
represents the conversion of problem-solving acts into long-term memory, e.g. IF you want to get from campus to home, THEN drive. You don't have to recalculate every time you encounter something similar to something you've already figured out.
|
|
What kind of information can rules represent?
|
Generalities about the world: IF X is a non-native English speaker THEN X has an accent. How to do things in the world: IF you come to work early THEN you can park. Linguistic regularities: IF a sentence has a plural subject, THEN it has a plural verb. Rules of inference, e.g. modus ponens as a single rule: IF you have an if-then rule, and the first part is true, THEN the then-part will be true too.
|
|
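The IF-THEN rules above behave like a tiny forward-chaining production system: any rule whose conditions are satisfied by the known facts fires and adds its conclusion. A minimal sketch with illustrative rules and facts.

```python
# Each rule is (set of IF-conditions, THEN-conclusion); strings are illustrative.
rules = [
    ({"X is non-native English"}, "X has an accent"),
    ({"you come to work early"}, "you can park"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose IF-part is satisfied by known facts,
    until no new conclusions can be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

out = forward_chain({"you come to work early"}, rules)
print("you can park" in out)  # True: the second rule fired
```

Chunking (SOAR) would amount to adding a new compiled rule to this list after a successful chain of firings, so the conclusion is reached in one step next time.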
rules vs. logic
|
Increased computational power and psychological plausibility at the cost of losing FOL's rigor and elegance.
|
|
learning: rules can be:
|
Innate: biological circuitry, e.g. vision. Learned by inductive generalization: formed from examples, or formed by chunking (SOAR) or composition (ACT). Learned by specialization: specific for a given situation. Learned by abduction: rules run backward to provide an explanation. Learned by their performance: incremental learning through an associated usefulness value.
|
|
rules summary
|
Rules are a natural way of describing human knowledge. Rule-based systems had psychological aims from the beginning (as opposed to logic-based models) and abandon FOL's expressiveness for simple if-then rules. Computational advantages: concise and independent representations. Psychological plausibility: strongest among all CRUM-based approaches. FOL vs. rules: it is apparent that logic is not the way to go in cognitive science because of the lack of core ideas about representation and computation.
|
|
rationalism
|
(Plato, Descartes, Leibniz...) Knowledge innately given, or gained by thinking and reasoning. The most important concepts are already in the mind!
|
|
empiricism
|
(Aristotle, Locke, Hume...) Concepts are learned through sensory experience, e.g. the concept of dog through experiencing several examples of dogs.
|
|
concepts
|
Mental representations of stereotypical situations or entities (objects) in a given category.
|
|
frames
|
Data structure for representing a stereotyped situation, e.g. being in a certain kind of living room, or going to a child's birthday party. Frames represent knowledge as structured objects with named slots that have values.
|
|
frame issues
|
Is Opus a cartoon or a bird? He fits both frames. Solution: make him a subclass of both.
|
|
concept learning can be:
|
Innate: basic concepts + mechanisms to form new ones; "hardwired" to recognize faces, properties of physical objects... Formed from examples: the sample size needed for generalization varies; plasticity (tuning) from further examples, i.e. online adaptation. Formed from other concepts: a powerful "tool" of the mind! Typical mechanism: combination of concepts. Harder cases: when abduction (use of hypothesis) is required; no computer models yet (too difficult to model), e.g. "blind lawyer".
|
|
analogies:
|
single-instance induction
|
|
analogies in AI
|
case-based reasoning
|
|
analogies: how to distinguish valid inferences from superficial ones
|
Find the causal relations that give the right and relevant (to the goal) outcome, i.e. analogy representation requires causality.
|
|
framework for analogical reasoning
|
1. Retrieval: establishing the initial elements of analogical mapping. Given a target problem, select a potential source analog; select features of target and source that increase the likelihood of retrieving a useful source analog. 2. Elaboration: derivation of additional features of the source (if needed). 3. Mapping and inference: map source attributes into the target domain. 4. Justification: is the mapping valid? If not, go back to (3) for modification. 5. Learning (adaptation): acquired knowledge stored and ready to be used in the future. Main points of the framework: retrieval (remembering), comparison, adaptation.
|
|
3 kinds of learning in analogical reasoning
|
1. Storage of cases based on experience. 2. Adaptation of a previous case to solve a new one. 3. Generalization ("frame on top of an analogy"): extract a pattern from source and target and form an "analogical schema", e.g. an abstracted schema for registration, based on analogical reasoning about registering for a course this year from how I registered last year. Analogical schemas are like specific concepts.
|
|
6 points for use of analogy
|
1. Use familiar sources (e.g. atoms/solar system: BAD!). 2. Make the mapping clear (e.g. mind/computer). 3. Use deep analogies (helps toward the student's goal). 4. Describe the mismatches (perfect analogies don't exist!). 5. Use multiple analogies (to compensate for mismatches). 6. Perform "analogy therapy" (check and correct students).
|
|
vision
|
is the process that produces from images of the external world a description that is useful to the viewer and not cluttered with irrelevant information (David Marr).
|
|
visual perception
|
is the end product of vision. Consists of: 1. detecting light, 2. interpreting (seeing) the consequences of the light stimulus. The visual system is composed of an optical system and a perceptual system.
|
|
Glasgow & Papadias' scheme on computational imagery: deep representation
|
Stored in long-term memory. Hierarchical, descriptive representation with all relevant information about the image.
|
|
Glasgow & Papadias' scheme on computational imagery: spatial representation
|
Symbolic representation of image components. Stored in short-term memory.
|
|
Glasgow & Papadias' scheme on computational imagery: visual representation
|
Shape, relative distance, relative size... Stored in short-term memory.
|
|
Representations for computational imagery: long-term memory
|
Memory stored as meaning. Lasts from 30 s to a lifetime.
|
|
Representations for computational imagery: working memory
|
Structures and processes used for temporarily storing and manipulating information. Lasts <= 30 seconds.
|
|
What is a symbolic array?
|
Multidimensional, hierarchical representation of images in which spatial relations are made explicit, e.g. a glass of water.
|
|
Hemianopsia:
|
absence of vision in half of a visual field
|
|
hypercolumn
|
(Neurophysiological evidence of low-level vision.) The basic computational unit of the visual system (Hubel and Wiesel, 1962). A block of cells that processes a small patch of the input image; orientation slabs process edges and bars in 10-degree steps; adjacent hypercolumns process adjacent areas on the retinal image.
|
|
left and right brain lesions
|
With Navon figures (small letters arranged to form a large letter): right-hemisphere lesions leave patients seeing only the smaller components (local); left-hemisphere lesions leave only the bigger overall figure (global)
|
|
Connectionism
|
Style of modeling based upon networks of interconnected simple processing units
|
|
parallel constraint satisfaction
|
Simultaneously satisfying multiple interacting constraints; e.g. a network settling on one interpretation of the Necker cube
|
|
levels of organization in the nervous system
|
CNS, Systems (digestive, etc.), Maps, networks, neurons, synapses, molecules
|
|
neurons
|
are electrically excitable cells in the nervous system that function to process and transmit information
|
|
action potential
|
The action potential is a wave of depolarization (electrical discharge) traveling down the axon
|
|
local representation in neuro systems and connectionist systems
|
– Information localized in highly specialized neurons • “Grandmother neuron”: a neuron that fires only when the observer sees his grandmother!
|
|
sparse representations of neural models
|
– Information is represented by a few neurons within a large neuronal network – Somewhere in between local and distributed representations • The optimal neural code? – Computational models of how visual cortex processes natural images strongly suggest sparse coding
|
|
perceptron history
|
Initial excitement: Rosenblatt (1957) – Simplest kind of feedforward neural network: a linear classifier – Strong interest in modeling how neural networks might contribute to thought • Dark winter: Minsky & Papert (1969) – A single-layer perceptron cannot learn the XOR function – AI & Psych shifted to rule- and concept-based systems • 1980s: ANN renaissance – Connectionism is born (Hinton & Anderson (1981); Rumelhart & McClelland (1986)) • Multi-layer perceptron
|
|
basic elements of a connectionist network
|
– A set of processing units – A state of activation – An output function – Connectivity pattern among units – Activation rule – Learning rule – Environment
|
|
what do ANNs learn
|
Neural networks try to learn the decision boundary that minimizes the empirical error
|
|
what is a decision boundary
|
In a classification problem with n classes, the decision boundary is the hyperplane that partitions the underlying vector space into n sets, one for each class
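A minimal sketch in Python (the weights here are hand-picked for illustration, not learned): the hyperplane w·x + b = 0 is the decision boundary, and the sign of w·x + b assigns the class.

```python
# Hypothetical linear classifier: the hyperplane w.x + b = 0 is the
# decision boundary; the sign of w.x + b determines the class.
def classify(w, b, x):
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else 0

w, b = [1.0, 1.0], -1.0            # boundary: x1 + x2 = 1
print(classify(w, b, [2.0, 2.0]))  # point above the line -> 1
print(classify(w, b, [0.0, 0.0]))  # point below the line -> 0
```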
|
|
supervised learning
|
– Choose a cost function – Compute outputs from the given inputs – Criterion: similarity between desired and computed outputs – Applications: classification, prediction
|
|
unsupervised learning
|
– Find structure in given inputs – Criterion: ability to predict where new inputs are likely to be – Applications: clustering, outlier detection
|
|
reinforcement learning
|
– Select actions to maximize rewards • Requires a task and a policy – Criterion: reward accumulated over time – Applications: mobile robots, making plans
|
|
backpropagation
|
Supervised learning algorithm – “Backwards propagation of errors”: the errors propagate backwards from the output nodes to the inner nodes • Used in feed-forward networks • Requires a differentiable transfer function • Algorithm: gradient descent
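A toy sketch of backpropagation in pure Python (network size, learning rate, and epoch count are arbitrary choices): a small feed-forward net with a differentiable sigmoid transfer function learns XOR by propagating output errors back to the hidden layer via gradient descent.

```python
import math, random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny 2-3-1 feed-forward net trained on XOR.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 1, 1, 0]
H = 3  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(X, T))

loss0 = loss()
for _ in range(2000):
    for x, t in zip(X, T):
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)           # output-node error
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # error propagated back
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

print(round(loss0, 3), round(loss(), 3))  # error shrinks with training
```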
|
|
Hebb's postulate
|
“When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.” fire together, wire together.
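Hebb's postulate as a one-line update rule (a sketch; eta is a hypothetical learning rate): the weight between two units grows with the product of their activities.

```python
# "Fire together, wire together": a minimal sketch of Hebb's rule.
# The weight between two units grows in proportion to the product of
# their activities (eta is a hypothetical learning rate).
def hebb_update(w, pre, post, eta=0.25):
    return w + eta * pre * post

w = 0.0
for _ in range(4):               # both cells repeatedly active together
    w = hebb_update(w, 1.0, 1.0)
print(w)  # -> 1.0: A's efficiency in firing B has increased
```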
|
|
Hopfield network
|
Performs like an associative memory (content-addressable memory): the network will converge to a “remembered” state if it is given only part of the state • All connections are symmetric (wij = wji) • Binary threshold units • Network dynamics: convergence is guaranteed, i.e. to local minima • Activation rule • Application: postal-service handwriting recognition
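A sketch of a Hopfield net acting as content-addressable memory (the stored pattern and the corrupted bits are arbitrary choices): one pattern is stored with a Hebbian outer product and then recovered from a partially corrupted cue.

```python
# Hopfield net as content-addressable memory: units are +/-1,
# weights are symmetric with a zero diagonal.
def train(pattern):
    n = len(pattern)
    return [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
            for i in range(n)]

def recall(W, state, steps=5):
    n = len(state)
    s = list(state)
    for _ in range(steps):  # threshold updates until the state settles
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

stored = [1, -1, 1, 1, -1, -1, 1, -1]
W = train(stored)
cue = list(stored)
cue[0], cue[3] = -cue[0], -cue[3]        # corrupt two bits of the cue
print(recall(W, cue) == stored)  # -> True: the memory is restored
```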
|
|
graceful degradation
|
• Aka fault tolerance – “A chain is as strong as its weakest link” • Property that enables a system to continue operating properly in the event of the failure of some of its components • Each unit participates in the storage of many patterns, and each pattern involves many units; the loss of a few components will degrade but not lose the information
|
|
neurological plausibility of natural neural networks
|
– Billions of neurons, trillions of connections – Each neuron has dozens of neurotransmitters • Brain as electro-chemical machine – Pre- and post-synaptic plasticity – Single neurons do not have local representations (e.g. conceptual/propositional interpretations), except for the grandmother neuron! – Synapses are one-way (vs. symmetric links in ANNs) – Connections with other neurons are either excitatory or inhibitory, but not a mixture – Networks are considerably larger than current ANNs
|
|
perceptron learning rule (supervised learning algorithm)
|
Given inputs X1, ..., Xi, the perceptron produces output Y; we are told the correct output O. If Y is the wrong answer, change the weights wi; otherwise do nothing. Uses a step activation function; contrast with backpropagation, which requires a differentiable (e.g. sigmoidal) activation.
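The rule can be sketched in a few lines of Python (learning rate and number of passes are arbitrary): a perceptron with a step activation learns the linearly separable OR function, updating its weights only when the answer is wrong.

```python
# Perceptron learning rule: on a wrong answer, nudge each weight
# toward the correct output; otherwise do nothing.
def predict(w, x):
    # x includes a constant 1 for the bias weight; step activation
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# OR function: (bias, x1, x2) -> target
data = [((1, 0, 0), 0), ((1, 0, 1), 1), ((1, 1, 0), 1), ((1, 1, 1), 1)]
w = [0.0, 0.0, 0.0]
for _ in range(20):                  # a few passes over the data
    for x, target in data:
        y = predict(w, x)
        if y != target:              # wrong answer: change the weights
            w = [wi + 0.1 * (target - y) * xi for wi, xi in zip(w, x)]

print(all(predict(w, x) == t for x, t in data))  # -> True
```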
|
|
gradient descent
|
Present a training sample to the neural network • Compare the network's output to the desired output; calculate the error in each output neuron • For each neuron, calculate what the output should have been, and a scaling factor: how much lower or higher the output must be adjusted to match the desired output (this is the local error) • Adjust the weights of each neuron to lower the local error • Assign "blame" for the local error to neurons at the previous level, giving greater responsibility to neurons connected by stronger weights • Repeat the steps above on the neurons at the previous level, using each one's "blame" as its error
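The procedure reduces to a simple loop; here is a one-dimensional sketch using a hypothetical error surface f(x) = (x - 3)^2, where the "weight" x is repeatedly adjusted against the gradient to lower the error.

```python
# Gradient descent in one dimension: step against the gradient of the
# (hypothetical) error surface f(x) = (x - 3)^2, minimized at x = 3.
def grad(x):
    return 2 * (x - 3)        # derivative of (x - 3)^2

x, lr = 0.0, 0.1
for _ in range(100):
    x -= lr * grad(x)         # adjust the weight to lower the error

print(round(x, 4))  # -> 3.0
```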
|
|
Hebbian learning based on 2 physiological principles
|
1. the existence and properties of continuous cerebral activity 2. the nature of synaptic transmission in the central nervous system
|
|
Turing Test
|
–A human judge engages in a natural language conversation with two other parties, one a human and the other a machine. –If the judge cannot reliably tell which is which, then the machine is said to pass the test. –It is assumed that both the human and the machine try to appear human.
|
|
Herbert
|
Soda-can-collecting robot
|
|
Genghis
|
Insect-like, six-legged walking robot • Follows humans (through heat sensors) • Behaviors (subsumption-architecture layers): 0 Stand up, 1 Simple walk, 2 Force balancing, 3 Leg lifting, 4 Whiskers, 5 Pitch stabilization, 6 Prowling, 7 Steered prowling
|
|
hemispatial neglect
|
Damage to the right hemisphere (in right-handed people); the patient lacks awareness of the left side of the world. Vision itself is intact; it is a deficit of attention.
|
|
anosognosia
|
Denial that a problem exists
|
|
Navon figures
|
Local or global neglect: where a large letter is made of smaller letters, patients see only the smaller or only the larger. Timing is critical; gaps between presentations affect the results.
|
|
dynamics-
|
Time matters: the world is not static
|
|
phase transition-
|
Rapid transition between states, e.g. snapping awake; can be visualized as a trajectory through a 3-D state space
|
|
attractors:
|
A state toward which the system is drawn, e.g. hunger pulling you toward eating, or snapping awake
|
|
somatic markers-
|
Communication from the body to the brain in the form of emotions (Damasio)
|
|
WYSIWYG-
|
"What You See Is What You Get": naive realism, the assumption that perception directly presents the world as it is (cf. the "size of your head" demonstration)
|
|
Kismet
|
Sociable robot used to study affective learning
|
|
collective intelligence
|
Termites use pheromones – No direct communication between individuals – Information communicated through the state of, or changes in, the local environment – Each individual decides its action based on a simple algorithm – A critical number of individuals is needed for intelligent behavior to arise
|
|
Thagard's claim against CRUM
|
– CRUM focuses too much on mental representations – CRUM neglects the body and the world – “Human intelligence is a matter of bodies inhabiting physical environments, and operating in them in ways that are not at all like how a computer processes information.” – CRUM needs to be expanded/supplemented to account for how thinking depends on interactions with the world – Thagard suggests expanding CRUM through dynamic systems
|
|
four fields that claim thinking is not just in the head
|
• Psychology: Gibson • Linguistics: Lakoff • Philosophy: Heidegger, Searle • AI/Robotics: Brooks
|
|
Gibson quote
|
We perceive in order to operate on the environment: perception is not about inferences, it is “designed” for action. Affordances.
|
|
affordances
|
Gibson called the perceivable possibilities for action affordances • An affordance is an action that an individual can potentially perform in its environment – e.g. surfaces for walking, handles for pulling, space for navigation, tools for manipulating, etc. • Main idea: our whole evolution has been geared toward perceiving useful possibilities for action.
|
|
bodies play a crucial role in our thinking
|
(Lakoff and Johnson, 1999) – Metaphors derived from body-based relations • up & down, left & right, in & out...
|
|
main point in problem with CRUM
|
CRUM is incapable of appreciating the contextual ways in which people deal with the world
|
|
intentionality
|
Aka “aboutness”: the property of being about something. Mental states are intended to represent the world; they possess intentionality • According to John Searle, machines will never achieve intentionality – We are not computers, contra Alan Turing • Computers are just syntactic engines; they lack semantics!
|
|
what are dynamic systems (memorize this one!)
|
Systems whose changes over time can be described by mathematical equations – Current state of the system depends on previous values
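A concrete sketch (the logistic map, with r chosen so the system has a single point attractor): the next state is a mathematical function of the previous state, and the trajectory settles onto the attractor x* = 1 - 1/r.

```python
# A minimal dynamical system: the logistic map x' = r*x*(1 - x).
# The current state depends only on the previous state; for r = 2.5
# the trajectory settles onto the point attractor x* = 1 - 1/r = 0.6.
def step(x, r=2.5):
    return r * x * (1 - x)

x = 0.2                       # initial condition
for _ in range(200):
    x = step(x)

print(round(x, 6))  # -> 0.6
```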
|
|
dynamic systems: state variable
|
Variable used to describe & measure the system
|
|
dynamic systems: state space
|
– Point in state space = combination of values of all state variables – Changes in the system described as movement from one point in space to another
|
|
dynamic systems: attractors
|
stable states
|
|
dynamic systems: phase transitions
|
Changes from one attractor to another, e.g. abrupt weather change • Examples in economics, physics... – The weather, the number of fish in a lake in spring, the swinging of a clock pendulum...
|
|
why do people have stable but unpredictable patterns of behavior?
|
– Human thought is described by a set of variables – These variables are governed by a set of nonlinear equations – These equations establish a state space that has attractors – The system described by the equations is chaotic – Attractors explain stable patterns of behavior – Multiple attractors explain abrupt phase transitions – The chaotic nature of the system explains why behavior is unpredictable
|
|
CRUM main criticism
|
our minds don’t have to represent the world because they are situated in it
|
|
connectionism
|
“Mind as brain” – The functioning of the mind is like the functioning of the brain • The mind is the brain
|
|
dynamicism
|
Mind as the Watt governor (a device for controlling the rotational speed of a shaft): stability achieved through feedback control
|
|
behaviorism
|
Looking inside the “black box” is forbidden! – Point of agreement between philosophical and psychological views of behaviorism: • Internal representations, states and structures are irrelevant for understanding the behavior of cognitive systems – Skinner (psychologist): only input/output relations are scientifically accessible – Ryle (philosopher): mental predicates, if they are to be consistent with natural science, must be analyzed in terms of behavioral predicates
|
|
dynamicist hypothesis
|
Natural cognitive systems are certain kinds of dynamical systems and are best understood from the perspective of dynamics • i.e. cognition is best understood as complex, dynamical interactions of the cognizer with the environment
|
|
dynamical models must be
|
–Deterministic –Described with respect to time –Low dimensionality –Intimately linked (coupling) • DS theory describes the world with geometrical concepts –State space, path, trajectory, topology, attractor... • e.g. attractor: point in the state space towards which a trajectory will tend when in the neighborhood
|
|
determinism (in dynamicism)
|
–Behavior will be repeated if initial conditions are the same
|
|
time in dynamicism
|
– Dynamicists criticize other approaches for leaving time “out of the picture” – The brain changes continually as we interact with the environment – There are no representations, only state-space evolutions
|
|
models in dynamicism
|
A precise description of the properties of the system being modeled • A model is a precisely constrained analogy in which each element of the source is explicitly represented by some particular aspect of the model
|
|
dimensionality in dynamicism
|
–Number of parameters in the model –Claim: complex cognitive behavior should be modeled via low-dimensional models –How do they do it?
|
|
What’s the difference between deductive and inductive reasoning?
|
Deductive reasoning: conclusions that follow with certainty from premises. Inductive reasoning: conclusions that follow probabilistically from premises.
|
|
What level is the entity “rocking chair”? Justify your reasoning why it belongs in the level, and why it’s not in the other levels.
|
Subordinate level (more specific than basic). Not basic because it is not morphologically simple, children do not acquire it first, and it is not the most readily recognized level in experimental tasks; not superordinate because it is more specific, not more general, than the basic-level category (“chair”).
|
|
What is the Universal Turing Machine?
|
A theoretical machine that can simulate any other Turing machine, and hence compute anything that is computable
|
|
What evidence do we have that metaphors influence the way we think—so strongly that we say “metaphors we live by”. Give an example.
|
1. Real-world sayings: “Time is money: wasting time, spending time...” 2. Embodiment: neural binding
|
|
What is the “permission schema” and how does it relate to how humans use logic?
|
When people frame a situation as a matter of permission (especially the position of granting permission), they reason more effectively on the Wason selection task, applying modus ponens and modus tollens correctly.
|
|
What is BOLD and what type of neuroimaging technique is it used in?
|
Blood Oxygen Level Dependent and fMRI (functional magnetic resonance imaging)
|
|
What is synesthesia? What’s one experiment where you could show this?
|
Multimodal perceptual experiences arising from a unimodal sensory stimulus. Experiments: random-dot stimuli, motion-defined stimuli, Stroop interference.
|
|
What is Wernicke’s aphasia? What kind of speech do these aphasics have? How is this different from psychosis?
|
Speech is preserved, but language content is incorrect; a type of aphasia often (but not always) caused by neurological damage to Wernicke’s area in the brain. Psychosis, by contrast, still has syntax. Ex: “I called my mother on the television and did not understand the door. It was too breakfast, but they came from far to near. My mother is not too old for me to be young.”
|
|
What is hemispatial neglect?
|
After damage to one hemisphere of the brain, a deficit in attention to the opposite side of space is observed.
|
|
What is one common feature of elite performers across all disciplines? What is one possible explanation for how it enhances their talents?
|
A nap. The sleep helps to consolidate their memory.
|
|
What is spontaneous generalization?
|
When one node is activated, related properties activate with it, highlighting a stereotype or general description that gives an impression of, for example, what a typical jet is
|
|
What is savant syndrome?
|
Savant Syndrome describes a person having both a severe developmental or mental handicap and extraordinary mental abilities not found in most people
|
|
What is the nature of emotions?
|
Goal Satisfaction (we get angry because our goals are frustrated).
|
|
MRI makes use of the fact that 80% of your body is
|
___(hydrogen atoms)___
|
|
Name 1 pro and 1 con for PET scans.
|
Pro = shows chemicals in the brain, good spatial resolution (~1 cm). Con = invasive (radioactive tracer), expensive, slow.
|
|
What does WYSIWYG stand for? Name one piece of evidence that supports or refutes this theory.
|
Support = ?? Refute = visual illusions
|
|
What do the Polgár sisters tell us about nature vs nurture? What is missing from this story that might make it more of a controlled experiment?
|
(Nurture was sufficient to produce chess experts; but not a controlled experiment: one would need a child from another family (different genes), or to withhold training from one of their own children)
|
|
What is “time locking” and with which cognitive neuroscience methodology is it used?
|
Time locking = lining up the onset of the stimulus across different trials/subjects to measure the shared ERP. Method = EEG.
|
|
TERMS: What is monism? What is an alternative to monism?
|
(Monism = there is only one type of stuff, mental and physical being the same substance; alternative = dualism)
|
|
TERMS: What is Parallel Constraint Satisfaction? Give an example.
|
PCS = solving multiple problems / satisfying multiple constraints at one time. Ex = Necker cube network.
|
|
TERMS: What is the difference between situated AI and embodied AI?
|
Situated = senses its current surroundings, avoids use of abstractions for representations. Embodied = robots must be physical and experience the world directly, not through simulation.
|
|
NOT CRUM: What is an activation function? What are the two types of activation functions we have discussed as a class?
|
Activation function = function used to change the state of a node in an ANN: it sums the inputs to the node, transforms the sum by some function, then changes the node state. In class = step, sigmoidal.
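The two functions from class, sketched in Python (the example weights are arbitrary): a step function as in the perceptron, and a differentiable sigmoid as used with backpropagation.

```python
import math

# Two activation functions: a step function (as in the perceptron)
# and a sigmoid (differentiable, so usable with backpropagation).
# Each transforms the summed, weighted input of a node.
def step(z):
    return 1 if z > 0 else 0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

total = 0.5 * 1 + (-0.2) * 1       # summed, weighted inputs
print(step(total), sigmoid(0))     # -> 1 0.5
```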
|
|
NOT CRUM: What are two applications of embodied AI?
|
Education and entertainment
|
|
NOT CRUM: What is subsumption architecture? And what is its main problem?
|
A design for robots/AI: no central representation, no explicit goals, multiple layers running in parallel; the purpose of the creature is implicit in the higher-level layers. Main problem = scalability.
|
|
NOT CRUM: What is the Holy Grail for Cognitive Dynamicists?
|
To find the equations that specify how the mind changes over time
|
|
NOT CRUM: What is state space? What are phase transitions? How do these relate to Dynamic Systems (DS)?
|
State space = the space of all possible states; a point in it is the vector of all state-variable values. Phase transitions = abrupt changes from one attractor to another. Both are parts/characteristics of a DS: how we characterize and describe a DS.
|
|
NOT CRUM: Embodiment. Define and give three supporters (and their fields) from three different fields of study.
|
Embodiment is direct experience and knowledge through the body; thinking is not just in the head. Gibson (psychology), Lakoff (linguistics), Searle and Heidegger (philosophy), Brooks and Smith (AI/robotics).
|
|
Nature vs. Nurture The basic argument/debate
|
|