64 Cards in this Set

  • Front
  • Back
good properties of a definition (necessary and sufficient)
necessary: everything that is a cat matches the definition (for a definition of "cat", no cat is left out)
sufficient: everything that matches the definition is a cat (nothing else gets in)
the classical view of concepts
concepts are logical definitions: necessary and sufficient conditions for membership
Problems for the classical view
- good definitions are hard to find
- borderline cases: is a lamp or a rug furniture?

EX: bachelor = unmarried adult male, yet many men match this definition whom we don't consider bachelors (e.g., a priest, a man with a life partner), and a 17-year-old is a borderline case; a minimal check is sketched below
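A minimal sketch of that failure (attribute names made up): the classical definition becomes a predicate, and the priest is a counterexample to sufficiency.

```python
# A classical (necessary-and-sufficient) definition written as a predicate.
def is_bachelor(person):
    # classical definition: unmarried adult male
    return person["male"] and person["adult"] and not person["married"]

priest = {"male": True, "adult": True, "married": False}
# True by the definition, yet intuitively we wouldn't call him a bachelor.
print(is_bachelor(priest))
```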
typicality effects
typicality influences reaction time and generalization.

EX: something that looks only a little like a bird, and that we have never seen before, is hard to instantly classify as a bird
prototype and exemplar theories
prototype theory: each concept has a prototype, and typicality is a function of similarity to the prototype. but how does a single prototype carry information about the category's variability?

exemplar theory: we store every instance that we see in memory

-> exemplar theory can reproduce prototype effects, since averaging over stored exemplars behaves like a prototype (both are sketched below)
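A hedged sketch contrasting the two theories; the 2-D feature values and the similarity function are invented for illustration.

```python
import math

def similarity(a, b):
    """Similarity decays exponentially with Euclidean distance."""
    return math.exp(-math.dist(a, b))

bird_exemplars = [(1.0, 0.9), (0.8, 1.1), (1.2, 1.0)]   # stored instances
# The prototype is just the average of the exemplars.
bird_prototype = tuple(sum(xs) / len(xs) for xs in zip(*bird_exemplars))

novel = (0.9, 1.4)  # the odd-looking, never-before-seen bird from the card

# Prototype theory: typicality = similarity to the single summary point.
proto_typicality = similarity(novel, bird_prototype)

# Exemplar theory: typicality = average similarity to every stored instance.
exemplar_typicality = sum(similarity(novel, e) for e in bird_exemplars) / len(bird_exemplars)

print(round(proto_typicality, 3), round(exemplar_typicality, 3))  # 0.662 0.645
```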
The Wason task, effects of context, and possible explanations
a logic puzzle that tests human intuition; the test-taker must identify which cards to turn over to check a conditional rule. most people fail the abstract version. however, when the rule is one they know well (EX: if someone is drinking alcohol, they are over 21), the same puzzle becomes easy
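A small sketch of the abstract version (the vowel/even-number rule and card faces are the standard textbook example, not from the card text): only cards that could falsify the rule need to be flipped.

```python
# Rule: "if a card has a vowel on one side, it has an even number on the other."
cards = ["E", "K", "4", "7"]

def must_flip(face):
    is_vowel = face.isalpha() and face.upper() in "AEIOU"
    is_odd_number = face.isdigit() and int(face) % 2 == 1
    # Flip anything whose hidden side could violate "vowel -> even":
    # the vowel (its number might be odd) and the odd number (it might hide a vowel).
    return is_vowel or is_odd_number

print([c for c in cards if must_flip(c)])  # ['E', '7']
```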
The basic idea behind the heuristics and biases approach (a question regarding the relationship between them)
if people use heuristics (mental shortcuts) to solve problems, what systematic biases do those shortcuts produce in their decisions?
The representativeness heuristic and how it can lead people astray
- representativeness: if event B resembles (is representative of) process A, the probability that A produced B is judged to be high, even when base rates say otherwise

- if we can generate more examples of A from memory, A appears more likely (this is the closely related availability heuristic)
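A worked numeric sketch (all numbers invented) of how judging by resemblance alone ignores base rates, while Bayes' rule does not.

```python
# Suppose only 1 in 1000 people come from process A, the description B is
# very typical of A (P(B|A) = 0.9) but still fairly common otherwise
# (P(B|not A) = 0.1). Resemblance alone suggests ~0.9; Bayes' rule does not.
p_a = 0.001
p_b_given_a = 0.9
p_b_given_not_a = 0.1

p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
posterior = p_b_given_a * p_a / p_b
print(round(posterior, 4))  # ~0.0089, far below the intuitive 0.9
```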
main points about human reasoning
we act in ways other than what probability theory prescribes, and what we remember is not simply a function of frequency

humans don't seem to categorize things using rules, and human reasoning is not in perfect accordance with probability theory
the idea behind connectionism
a theory of how thought works, using spreading activation in an artificial neural network
how ANNs work
a network of many simple nodes, analogous to neurons partly because there are so many of them. the hypothesis is that nodes work like neurons: a node fires once its input exceeds a certain threshold
localist vs. distributed representations
localist: one node per concept
distributed: a pattern of activation across many nodes per concept
properties of distributed representations
representations can be plotted as points in a (say, 2-D) space, where a category boundary is a line in that space
robust to noise; support interpolation (70% dog)

-> this explains how humans can hear and identify things in loud situations, and how something that sounds partway between a dog and a cat can be judged as some percent "dog" rather than just "dog" or "cat" (a sketch follows)
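A minimal sketch of the graded "70% dog" idea; the weights are hand-picked for illustration, not learned. A logistic unit over a 2-D representation returns a degree of dog-ness instead of a hard label.

```python
import math

w = (1.5, -1.0)   # weights defining the dog/cat boundary in the 2-D plane
b = 0.0

def dogness(features):
    """Sigmoid of the signed distance from the linear boundary: 0 = cat, 1 = dog."""
    z = w[0] * features[0] + w[1] * features[1] + b
    return 1.0 / (1.0 + math.exp(-z))

print(dogness((2.0, 1.0)))   # clearly dog-like -> ~0.88
print(dogness((0.6, 0.0)))   # ambiguous sound -> ~0.71, roughly "70% dog"
```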
associative learning / Hebb rule
neurons that fire together wire together: activation patterns that co-occur often become more likely to recur, because the weights between co-active units strengthen each time (sketched below)
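A minimal sketch of the Hebb rule with a single weight; the learning rate and activity pairs are assumed values.

```python
learning_rate = 0.1
weight = 0.0

# Each pair is (pre-synaptic activity, post-synaptic activity).
coactivations = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]

for pre, post in coactivations:
    # "Fire together, wire together": the weight grows only when
    # both units are active at the same time.
    weight += learning_rate * pre * post

print(round(weight, 2))  # 0.3 -- strengthened by the three co-active trials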
kinds of learning problems: supervised and unsupervised
unsupervised: we learn by picking up on statistical regularities. like classical conditioning

supervised: a "teacher" tells us if something is right or wrong, FEEDBACK. like operant conditioning
idea behind error-driven learning (hillclimbing, gradient descent)
recalculate the weights so that error is minimized
if error is a differentiable function of the weights, we can follow the gradient downhill to find minima: the points where error is lowest (see the sketch below)
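A minimal sketch of gradient descent on a toy differentiable error function; the function E(w) = (w - 3)^2 and the learning rate are invented for illustration.

```python
# E(w) = (w - 3)^2 has its minimum at w = 3; each step moves w downhill
# along the negative gradient dE/dw = 2(w - 3).
w = 0.0
learning_rate = 0.1

for _ in range(50):
    gradient = 2 * (w - 3)
    w -= learning_rate * gradient   # step opposite the slope

print(round(w, 3))  # ~3.0, the error minimum
```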
problems with hillclimbing algorithms
we might land in one of many local minima when we're actually looking for the absolute (global) minimum. because of this, we might be making a mistake but think we're correct, since no small step reduces the error
perceptrons and their problems
a feed-forward neural network with only input and output layers
- such a network can only learn linearly separable categories (it cannot learn the XOR function)
e.g. dog vs. cat, when a single line can separate them (see the sketch below)
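A minimal perceptron sketch (learning rate and epoch count are assumed values): it learns AND, which is linearly separable, but no setting of its weights can ever produce XOR.

```python
def train_perceptron(data, epochs=100, lr=0.1):
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x0, x1), target in data:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out
            # Perceptron rule: nudge weights toward the correct answer.
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

f_and = train_perceptron(AND)
f_xor = train_perceptron(XOR)
print([f_and(*x) for x, _ in AND])  # [0, 0, 0, 1] -- learned
print([f_xor(*x) for x, _ in XOR])  # some pattern, but never [0, 1, 1, 0]
```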
basic idea behind backpropagation
compute the error at the output, apply gradient descent there, propagate the error back to a hidden layer, then apply gradient descent again
- activation flows forward only, with no cycles or feedback; it is the error signal that moves backward (sketched below)
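A hedged backpropagation sketch; the hidden-layer size, learning rate, epoch count, and random seed are all assumptions. With a hidden layer the network can usually learn XOR, which the perceptron above cannot.

```python
import math, random

random.seed(0)

def sig(z):
    return 1 / (1 + math.exp(-z))

H = 3  # hidden units (assumed size)
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # per hidden unit: w0, w1, bias
W2 = [random.uniform(-1, 1) for _ in range(H + 1)]                  # output weights + bias

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
lr = 0.5

def forward(x0, x1):
    h = [sig(w[0] * x0 + w[1] * x1 + w[2]) for w in W1]
    y = sig(sum(W2[i] * h[i] for i in range(H)) + W2[H])
    return h, y

for _ in range(20000):
    for (x0, x1), t in XOR:
        h, y = forward(x0, x1)
        d_out = (y - t) * y * (1 - y)                          # error signal at the output
        d_hid = [d_out * W2[i] * h[i] * (1 - h[i]) for i in range(H)]  # propagated back
        for i in range(H):                                     # gradient step, output layer
            W2[i] -= lr * d_out * h[i]
        W2[H] -= lr * d_out
        for i in range(H):                                     # gradient step, hidden layer
            W1[i][0] -= lr * d_hid[i] * x0
            W1[i][1] -= lr * d_hid[i] * x1
            W1[i][2] -= lr * d_hid[i]

for (x0, x1), _ in XOR:
    print((x0, x1), round(forward(x0, x1)[1]))  # usually 0, 1, 1, 0
```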
arguments for and against connectionism
for: distributed representations may be key to understanding thought
- against: no one is really sure where representations come from in the first place

for: artificial neural networks are more brain-like than logic; against: we still aren't sure exactly how neurons work

for: they explain the "fuzziness" of human reasoning; against: symbolic approaches have gotten us very far

the fact that we must posit a hidden layer is telling about how much we actually know

the debate parallels Nativist vs. Empiricist
how AI fits into cogsci (linguistics, psychology, neuro)
- allows us to explore principles that make the mind work
- we can test our theories
analysis by synthesis
in many sciences we break things down into the most elementary pieces (analysis). but since we don't know enough about the mind to begin with, it's more helpful to build from scratch (synthesis) and see which elementary pieces the theory needs in order to work
the Turing test and its flaws
the test does not necessarily measure intelligence, but rather the ability to act intelligently
- classic objections: a machine lacks creativity, makes no mistakes, has no ESP (extrasensory perception)
strong and weak AI
weak: act intelligently

strong: actually think like a human
basic idea behind the General Problem Solver: means-ends analysis
goal-oriented: every step it takes is the one that brings it closest to its goal (a "hill-climbing procedure"); see the sketch below
- the program is based on human problem-solving strategies (heuristics)
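A toy sketch of the difference-reduction idea behind means-ends analysis; the numeric state, goal, and operators are invented for illustration.

```python
goal = 10
# Available operators transform the current state.
operators = [lambda s: s + 1, lambda s: s + 3, lambda s: s - 1]

state = 0
while state != goal:
    # Means-ends step: apply whichever operator most reduces the
    # measured difference between the current state and the goal.
    state = min((op(state) for op in operators), key=lambda s: abs(s - goal))

print(state)  # 10
```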
How early AI systems worked (programs, symbols, operations)
designed programs that used symbolic representations: a program could recognize a symbol and then perform an operation on it
problems for early AI
- uncertainty
- symbol grounding problem
- to think like a human, a system would need to handle an effectively infinite amount of data from the world: too many algorithms to write by hand!
how psych fits into cogsci
through psych, we can better understand people's representations. a lab setting lets us isolate variables and measure properties of the mind
how should we investigate cognitive architectures?
we should use the idea of symbolic representation to understand the architecture of the mind
modularity hypothesis
different modules for individual processes (DOMAIN SPECIFIC), such as vision, audition, etc.
modules are INFORMATIONALLY ENCAPSULATED: each is opaque to the others, with no communication between modules

EX: if we see a visual illusion, we'll still see the illusion even if we know it's an illusion
evidence in favor of a "language of thought"
- mind uses propositions
- we can recall semantic information (the gist) but not syntactic detail (the exact wording)
semantic networks and spreading activation
- many propositions linked to one another; the more related one proposition is to another, the fewer links must be followed to reach that proposition's node
- any node can be "activated" and this activation spreads. this can explain "train of thought"
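A minimal sketch of spreading activation over a toy semantic network; the node names, links, decay rate, and step count are all invented.

```python
network = {
    "canary": ["bird", "yellow"],
    "bird":   ["canary", "robin", "animal", "wings"],
    "robin":  ["bird", "red"],
    "animal": ["bird", "dog"],
    "dog":    ["animal"],
}

def spread(source, decay=0.5, steps=3):
    """Activation starts at the source and weakens with each link crossed."""
    activation = {source: 1.0}
    frontier = [source]
    for _ in range(steps):
        nxt = []
        for node in frontier:
            for neighbor in network.get(node, []):
                boost = activation[node] * decay
                if boost > activation.get(neighbor, 0.0):
                    activation[neighbor] = boost
                    nxt.append(neighbor)
        frontier = nxt
    return activation

# Closely related nodes ("bird") end up more active than distant ones
# ("dog"), mirroring faster response times for related concepts.
print(spread("canary"))
```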
evidence for semantic networks (response times)
studies have shown how related concepts can be used to predict response times for making decisions
scripts and schemas
schema: a list of properties in a semantic network that surround a certain concept; schemas work at a general level

script: representations of a sequence of actions associated with a type of activity (have to do with roles, goals of a certain situation)
challenges for view of the mind as a symbolic system
similar to early AI: we can have rules (propositions, scripts, schemas), but our mind is still subject to:
- symbol grounding
- uncertainty
- it's also unclear how this information got there in the first place
mental rotation
an experiment to test mental imagery: response time for judging whether an object had been rotated was proportional to the degree of rotation
key claim in the mental imagery debate
claim: mental imagery has its own system of representations/operations
positions in the mental imagery debate
Kosslyn: experiments (e.g., scanning mental maps) show that response time is linearly related to the distance scanned
--> we have a system of representations specifically for imagery

Pylyshyn: no special format; we decompose images into propositions
how neuroscience fits into cogsci
explores the hardware of the human mind: the mechanisms that do the information processing
different views of the brain (classical, connectionist)
phrenology: different parts of the brain are responsible for different states of mind and concepts (anger, sadness, hope)

Descartes: thought all perceptions converged on the pineal gland, but this turned out to be false

connectionist: ANN
corpus callosum
connects brain hemispheres
diencephalon
thalamus, pituitary, pineal
midbrain
primitive sensory/motor
hindbrain (medulla, pons, cerebellum, spinal cord)
balance, posture
occipital
vision
temporal
auditory, visual info (FFA)
parietal
computation, attention, integration of sensory information
frontal
motor controls, planning, speech, higher-level cognition
Cerebral asymmetries and how the brain connects up to the world (contralateral links)
a right-handed person does more language processing in the left hemisphere than a left-handed person does. the brain has contralateral links: the right hemisphere receives visual input from the left visual field and controls the left side of the body, and vice versa
neuron parts
nucleus: sits in the cell body, which houses the working parts of the cell
dendrite: receives incoming signals and carries them toward the cell body
axon hillock: where signal begins
axon terminal: where signal ends
how a neuron works (change of charge)
a neuron rests at about -70 mV (the inside is negative relative to the outside). chemical signals from other cells shift this charge, and once it passes a threshold the neuron fires, passing the signal on
an action potential triggers the release of neurotransmitters across the synapse, which connects the axon terminal of one neuron to the dendrite of another neuron
learning in neurons (wire together, fire together)
if an action potential is repeatedly triggered across a synapse, that synapse begins to produce a greater response. so if we do something repeatedly, the associated neurons fire together more readily
why do we study neural representation?
understanding the representations in our brain can help us in understanding those in our mind
- it would be helpful to understand how neural representations are ACQUIRED
methods for studying neural representation
- in animals: dye tracing and lesioning
  - put dye into a cell that is firing to see where its signal goes
  - lesion (kill) part of the brain and look at the consequences
- in humans: study brain-damaged patients
neural representation in visual cortex
simple cells: see lines
complex cells: see angled lines
hypercomplex cells: see corners and terminated lines
topographical maps in the brain
neighboring parts of the cortex respond to neighboring values of the same input dimension (e.g., adjacent spots on the retina map to adjacent spots in visual cortex)
scotoma (the blind spot)
where the optic nerve meets the retina there are no photoreceptors, forming a blind spot, but our brain "fills in" the missing visual info
blindsight
the links between the eyes and the brain are damaged, but the eyes work fine. some information from eyes goes to other parts of the brain, so blindsighted people can "see" a little, through subconscious
EEG (electroencephalography) imaging
measures brainwaves (electrical activity recorded at the scalp)
fMRI (functional magnetic resonance imaging)
measures blood flow (a proxy for neural activity)
PET (positron emission tomography)
inject a radioactive tracer and see where it goes
fusiform face area and prosopagnosia
evidence for the modularity hypothesis?
people with damage to the FFA have trouble recognizing faces (prosopagnosia), even their own
neural plasticity
using certain parts of the body more can increase their representation in the brain; the representation shrinks with disuse over time
localist and distributed nodes in neural representations
localist: "grandmother cell" -- fires only for your grandmother

distributed: we can predict movement by averaging the directions neurons prefer, weighting each direction by how strongly that neuron fires (population coding; sketched below)
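A minimal sketch of this population-vector idea; the preferred directions and firing rates are made up for illustration.

```python
import math

# Each tuple is (preferred direction in degrees, firing rate).
population = [(0, 5.0), (90, 20.0), (180, 2.0), (270, 8.0)]

# Sum each neuron's preferred direction as a vector, weighted by its rate.
x = sum(rate * math.cos(math.radians(d)) for d, rate in population)
y = sum(rate * math.sin(math.radians(d)) for d, rate in population)

decoded = math.degrees(math.atan2(y, x)) % 360
print(round(decoded, 1))  # ~76.0: the direction implied by the whole population
```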