• Parallel constraint satisfaction
Process by which a problem is solved by using a parallel algorithm to find the best assignment of values to interconnected aspects of the problem. Example: the Necker cube, whose two 3D interpretations compete until one wins.
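For illustration, here is a minimal Python sketch of such a relaxation network; the two-units-per-interpretation layout, the weights, and the update rate are illustrative assumptions, not a model from the course:

    import numpy as np

    # Two units per 3D interpretation of the Necker cube (A vs. B).
    # Units within an interpretation excite each other (+1); units of
    # rival interpretations inhibit each other (-1). Weights illustrative.
    units = ["A1", "A2", "B1", "B2"]
    W = np.array([[ 0.0,  1.0, -1.0, -1.0],
                  [ 1.0,  0.0, -1.0, -1.0],
                  [-1.0, -1.0,  0.0,  1.0],
                  [-1.0, -1.0,  1.0,  0.0]])
    a = np.array([0.51, 0.50, 0.50, 0.49])   # tiny initial bias toward A

    for _ in range(30):                      # all units updated in parallel
        a = np.clip(a + 0.1 * (W @ a), 0.0, 1.0)

    print(dict(zip(units, a.round(2))))      # A units -> 1.0, B units -> 0.0

Repeated parallel updates let mutually consistent values reinforce each other until one interpretation wins and its rival is suppressed.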
Connectionist models: sparse
Information is represented by a small number of active neurons within a large neuronal population. Example: representations in visual cortex processing.
• Neuroscience basics
o Neuron, synapses, action potential...
• Connectionist models
o Distributed, sparse, local representations
Two types of connectionist networks
ANNs (artificial neural networks), or PDP (parallel distributed processing) systems.
Connectionist models: local representations
The grandmother neuron: one particular, dedicated neuron that fires to represent a given concept or result.
Connectionist models: distributed
Multiple neurons working simultaneously and in conjunction to achieve a result. Example: direction of movement encoded across a population of neurons.
• Connectionism
A style of modeling based upon networks of interconnected simple processing units.
• Connectionism o History
Ramón y Cajal showed that neurons are physically separated from other neurons. In 1943, McCulloch & Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity", arguing that neurons with a binary threshold activation function were analogous to first-order logic sentences. This line of work led to perceptrons.
• Connectionism o Single-layer perceptron
Cannot learn the XOR function, because XOR is not linearly separable (see the sketch below).
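This is easy to verify by running the perceptron learning rule on both AND and XOR; a minimal sketch assuming a step activation and a bias input:

    import numpy as np

    def train_perceptron(X, targets, epochs=50, lr=0.1):
        """Rosenblatt's rule: w <- w + lr * (target - output) * x."""
        Xb = np.hstack([X, np.ones((len(X), 1))])    # append bias input
        w = np.zeros(Xb.shape[1])
        for _ in range(epochs):
            for x, t in zip(Xb, targets):
                out = 1 if w @ x > 0 else 0          # step activation
                w += lr * (t - out) * x
        return (Xb @ w > 0).astype(int)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    print(train_perceptron(X, np.array([0, 0, 0, 1])))  # AND: learns [0 0 0 1]
    print(train_perceptron(X, np.array([0, 1, 1, 0])))  # XOR: stays wrong

AND converges because a single line separates its classes; for XOR no such line exists, so the weights keep cycling and the predictions never match the targets.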
• Connectionism o Basic elements
1. A set of processing units; 2. A state of activation; 3. An output function; 4. A connectivity pattern among units; 5. An activation rule; 6. A learning rule; 7. An environment.
• Connectionism o Activation functions
Step function: on or off. Sigmoidal: gradual.
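A minimal sketch of the two functions in Python (the sample inputs are arbitrary):

    import numpy as np

    def step(x):
        """Binary threshold: the unit is either off (0) or on (1)."""
        return np.where(x > 0, 1.0, 0.0)

    def sigmoid(x):
        """Sigmoidal: a gradual, differentiable transition from 0 to 1."""
        return 1.0 / (1.0 + np.exp(-x))

    xs = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    print(step(xs))     # [0. 0. 0. 1. 1.]
    print(sigmoid(xs))  # [0.12 0.38 0.5  0.62 0.88]

The sigmoid's differentiability is what later makes gradient-descent learning (backpropagation) possible.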
• Connectionism o Linear separability
ANNs learn decision boundaries; a single-layer perceptron can only separate classes with a straight line (hyperplane), i.e., classes that are linearly separable.
• Connectionism o Multi-layer perceptron
Has a hidden layer, which resolves the XOR problem: the network is no longer limited to a single linear decision boundary (see the sketch below).
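One hidden layer already suffices: XOR(x1, x2) = AND(OR(x1, x2), NAND(x1, x2)). A sketch with hand-picked (not learned) threshold units; these particular weights are just one illustrative choice:

    import numpy as np

    step = lambda x: (x > 0).astype(int)

    # Hidden layer: h1 = OR(x1, x2), h2 = NAND(x1, x2); output = AND(h1, h2)
    W_hidden = np.array([[ 1,  1],      # weights into h1
                         [-1, -1]])     # weights into h2
    b_hidden = np.array([-0.5, 1.5])
    w_out, b_out = np.array([1, 1]), -1.5

    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        h = step(W_hidden @ np.array(x) + b_hidden)
        y = step(w_out @ h + b_out)
        print(x, "->", y)               # prints 0, 1, 1, 0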
• Connectionism o ANN topologies
Three-layer, feed-forward, and recurrent.
• Connectionism Learning paradigms- supervised
Compute desired outputs from given inputs. Criterion: similarity between desired and computed outputs. Applications: classification and prediction, e.g., reading handwritten addresses for the post office (the net predicts what a letter is based on its training experience). Parallel to concept learning in humans.
• Connectionism o Learning paradigms- reinforcement
Select actions to maximize rewards; requires a task and a policy. Criterion: reward accumulated over time. Applications: mobile robots, making plans.
• Connectionism Learning paradigms- unsupervised
Find structure in given inputs. Criterion: ability to predict where new inputs are likely to fall. Applications: clustering, outlier detection. Like supervised learning, but without given target outputs to train with.
• Perceptron learning rule
Adjust each weight in proportion to the error: w_i ← w_i + η(t − y)x_i, where t is the target output, y the actual output, and η the learning rate (Rosenblatt, 1957); this is the rule used in the XOR sketch above.
• Backpropagation
Supervised learning algorithm; "backwards propagation of errors": the errors propagate backwards from the output nodes to the inner nodes. Used in feed-forward networks. Requires a differentiable transfer function. Algorithm: gradient descent, i.e., finding the weights that minimize the error.
o Gradient descent
1. Present a training sample to the neural network. 2. Compare the network's output to the desired output and calculate the error in each output neuron. 3. For each neuron, calculate what the output should have been and a scaling factor for how much lower or higher the output must be adjusted to match the desired output; this is the local error. 4. Adjust the weights of each neuron to lower the local error. 5. Assign "blame" for the local error to the neurons at the previous level, giving greater responsibility to neurons connected by stronger weights. 6. Repeat the steps above on the neurons at the previous level, using each one's "blame" as its error. This is how backpropagation descends toward a minimum of the error.
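A minimal numpy sketch of these steps on the XOR problem; the layer sizes, learning rate, iteration count, and bias handling are arbitrary illustrative choices:

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda x: 1 / (1 + np.exp(-x))
    add_bias = lambda A: np.hstack([A, np.ones((len(A), 1))])

    X = add_bias(np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float))
    T = np.array([[0], [1], [1], [0]], float)

    W1 = rng.normal(0, 1, (3, 4))       # input (+bias) -> 4 hidden units
    W2 = rng.normal(0, 1, (5, 1))       # hidden (+bias) -> 1 output unit
    lr = 0.5
    for _ in range(10000):
        H = add_bias(sigmoid(X @ W1))   # forward pass
        Y = sigmoid(H @ W2)
        d_out = (Y - T) * Y * (1 - Y)   # local error at the output node
        d_hid = (d_out @ W2[:-1].T) * H[:, :-1] * (1 - H[:, :-1])  # "blame"
        W2 -= lr * H.T @ d_out          # gradient-descent weight updates
        W1 -= lr * X.T @ d_hid
    print(Y.round(2))                   # typically approaches [[0] [1] [1] [0]]

Each iteration computes the output error, propagates "blame" backwards through the weights, and takes one gradient-descent step downhill on the error surface.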
• Hebbian learning
"When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." (Hebb). Unsupervised: the synaptic strength (weight) is increased if both source and target neurons are active at the same time; otherwise the weight is decreased.
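A toy illustration of the rule (the binary activities and the learning rate are illustrative):

    def hebbian_update(w, pre, post, lr=0.1):
        """Strengthen the synapse when the pre- and post-synaptic units
        fire together; weaken it otherwise (1 = active, 0 = silent)."""
        if pre == 1 and post == 1:
            return w + lr    # "cells that fire together wire together"
        return w - lr

    w = 0.5
    for pre, post in [(1, 1), (1, 1), (1, 0), (0, 1), (1, 1)]:
        w = hebbian_update(w, pre, post)
    print(round(w, 2))       # 0.6: strengthened more often than weakened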
• Hopfield network
Performs like an associative (content-addressable) memory. All connections are symmetric (w_ij = w_ji); binary threshold units. Convergence of the network dynamics is guaranteed to a local minimum, i.e., the network will converge to a "remembered" state even if it is given only part of that state (e.g., handwriting). Limitations: if stored patterns are too close, the network cannot reconstruct them; the structure is rigid (only one layer, and the number of neurons must equal the number of inputs).
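A minimal sketch of storage and recall, assuming ±1 binary units and two hand-picked patterns:

    import numpy as np

    patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],   # stored memories
                         [1, 1, 1, 1, -1, -1, -1, -1]])

    # Hebbian storage: symmetric weights (w_ij = w_ji), no self-connections
    W = sum(np.outer(p, p) for p in patterns)
    np.fill_diagonal(W, 0)

    # Content-addressable recall: start from a corrupted copy of pattern 0
    state = np.array([-1, -1, 1, 1, 1, -1, 1, -1])
    for _ in range(5):                      # dynamics converge to a minimum
        for i in range(len(state)):         # binary threshold updates
            state[i] = 1 if W[i] @ state >= 0 else -1
    print(state)                            # recovers [ 1 -1  1 -1  1 -1  1 -1]

Given only part of a stored state, the dynamics slide into the nearest "remembered" pattern, which is exactly the content-addressable behavior described above.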
• Graceful degradation
Aka fault tolerance: the property that enables a system to continue operating properly when some of its components fail. Each unit participates in the storage of many patterns, and each pattern involves many units, so the loss of a few components degrades but does not lose the information.
• Neurological plausibility
"Natural neural networks –Billions of neurons, trillions of connections –Each neuron has dozens of neurotransmitters • Brain as electro-chemical machine–Pre-and post synaptic plasticity –Single neurons do not have local representations • e.g. conceptual/propositional interpretations –Except for grandmother neuron!–Synapses are one-way (vs. symmetric links in ANNs) –Connections with other neurons are either excitatoryor inhibitory, but not a mixture –Networks are considerably larger than current ANNs"
• Artificial Intelligence
Artificial Intelligence: "making a machine behave in ways that would be called intelligent if a human were so behaving" (John McCarthy, 1955). Robot: from a Slavic word for "worker", used for the first time in a theatre play by Karel Čapek, 1921. Robot Institute of America (1985): a programmable multifunction manipulator designed to move material, parts, or specialized devices through variable programmed motions for the performance of a variety of tasks. GOFAI ("Good Old-Fashioned AI"): symbolic reasoning that ignores connections to the world. Robotics was a small branch of AI: mostly assembly, few mobile robots (Shakey, CART).
Turing test
Paper: "Computing Machinery and Intelligence" (Alan Turing, 1950): a test of a machine's capability to perform human-like conversation. It goes like this: a human judge engages in a natural-language conversation with two other parties, one a human and the other a machine. If the judge cannot reliably tell which is which, then the machine is said to pass the test. It is assumed that both the human and the machine try to appear human.
• Classic AI
The mind needs real-world interaction. Examples of problems with classic AI: computer vision is successful in constrained environments (e.g., factories with constant lighting conditions, geometry, etc.), but in the real world these conditions never hold (objects are partially occluded, the distance of objects from the eyes changes, lighting varies, etc.). Locomotion and object manipulation: humans and animals perform them naturally and elegantly!
• Embodiment
"Of all of these fields, the learning of languages would be the most impressive, since it is the most human of these activities. This field, however, seems to depend rather too much on the sense organs and locomotion to be feasible." Embodied metaphor: affection as warmth. Embodied robotics; Feldman's event structure.
• Applications
Classical approach to AI: many applications (clever algorithms, expert systems, etc.). Embodied AI: more limited (educational and entertainment areas).
• Simulations vs. real robots
Simulations cannot predict all variables and therefore cannot completely and accurately represent the real world.
• Classic paradigm (horizontal decomposition)
Sense-think-act: "A chain is only as strong as its weakest link." The cognitive process is not dependent on the hardware (i.e., actuators); decisions are mediated by knowledge; knowledge should be explicitly present.
o Sense-think-act, examples
Shakey, assembly robots
• Brooks argument (Intelligence without representation)
Brooks argued that human intelligence is too complex and poorly understood: how do we know what the right subpieces of the problem are? Approach: create AI by building up in complexity, having complete systems at each step, and testing the systems in the real world (i.e., with sensing and acting). Conclusion: it is better to use the world as its own model. Hypothesis: representation is the wrong unit of abstraction.
• Behavior-based paradigm
Genghis: an insect-like robot that senses its environment and adapts without explicit programming. Situatedness refers to the robot's ability to sense its current surroundings and avoid the use of abstract representations. Embodiment insists that robots be physical creatures and thus experience the world directly rather than through simulation: "Simulations are doomed to succeed!" (R. A. Brooks).
• Subsumption architecture o Layers
The subsumption architecture is built in layers of behaviors. Each layer gives the system a set of pre-wired behaviors; higher levels build upon (subsume) lower levels to create more complex behaviors. All the layers operate asynchronously, and the behavior of the system as a whole is the result of many interacting simple behaviors. Example: Attila (MIT, 1989). A sketch of the layering idea follows.
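A rough Python sketch of the priority idea only; real layers run asynchronously as AFSMs and suppress each other at the wire level, and the behavior names here are invented for illustration:

    # Lower layers are always available; a higher layer subsumes (overrides)
    # the layer below it whenever its own trigger condition holds.

    def wander(sensors):                    # layer 0: default behavior
        return "move-forward"

    def avoid(sensors):                     # layer 1: subsumes wander
        return "turn-away" if sensors.get("obstacle") else None

    def seek_can(sensors):                  # layer 2: subsumes avoid, wander
        return "approach-can" if sensors.get("can-in-view") else None

    def act(sensors, layers=(seek_can, avoid, wander)):
        for layer in layers:                # highest triggered layer wins
            command = layer(sensors)
            if command is not None:
                return command

    print(act({"obstacle": True}))                        # turn-away
    print(act({"can-in-view": True, "obstacle": False}))  # approach-can
    print(act({}))                                        # move-forward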
• Subsumption architecture o Augmented finite state machines
Each behavior is implemented as an augmented finite state machine (AFSM): a finite state machine augmented with timers and registers. Layers are wired together, and a higher layer can suppress the inputs or inhibit the outputs of the AFSMs below it.
• Examples in robot implementations: Herbert, Genghis (Attila)
Herbert collects soda cans; Genghis is insect-like, walks, and follows humans using heat sensors.
• Discussion o Who has the representations?
The observer: on Brooks's view, representations are not in the robot; they appear only in the observer's eye/mind (see the next card).
• Discussion o What subsumption architecture is not
It is not connectionism/ANNs: connectionists expect representations to arise from networks, whereas here representations are not necessary (they appear in the observer's eye/mind), and there is no biological correspondence between AFSMs and neurons (network nodes). It is not production rules: there is an analogy with IF-THEN rules, but the layers run in parallel and there is no matching against rules in a database. It is not a blackboard: there is an analogy to a blackboard architecture (a metaphor for a group of experts gathering around a blackboard to collaboratively solve a complex problem), but here the processes are already "hardwired" to the correct place (i.e., no flexibility in how knowledge is gathered). It is not German (Heidegger) philosophy: the behavior-based approach has similarities, but the subsumption architecture is based on engineering principles.
• Discussion o Limits to growth
How many layers can be built in the subsumption architecture before the interactions between layers become too complex to continue? Not many (<10): a scalability problem! How complex can the behaviors be that are developed without the aid of central representations? From Herbert (soda-can collector) to Cog. Can higher-level functions such as learning occur in these fixed-topology networks of simple finite state machines? Nature offers examples of "learning by instinct" (pre-wired): honey bees distinguish among flowers and learn routes to and from the hive; in butterflies the total amount of information learned is constant. Current technology (~1987) was not flexible enough.
• Collective intelligence o Swarm intelligence: Stigmergy
Stigmergy: a method of indirect communication in a self-organizing emergent system, in which the individual parts communicate with one another by modifying their local environment, e.g., ants dropping pheromones; termite mounds. A toy sketch follows.
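A toy sketch of pheromone-mediated trail formation in a 1-D world; the grid size, deposit amount, and evaporation rate are arbitrary choices:

    import random

    # Ants communicate only by depositing pheromone in the environment,
    # never directly with each other.
    pheromone = [0.0] * 10
    food_at = 7

    for ant in range(100):
        pos = 0                             # each ant starts at the nest
        while pos != food_at:
            right = pheromone[pos + 1] if pos + 1 < len(pheromone) else -1.0
            left = pheromone[pos - 1] if pos - 1 >= 0 else -1.0
            # Prefer the neighbor with more pheromone, plus random exploration
            pos += 1 if right + random.random() >= left + random.random() else -1
            pheromone[pos] += 1.0           # modify the local environment
        pheromone = [0.95 * p for p in pheromone]   # evaporation

    print([round(p) for p in pheromone])    # a trail emerges toward the food

No ant knows the route; the trail is an emergent property of many individuals reading and writing the shared environment.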
• Collective intelligence summary
No direct communication between individuals; information is communicated through the state of, or changes in, the local environment; each individual decides its action based on a simple algorithm; a critical number of individuals is needed for intelligent behavior to arise.
• Body and World: different views from Psychology
Gibson: we perceive in order to operate on the environment. Perception is not about inferences; it is "designed" for action, and information is directly conveyed to the brain without computations on representations. How? Affordances: Gibson called the perceivable possibilities for action affordances. An affordance is an action that an individual can potentially perform in its environment, e.g., surfaces for walking, handles for pulling, space for navigation, tools for manipulating. Main idea: our whole evolution has been geared toward perceiving useful possibilities for action.
• Body and World: different views from Psychology, Philosophy, Linguistics...
Being-in-the-world argument: "We function in the world because we are part of it" (Heidegger, 1962), i.e., there is no division between the subject and the world. Dreyfus (1991) pushed this further: CRUM is hopeless because our intelligence is inherently non-representational. Smith (1991) proposed "embedded computing": emphasis on interaction with the world rather than internal processing.
• Body and World: different views from Psychology, Philosophy, Linguistics...
"• Situated Action –CRUM neglected the role that situations and contextplay in human problem solving & learning –e.g. Antonio Torralba’scontext challenge • How far can you go without running an object detector?
• Body and World: different views from Psychology, Philosophy, Linguistics...
Situated action (Suchman, Lave & Wenger): problem solving in realistic contexts depends on direct interaction with the world and other people, e.g., using a complex machine like a computer by learning how to interact with it, not by having an abstract representation of it. Main point: CRUM is incapable of appreciating the contextual ways in which people deal with the world.
o Affordances, intentionality, situated action... everything covered in slides 16-26
• Thagard’s responses
Denial (of the body and the world): only for CRUM fanatics. Expand CRUM: pointless? E.g., "house on top of the hill": on(house, hill) does not capture all the knowledge given by the perceptual experience. Supplement CRUM: combine the non-representational operations of brain and body with the computational procedures that seem crucial for high-level cognition. Abandon CRUM: Thagard's way out... dynamics.
• Computer models, strong AI argument, functionalism (Chomsky), brain and language (Feldman)
Functionalism: mental states are defined by their causal roles; a type of materialism. Chomsky brought down behaviorism with universal grammar.
• Framing, neuron binding, metaphors and the brain (Lakoff)
• Intentionality, Chinese room argument, computers don’t have semantics... (Searle)
Perceptron history
Initial excitement: Rosenblatt (1957), the simplest kind of feed-forward neural network: a linear classifier; strong interest in modeling how neural networks might contribute to thought. Dark winter: Minsky & Papert (1969) showed that the single-layer perceptron cannot learn the XOR function; AI and psychology shifted to rule- and concept-based systems. 1980s: ANN renaissance; connectionism is born (Hinton & Anderson, 1981; Rumelhart & McClelland, 1986): the multi-layer perceptron.
Monism (contrast to dualism)
Includes behaviorism, materialism, and idealism.
Marr's 3 levels of analysis
Computational (goal), algorithmic (strategy), implementational.