
70 Cards in this Set

  • Front
  • Back

Connectionism

A way of doing artificial intelligence that is based (loosely) on what we know about brains.




(facts about the brain are important in helping us understand thinking)

True or False: Connectionists reject the claim that thought is medium independent.

True

Medium Independent (review)

Computation is not constrained by what is running the computation, nor by where it is being run.




example : MS Word is the same program whether it runs on a Dell or a Mac.

Nodes

simple "elements" or "units" that communicate with each other via connections




Analogous to Neurons



Connections

Elements of Connectionist Networks that connect nodes to other nodes.




Capable of carrying out only very simple signals.




Analogous to Synapses

Weights

Determines the strength of a connection.




The weight of a connection between two neurons determines the degree to which one unit influences another.




Can either be excitatory or inhibitory

Layers

Units are normally arranged into layers. There is always an input layer and an output layer, and usually one or more hidden layers.

Input Layer

Units in the input layer take information from the environment, or other parts of the brain.

Output Layer

Units in the output layer send information to other parts of the brain, or the environment.

Hidden Layer

Units in the hidden layer are isolated from the environment. They are connected only to other parts of the neural network.

Functions of layers

Single units and connections in neural networks have very limited abilities.




Units transmit only simple numerical values along connections.




Each unit takes a group of numbers as input and transforms them into a single output number, which is transmitted to other units.




The transformations are limited to very simple mathematical operations: usually just summations.

Transfer Functions

Each unit in a neural network has a transfer function.




It determines how that unit's value (or activation) is updated.




Usually, the transfer function multiplies each weight coming into the unit by the activation of the (input) unit that the weight projects from, then sums the results.
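A minimal sketch of such a transfer function in Python (the unit activations, weights, and the sigmoid squashing step are illustrative assumptions, not a specific network from the course):

```python
import math

def unit_output(activations, weights):
    # Multiply each incoming weight by the activation of the unit
    # it projects from, then sum the results.
    net_input = sum(a * w for a, w in zip(activations, weights))
    # Squash the sum into (0, 1); a sigmoid is a common (but not the
    # only) choice of transfer function.
    return 1 / (1 + math.exp(-net_input))

incoming = [0.9, 0.2, 0.7]   # activations of three input units (made up)
weights = [0.5, -1.0, 0.8]   # a negative weight is an inhibitory connection
print(unit_output(incoming, weights))   # a partial activation between 0 and 1
```

Note the single negative weight: that connection inhibits the unit, while the positive weights excite it.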

Distributed Representations

Information is distributed across the entire network rather than being located in one specific place or address.




This means representations in neural networks are often thought to be distributed in that groups of units are responsible for individual representations.




This is a "pattern of activation" that exists across several units simultaneously.



Parallel Processing

Computation occurs in parallel across large numbers of units rather than in the serial fashion of traditional computer architectures.




Individual units are not responsible for solving problems. They solve them collectively.

Continuous vs. Digital Processing

They are continuous in that the values of connection weights and activations have several significant decimal places, allowing partial activations. (Analogous to Statistics)




This differs from GOFAI systems, which are digital (all values are either yes/no, on/off, 0/1).

NETtalk

Sejnowski and Rosenberg (1987), “Parallel networks that learn to pronounce English text”




A 3-layer neural network that learns to read English text aloud.


-It has 7 sets of 29 input units (203 total), with each set representing a letter; at any given moment, the input is a window of 7 letters.


-It has 80 hidden units.


-It has 26 output units, each representing a phoneme.


-There are 18,629 weighted connections in total.

Important Things about NETtalk (2)

1. Rather than being explicitly programmed, it was taught.




2. It learned a general skill.

Learning Algorithms

An algorithm that changes the weights of connections so that the network is more likely to produce the correct answer (that is, the correct pattern of activation at the output) the next time it encounters a given stimulus.
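As an illustrative sketch (not the specific algorithm used by any network discussed here), the classic delta rule updates each weight in proportion to the output error and the input flowing through that weight:

```python
def delta_rule_update(weights, inputs, target, actual, lr=0.1):
    # Nudge each weight in proportion to the error and to the input
    # flowing through it, so the same stimulus is more likely to
    # produce the target output next time.
    error = target - actual
    return [w + lr * error * x for w, x in zip(weights, inputs)]

# Made-up weights and activations for illustration.
weights = [0.2, -0.4]
inputs = [1.0, 0.5]
updated = delta_rule_update(weights, inputs, target=1.0, actual=0.3)
print(updated)   # each weight has moved toward producing the target
```

Repeating this update over many stimulus-target pairs is what "teaching" a network amounts to.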

Nativism and Connectionist Networks

Nativism holds that most of our thinking is innate.




Because connectionist networks learn by example, their successes suggest that there is more information and structure in the environment than had previously been realized. Thus such abilities need not be innate.

U-Shaped Learning

The natural tendency for children to exhibit a learning pattern of good early performance, followed by a period of poorer performance, which eventually improves again (tracing a U-shaped curve).




ex: a baby kicks at a couple of months, then stops, then starts again

Readiness Effects

For significant periods of development, behavior changes very slowly and children are unable to learn certain tasks.




At other points, children display heightened sensitivity to examples and learn very quickly. (Analogous to a critical period.)

Rumelhart and McClelland Past-tense Network

A two-layer connectionist network (that had input and output layers, but no hidden layer) to model learning of the past-tense.




The nodes of the input layer were a phonological representation of present tense verbs.




The nodes of the output layer were a phonological (speech sounds) representation of the past tense.




The network behaved correctly when it produced the correct past-tense given the present tense at the input layer.




They trained their network on a selection of 420 verbs with regular and irregular past-tense forms. The network learned to produce the correct past tense for nearly all the verbs (regular and irregular) it was trained on.




It showed “U-shaped” performance on irregular verbs.




It was able to produce both regular and irregular past-tense verbs with just one network (contradicting the claim that separate processes are needed).

Poverty of the Stimulus

In the context of language, this is the claim that the sentences children hear lack the information needed for them to learn a language.




In other words, there must be some innate contribution, because hearing language alone does not provide enough information for learning.

Elman Nets

Simple recurrent networks (Elman Net) that are one of the main developments of “second-wave connectionism”.




Distinguished by the presence of context nodes that provide a form of memory.




Elman used a simple recurrent network to learn to predict the next word in sentences.
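The context-node memory can be sketched in a few lines (the sizes, weights, and identity activation below are illustrative assumptions, not Elman's actual setup): after each step, the hidden activations are copied into context nodes and fed back as extra input on the next step.

```python
def elman_step(inp, context, w_in, w_ctx):
    # Each hidden unit sums its weighted input plus a weighted copy
    # of the previous step's hidden activations (the context nodes).
    hidden = [sum(i * w for i, w in zip(inp, w_in[j])) +
              sum(c * w for c, w in zip(context, w_ctx[j]))
              for j in range(len(w_in))]
    # The new context is simply a copy of the new hidden layer.
    return hidden, list(hidden)

w_in = [[0.5], [0.3]]               # 1 input unit -> 2 hidden units
w_ctx = [[0.1, 0.0], [0.0, 0.1]]    # 2 context nodes -> 2 hidden units
context = [0.0, 0.0]                # memory starts empty
for word_code in [1.0, 1.0]:        # present the same input twice
    hidden, context = elman_step([word_code], context, w_in, w_ctx)
print(hidden)   # differs from the first step: the net "remembers" the past
```

The same input produces different hidden activations on the second presentation, which is exactly the form of memory that lets the network condition its predictions on preceding words.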

Elman Nets and Grammar

To make such predictions, Elman's net had to learn not only the categories noun, verb, adverb, etc., but also sub-categories such as edible thing, animate thing, etc.




This is in direct contradiction to “poverty of the stimulus” arguments.




Elman’s networks show that there is enough information in the sentences we hear to learn grammatical categories. That is, the stimulus isn’t impoverished at all. So, grammatical abilities need not be innate.



Biological Plausibility and Connectionism

Connectionist networks such as these have been criticized because they are biologically implausible.


Connectionist Networks abstract too much from the biological details.


-CN tend to have a number of nodes that all behave the same way


-The nodes are arranged into discrete layers.


-Most processing goes in one direction (from input to output).




None of this is true of real brains.




This is typically true of "first-wave" connectionism, but second- and third-wave networks (like Elman nets) have managed to improve on biological plausibility.





Noise and Connectionist Networks

The addition of variability and differentiated connections, which allows for more biological plausibility.




Accomplished through multiple mechanisms:


-More feedback to individual nodes


-Non-discrete timing


-Non-distinct layers


-Addition of noise to nodes




*Noise refers to the tendency of nodes to pass values different from their activation to other nodes. Example: if the summed inputs to a node = 1, a noise-free node would pass 1 to the nodes it is connected to; a noisy node would pass 1 ± something, where the ± something is the noise (as in sensory inputs).

GASnets

Networks in which neurotransmitter release is simulated.




They model the freely diffusing gases, especially nitric oxide, found in real neuronal networks.




Neurotransmitters work essentially by releasing chemicals (neurons signal both electrically and chemically).

Genetic Algorithms

The goal in inventing them was to study the phenomenon of adaptation as it actually occurs in nature.




They have been used to develop solutions to many real world engineering problems, as well as to study biological evolution. (The solutions that they produce are similar to those that evolution finds, and very different from those that human engineers come up with)




Optimization technique:


-Set a goal


-Let the program make a random schedule


-Allow the best ones to make “babies” (varied iterations of those schedules)


-Repeat this process until you arrive at a good schedule (the result/achievement of the goal)
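The steps above can be sketched as a toy genetic algorithm. Everything here is an illustrative assumption: the "schedules" are bit strings, the goal is all ones, and the "babies" are mutated copies of the fittest candidates.

```python
import random

random.seed(0)                      # deterministic for the example
GOAL_LEN = 10

def fitness(schedule):              # goal (assumed here): all ones
    return sum(schedule)

def baby(parent, rate=0.1):         # a copy of the parent with random mutations
    return [bit ^ (random.random() < rate) for bit in parent]

# Step 1-2: start from random "schedules" (bit strings).
population = [[random.randint(0, 1) for _ in range(GOAL_LEN)]
              for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if fitness(best) == GOAL_LEN:   # step 4: goal reached, a "good schedule"
        break
    # Step 3: the fittest half survives and makes babies to replace the rest.
    population = population[:10] + [baby(random.choice(population[:10]))
                                    for _ in range(10)]
print(fitness(best))
```

The solution emerges through variation and selection rather than explicit design, which is the point of the comparison to biological evolution.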







Compositionality

A necessary condition for systematicity


This is also simpler than systematicity (quiz question)




-The idea that objects have compositionality when they are made up of atomic parts arranged in particular ways.




*example: the English language is compositional.


"John loves Mary" is made up of 3 parts: John, loves, and Mary.

Systematicity

*Some compositional systems are ALSO systematic.

A system is systematic if it can produce every legal combination of its atoms.

For example, suppose I can form the sentence "John loves Mary". Then to be systematic, I must also be able to form the sentence "Mary loves John".
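A tiny sketch of the idea, using an assumed toy "language" of two names and one verb: a systematic system produces every legal combination of its atoms.

```python
from itertools import permutations

# Atoms of a toy compositional "language" (illustrative choices).
names, verbs = ["John", "Mary"], ["loves"]

# A systematic system can produce every legal combination of its atoms.
sentences = {f"{a} {v} {b}"
             for a, b in permutations(names, 2)
             for v in verbs}
print(sorted(sentences))   # both orderings are formed
```

Compositionality alone only requires that sentences be built from parts; systematicity additionally requires that all the legal rearrangements of those parts be available.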


The Systematicity Argument Against Connectionism

Human thought is systematic.




Connectionist networks are not systematic.




Therefore, connectionist networks are not good models of thought.




*They aren’t systematic because representations in connectionist networks are not compositional.




*This is because representations in connectionist networks are distributed (we could not change the representation of "Mary is a jet" into a representation of "Mary is a shark" without changing every activation).

Connectionist Replies to the Systematicity Argument

Connectionist theorists have responded to this argument in two ways:


-First, they have argued that connectionist networks can be systematic.


-Second, they have argued that thought is not systematic. Only language is systematic, and most thinking (including all non-human thinking) is not done in language.

Vision as Inverse Optics

The traditional view of vision


*Images go in upside down and distorted, and the job of vision is to undo the distortion




Based on the idea that different objects presented at varying angles, sizes, and distances can project the exact same retinal image, making them appear as a single object. The job of the brain in vision is then to disambiguate among these different objects.

Poverty of the Stimulus (Vision)

Illustrated through the Ames Room




It was designed to fool your visual processes. It exemplifies the idea that the restricted visual stimulus (the peephole) does not provide enough information to know the actual shape of the room, or that the three people in the room are actually about the same size even though they appear very different.

The Ant on the Beach

The terrain (not the ant) is what determines the complicated route




Supplemental wording: The ant does not “plan” its movements, it simply interacts with the environment

Perception is Direct

The base idea that perception is not a result of manipulation of representations








*For example: 3D perception is not constructed from 2D images

Perception is for Action

We see in order to act




We perceive in order to do things

Perception is of Affordances

Affordances are opportunities for behavior (i.e. action).




Basically, affordances are actions that can be performed on objects

Ecological Ontology

Ecological world ≠ physical world


Ecological world is mesoscopic


Ecological world is animal-dependent


Ecological world is Aristotelian




*The terrestrial environment is better described in terms of a medium, substances, and the surfaces that separate them.

The Medium

A set of places I can put my sensory organs (place I can move around in)




Insubstantial, allows transmission of light, mechanical vibrations, chemical diffusion, etc.




A set of points of observation connected by paths of locomotion




Differs for different animals (i.e. human=air, fish=water)




**We typically don't experience the medium itself

Substances

The stuff in the environment

Surfaces

Interfaces of substances with the medium




Because of the laws of physics, optics, and chemistry, light reflected off of surfaces specifies those substances. (Similarly for other energies and modalities.)


**This specification means that the light carries information about the substances in the environment.

Information

Information is ubiquitous in the environment.


*Light converges at all the points where an observer could be.




Information is (more or less) complete.


*Light converging at each point of observation has reflected off all the (non-obstructed) surfaces.




Light can contain information about affordances.




Information is both propriospecific and exterospecific.




**Because animal bodies reflect light, information is always about the environment as it is inhabited by a specific animal.




Also loosely based on the theory that differing wavelengths of light (i.e. colors) are what carry the information about the objects the light reflects off of.

Information vs. Stimulation

Mere Stimulation: The light arriving at the point of observation has been scattered. It stimulates receptors but carries no information about surfaces.




-Foggy room: to understand vision, we can't start with retinal images; we have to start with structures in the environment




-Similar to the wavelength theory above

Information and Movement



* Movement generates lots of information both about the perceiver and about the environment.




*Moving eyes, head, body in exploratory behavior causes changes to what is visible, deforms visible surfaces, etc.




*Locomotion generates information about the direction of movement.




*Optic flow is centrifugal in the direction of movement, and centripetal in the other direction.

Affordances and Direct Perception

According to ecological psychologists: what is perceived is perceived directly.




*Direct perception requires that organisms are surrounded by a world that is inherently significant to them.


*Affordances are directly meaningful to animals--they provide opportunity for particular kinds of behavior.


Meaning: An animal does not first perceive the physical environment and then deduce opportunities for action based on the properties of that environment. Instead, the animal perceives affordances directly.

Affordances and mutuality

Affordances are only intelligible as parts of animal-environment systems.




A niche (the set of affordances for a particular animal) makes sense of the mutuality of animal and environment.




An animal’s abilities imply an ecological niche.


An ecological niche also implies an animal.

Higher-order information

-Not single units




-Optical variable tau: the rate at which an object's image expands to fill the visual field, specifying time-to-contact




-Able to see relationships between objects without the need for processing




"Taller than"

Warren experiment

Warren (1984) shows that information is available in higher-order variables for perceiving affordances.


(*This information is available in the relation between body-scale and aspects of the environment.)


(These numbers are unit-less measures of relationships between variables that are expressed in the same units.)


Warren's method was:


1. Subjects are divided into two groups by height. Leg lengths are measured.


2. Subjects are presented with a series of “steps” of varying riser heights.


3. Asked “Can you step onto this?”


Warren's Results:


The numbers for both groups were identical.


Warren's point:


People were perceiving the relation between their body size and some aspect of the world.




***This is the first experimental evidence that people perceive affordances.
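A worked sketch of the body-scaled, unit-less variable. The leg lengths and riser heights below are made-up illustration values; Warren (1984) reported that the perceived "climbable" boundary falls near the same ratio (about 0.88) for both the short and the tall group.

```python
def climbable_ratio(riser_height_cm, leg_length_cm):
    # Both quantities share the same units, so the ratio is unit-less.
    return riser_height_cm / leg_length_cm

# Hypothetical leg lengths and boundary riser heights for illustration.
short_leg, tall_leg = 74.0, 86.0
print(round(climbable_ratio(65.1, short_leg), 2))   # short group's boundary
print(round(climbable_ratio(75.7, tall_leg), 2))    # tall group's boundary
```

Both calls print the same ratio: different absolute riser heights, but the same relation between body size and the world.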

Affordances and the size-weight illusion

The size-weight illusion: When two spheres (objects) of identical weight but differing size are held, the smaller diameter object is perceived to be heavier.




Moveability is a function of mass, ellipsoid symmetry and ellipsoid volume.




Moveability is an affordance (what we can do)




**The size-weight illusion occurs because subjects are actually basing their judgements on perception of moveability, not weight.




-When presented with a stimulus or affordance they can't understand well, people tend to explain it with affordances they do understand.





Rodney Brooks

His focus on robotics (late 1980s, iRobot) is founded on two claims:


1. Perception and action constitute the greater part of intelligence.


2. “Higher cognition” cannot be understood without first understanding perception and action.




a. Roomba




b. Another important point of note is that this began a swing of cognitive science (specifically robotics) research that focused explicitly on the simple aspects of people and other animals that would need to be understood before we could ever comprehend how humans think.

Cricket Phonotaxis

The understanding of this is based on the meaning of the suffix ‘-taxis’, which essentially means directed movement




In crickets the particular ‘-taxis’ represented is ‘phono’-taxis, which together essentially means sound-directed movement.




-In crickets specifically, this relates to the location of a male cricket by a female cricket. An interesting point of note is that the cricket ear is biologically constructed so that sound converges into a single funnel/tube.


-One large implication of this model is that the passive location of the male cricket by the female is done entirely by the ears before the brain is ever involved.


-The female also doesn’t locate the male before moving.


-The location of the male by the female uses the changing sounds caused by the female’s movement to localize the sound of the male.




One large contribution to the field of cognitive science was that of Barbara Webb when she managed to replicate cricket phonotaxis using robotics.

Nontrivial Causal Spread

#1 of Clark's 6 Attributes of embodied cognition




Many cognitive activities make use of the body and environment to do things that would otherwise require complex internal processing and planning.




Doesn't require any neurological control




Example: passive dynamic walking




*Remember video of constructed legs moving down an angled board.



Principle of Ecological assembly

#2 of Clark's 6 Attributes of embodied cognition




Solutions to problems are “soft assembled” from neural, bodily and environmental resources on an as needed basis.




Soft Assembly: Coalition of things that work together temporarily to solve a problem and go back to where they were when you're done

Wide Computationalism

Based on the idea that we use external objects to complement our intelligence and memory, and also to make computation simpler




Example: Using a piece of paper or a chalkboard when working through math problems

Otto and Inga

An idea from Clark and Chalmers, under the umbrella of wide computationalism: a person with a damaged memory system who uses a notebook to remember things (Otto) is doing the same thing as a normal person who uses brain mechanisms for memory (Inga).


The notebook acts as an extension of the impaired individual’s intelligence/memory.

Brain-body-environment systems

Not one or the other, but both animal and environment together




Dependent on real experiments




Example: The blind woman doesn't experience the cane; she experiences the world at the end of the cane.

Kant on known vs. unknown types of causality

Known type of causality


A Machine: Pre-existing parts work together to fulfill the machine's function. Parts are means to an end.




Unknown type of causality


A Living Thing: Parts are created by the living thing they make up. Parts are means and ends.

Autopoiesis

The concept of being ‘self-causing’ or ‘self-propagating’




It literally means ‘self-making’ (auto- + poiesis, ‘production’)




**It was initially an attempt to spell out an understanding of living systems that was amenable to mathematical and computational modeling.




Two key components of autopoiesis:


1. Living things are operationally closed.


2. Living things are structurally coupled to their environment.

Structural Coupling

When two systems, or a system and its environment, interact and affect each other until they act congruently: A affects B, B affects A, and so on.




Operational closure

A system is operationally closed if and only if every effect of the system is also part of the system


**Core Principle: operationally closed systems are autonomous.

Game of life (Conway)

An example of an autopoietic system that is operationally closed and structurally coupled.




Its purpose was to illustrate how life may have begun.




Rules:


If ALIVE and 2 or 3 neighbors are alive, stay ALIVE; else DEAD.
If DEAD and exactly 3 neighbors are alive, become ALIVE; else stay DEAD.
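A minimal sketch of one update step under these rules, storing live cells as a set of (x, y) coordinates; the "blinker" pattern is a standard example, not from the course materials.

```python
from collections import Counter

def step(alive):
    # Count the live neighbors of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for x, y in alive
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Apply the two rules: survive on 2 or 3 neighbors, be born on 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# A "blinker": three live cells in a row oscillate between a
# horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))                    # the vertical bar: (1,0), (1,1), (1,2)
print(step(step(blinker)) == blinker)   # True: back to horizontal in two steps
```

Complex, self-sustaining patterns emerging from these two local rules is what makes the Game of Life a suggestive model of how life may have begun.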

Cells as autopoietic

Animal cells are a great natural example: a self-contained, self-propagating system, like a cellular automaton.




A subset of operationally closed systems.




Autopoietic systems are self-producing. They are operationally closed because their effects are system-internal. Furthermore, the system's causes are among these effects.



Autopoiesis and life

Some argue that life itself is also autopoietic: self-starting and self-propagating, and, left alone as it has been, it reaches a perceived “plateau” of evolution.

Autopoiesis and the self

Autopoiesis entails the emergence of a self.




A physical autopoietic system, by virtue of its operational closure, gives rise to an individual or self in the form of a living body, or an organism.



Self and Umwelt

With the emergence of a self, a correlative domain of interactions proper to that self must also emerge.






The collection of affordances specific to an individual. The world surrounding me is called my Umwelt.




Example: The environment surrounding me (MY Umwelt) is not the same as a tree's Umwelt.

Sense-making and Cognition

Emergence of self and world = sense-making.




The organism’s world is the sense it makes of the environment. This world is a place of significance and valence, as a result of the global action of the organism.




Sense-making = cognition (perception/action).




Sense-making is tantamount to cognition, in the minimal sense of viable sensorimotor conduct.




Living entails sense-making, which equals cognition.

Allen the robot

Allen has a ring of 12 sensors which it uses to determine the distance to the nearest object at each “hour” around its body.


With just these 12 sensors, Allen can wander around most cluttered environments successfully.


The only things that can perturb Allen—that is, influence its behavior—are sufficiently large things that reflect pulses from its sensors.


*According to Varela et al., only these things are part of Allen’s world or “cognitive domain”.


Allen is closed in an important sense: only very particular stimuli can elicit a reaction from Allen, and the way Allen reacts determines the significance of those stimuli.




-Cognitive domain is very limited.


-Take Allen as a model for all animals.


-All animals are closed as Allen is, structurally coupled to a world composed of very specific stimuli.


-All animals enact or bring forth a world that is determined by the nature of their sensorimotor systems, which in turn determine the significance of the perturbations.

Water boatmen and extended cognition

When the water boatman can use its plastron, the plastron becomes part of the boatman's system rather than a separate part of the world.




If you can use the environment, it becomes part of the autopoietic entity