71 Cards in this Set

  • Front
  • Back
Learning
The relatively permanent or stable change in behavior as the result of experience.
E.L. Thorndike
Suggested the LAW OF EFFECT, which was the precursor of operant conditioning. The law postulated a cause-and-effect chain of behavior revolving around reinforcement.
Kurt Lewin
Developed the THEORY OF ASSOCIATION, which was a forerunner of behaviorism. Association is grouping things together based on the fact that they occur together in time and space. This idea is basically what Ivan Pavlov later proved experimentally.
Ivan Pavlov
Classical conditioning, also known as Pavlovian conditioning, involves teaching an organism to respond to a neutral stimulus by pairing it with a not-so-neutral stimulus.
John B. Watson
Expanded the ideas of Pavlov and founded the school of behaviorism. Watson's idea of learning was that everything could be explained by stimulus-response chains and that conditioning was the key factor in developing these chains. Only objective and observable elements were of importance to organisms and to psychology.
B.F. Skinner
Conducted the first scientific experiments to prove the concepts in Thorndike's Law of Effect and Watson's idea of the causes and effects of behavior. This idea of behavior being influenced primarily by reinforcement is now called operant conditioning. Skinner used rats and a device called the Skinner Box to prove that animals are influenced by reinforcement.
Classical conditioning
Involves pairing a neutral stimulus with a not-so-neutral stimulus; this creates a relationship between the two.
Unconditioned Stimulus (UCS)
The not-so-neutral stimulus. In Pavlov's dog experiments, the UCS is the food. Without conditioning, the stimulus elicits the response of salivating.
Conditioned Stimulus (CS)
The neutral stimulus that is paired with the UCS. The CS has no naturally occurring response, but it is conditioned through pairings with a UCS. In classical conditioning, a CS (the light) is paired with a UCS (the food), so that the CS alone will produce a response.
Unconditioned Response (UCR)
The naturally occurring response to the UCS.
Conditioned Response (CR)
The response that the CS elicits after conditioning. The UCR and the CR are the same (salivating to food or a light, for example).
Simultaneous Conditioning
The UCS and CS are presented at the same time.
Higher-Order Conditioning/Second-Order Conditioning
A conditioning technique in which a previous CS now acts as a UCS.
Forward Conditioning
Pairing of the CS and the UCS in which the CS is presented before the UCS. Two types of forward conditioning are DELAYED CONDITIONING and TRACE CONDITIONING.
Delayed Conditioning
The presentation of the CS begins before that of the UCS and lasts until the UCS is presented.
Trace Conditioning
The CS stimulus is presented and terminated before the UCS is presented.
Operant Conditioning
Also called INSTRUMENTAL CONDITIONING. Aims to influence a response through various reinforcement strategies. In Skinner's experiments, using the SKINNER BOX, the basic idea was that rats repeated behaviors that won them rewards and gave up behaviors that did not.
Differential Reinforcement of Successive Approximations
Another word for SHAPING (e.g. rewarding rats with pellets for getting progressively closer to the lever in a Skinner Box)
Primary Reinforcement
A natural reinforcement. Something that is reinforcing on its own without the requirement of learning. Food and water are primary reinforcers.
Secondary Reinforcement
A learned reinforcer. Money is a perfect example. They are often learned through society. Other examples are prestige, awards, and a token economy.
Positive Reinforcement
A type of reward or positive event acting as a stimulus that increases the likelihood of a particular response. Some subjects are not motivated by rewards because they don't believe or understand that the rewards will be given.
Negative Reinforcement
It is NOT punishment or the delivery of a negative consequence. Rather, it is reinforcement through the removal of a negative event.
Negative Reinforcement v. Punishment
First, negative reinforcement encourages the subject to behave a certain way, and punishment encourages a subject to stop behaving a certain way. Second, negative reinforcement entails removing a negative event, and punishment entails introducing a negative event. Skinner did not use punishment.
Continuous Reinforcement Schedule
In this schedule, every correct response is met with some form of reinforcement. This type of reinforcement facilitates the quickest learning, but also the most fragile learning; as soon as the rewards stop coming, the animal stops performing.
Partial Reinforcement Schedule
In this schedule, not all correct responses are met with reinforcement. This strategy may require a longer learning time, but once learned, these behaviors are more resistant to extinction. There are four distinct reinforcement schedules: 1) fixed ratio, 2) variable ratio, 3) fixed interval, 4) variable interval.
Fixed ratio schedule
In this partial reinforcement schedule, a reinforcement is delivered after a consistent number of responses. Because the ratio is fixed, the behavior is vulnerable to extinction. When the rewards stop coming as scheduled, the animal will discern this and give up on receiving the rewards.
Variable ratio schedule
In this partial reinforcement schedule, learning takes the most time to occur, but the learning is least likely to become extinguished. Reinforcements are delivered after a varying number of correct responses; the ratio cannot be predicted. Slot machines are the perfect example of this strategy.
Fixed interval schedule
This partial reinforcement schedule does little to motivate an animal's behavior. Rewards come after the passage of a certain time, regardless of any behaviors from the animal. Some argue that salaried employees and tenured professors are beneficiaries of this uninspiring strategy.
Variable interval schedule
In this partial reinforcement schedule rewards are delivered after differing time periods. It is the second most effective strategy in maintaining behavior. The length of time varies, so one never knows when the reinforcement is just around the corner. Waiting for a bus is a good example.
Token economy
An artificial micro-economy, usually found in prisons, rehab centers, or mental hospitals. Individuals are motivated by secondary reinforcers, tokens in this case. Desirable behaviors are reinforced with tokens, which can be cashed in for primary reinforcers.
Motivation and Performance
Individuals are at times motivated by primary or instinctual drives. Other times, they are motivated by secondary or acquired drives, such as money. Still other types of drive, such as an exploratory drive, may exist.
Theories that assert that humans are primarily motivated to maintain physiological or psychological homeostasis
Fritz Heider's BALANCE THEORY, Charles Osgood and Percy Tannenbaum's CONGRUITY THEORY, and Leon Festinger's COGNITIVE DISSONANCE THEORY all agree that what drives people is a desire to be balanced with respect to their feelings, etc. These theories, along with drive-reduction theories, are called into question by the fact that individuals often seek out stimulation, novel experience, or self-destruction.
Clark Hull
Proposed that Performance = Drive x Habit. This means that individuals are first motivated by drive, and then act according to old successful habits. They will do what has worked in the past to satisfy drive.
Edward Tolman
Proposed that Performance = Expectation x Value. This is also known as the EXPECTANCY-VALUE THEORY. The idea here is that people are motivated by goals that they think they might actually meet. Another factor is how important the goal is.
Victor Vroom
Applied Tolman's EXPECTANCY-VALUE THEORY to individual behavior in large organizations. Individuals who are lowest on the totem pole do not expect to receive company incentives, so these carrots do little to motivate them.
Henry Murray and David McClelland
studied the possibility that people are motivated by a NEED FOR ACHIEVEMENT (nAch). This may be manifested through a need to pursue success or a need to avoid failure, but either way, the goal is to feel successful.
John Atkinson
Suggested a theory of motivation in which people who set realistic goals with intermediate risk feel pride in accomplishment and want to succeed more than they fear failure. But because success is so important, these people are unlikely to set unrealistic or risky goals or to persist when success is unlikely.
Neal Miller
Proposed the APPROACH-AVOIDANCE CONFLICT, which refers to the state one feels when a certain goal has both pros and cons. Typically, the further one is from the goal, the more one focuses on the pros, and the closer one is to the goal, the more one focuses on the cons.
Hedonism
The theory that individuals are motivated solely by what brings the most pleasure and the least pain.
The Premack Principle
The idea that people are motivated to do what they do not want to do by rewarding themselves afterwards with something they like to do.
Donald Hebb
Postulated that a medium amount of AROUSAL is best for performance. Too little or too much arousal could hamper performance of tasks. Specifically, for simple tasks, the optimal level of arousal is toward the high end. For complex tasks, the optimal level of arousal is toward the low end, so that the individual is not too anxious to perform well.
Yerkes-Dodson Effect
The following relationship: good performance on simple tasks requires a high level of arousal, and good performance on complex tasks requires a low level. The optimal level of arousal for any type of task, however, is never at the extremes. On a graph, optimal arousal looks like an INVERTED U-CURVE, with lowest performances at the extremes of arousal.
Stimulus Discrimination
Refers to the ability to discriminate between different but similar stimuli. For instance, a doorbell ringing means something different from a phone ringing.
Stimulus Generalization
The opposite of STIMULUS DISCRIMINATION. To generalize is to make the same response to a group of similar stimuli. Though not all fire alarms sound alike, we know that they all require the same response. UNDERGENERALIZATION is the failure to generalize a stimulus.
Response learning
Refers to the form of learning in which one links together chains of stimuli and responses. One learns what to do in response to particular triggers, e.g. leaving a building in response to a fire alarm.
Perceptual or Concept learning
Refers to learning about something in general rather than learning specific stimulus-response chains. An individual learns about something, rather than any particular response. TOLMAN'S experiments with animals forming COGNITIVE MAPS of mazes rather than simple escape routes are an example of this.
Aversive Conditioning
Uses negative reinforcement to control behavior. An animal is motivated to perform a certain behavior in order to escape or avoid a negative stimulus.
Avoidance Conditioning
Teaches an animal how to avoid something the animal does not want.
Escape Conditioning
Teaches an animal to perform a desired behavior to get away from a negative stimulus.
Punishment
Promotes extinction of undesirable behavior. After an unwanted behavior is performed, the punishment is presented. This acts as a negative stimulus, which should decrease the likelihood that the earlier behavior will be repeated.
Autonomic conditioning
Refers to evoking responses of the autonomic nervous system through training.
State dependent learning
Refers to the concept that what a person learns in one physiological state is best recalled in that state.
Extinction
The reversal of conditioning. The goal is to encourage an organism to stop doing a particular behavior. This is generally accomplished by repeatedly withholding reinforcement for a behavior or by disassociating the behavior from a particular cue.
Latent learning
Takes place even without reinforcement. The actual learning is revealed at some other time. (A rat wandering through a maze, but suddenly demonstrating that it knows how to find the goal, when food is placed in the goal).
Incidental learning
Like accidental learning. Unrelated items are grouped together during incidental learning. For example, pets often learn to dislike riding in cars because it means they are going to the vet. Though it is actually the vet the animal fears, pets associate cars with the vet experience. Incidental learning is the opposite of INTENTIONAL LEARNING.
Chaining
The act of linking together a series of behaviors that ultimately result in reinforcement. One behavior triggers the next, and so on. Learning the alphabet is an example of chaining: 26 letters are required to complete the chain, and each letter stimulates remembering the next.
Habituation
The decreasing responsiveness to a stimulus as a result of increasing familiarity with the stimulus.
Spontaneous recovery
The reappearance of an extinguished response, even in the absence of a more prominent stimulus.
Autoshaping
Refers to experiments in which an apparatus allows an animal to control its reinforcement through behaviors, such as bar pressing or key pecking. The animal is, in a sense, shaping its own behavior.
Social learning theory
Posits that individuals learn through their culture. People learn what are acceptable and unacceptable behaviors through interacting in society.
Modeling
A specific concept within SOCIAL LEARNING. It refers to learning and behaving by imitating others.
Albert Bandura's Bobo doll study
A study of MODELING. Children who watched adults physically abuse a blowup doll in a playroom proceeded to do the same during their playtime with the Bobo doll; children who did not witness the aggression did not behave in this way.
Observational Learning
Simply the act of learning something by watching.
John Garcia
Performed classical conditioning experiments in which it was discovered that animals are programmed through evolution to make certain connections. He studied "conditioned nausea" with rats and found that invariably nausea was perceived to be connected with food or drink. Garcia was unable to condition a relationship between nausea and a neutral stimulus (like a light).
The Garcia Effect
Named after John Garcia. The extremely strong connection that animals form between nausea and food has been used to explain why humans can become sick only one time from eating a particular food and are never able to eat that food again; the connection is automatic, so it needs little conditioning.
M.E. Olds
Performed experiments in which electrical stimulation of pleasure centers in the brain was used as positive reinforcement. Animals would perform behaviors to receive the stimulation. This was viewed as evidence against the drive-reduction theory.
Hull-Spence Theory of Learning
Hypothesizes that animals learn to respond differently to different stimuli. This is a theory of discrimination learning.
Continuous v. Discrete Motor Tasks
Continuous tasks are easier to learn than discrete motor tasks. An example of a continuous task is riding a bicycle: one continuous motion that, once started, continues naturally. A discrete task is one that is divided into different parts that do not facilitate the recall of each other. Setting up a chessboard is a good example.
Positive transfer
Previous learning that makes it easier to learn another task later.
Negative transfer
Previous learning that makes it more difficult to learn a new task.
Age and learning
Humans are primed to learn between the ages of 3 and 20. From the age of 20 to 50, the ability to learn remains fairly constant. After the age of 50, the ability to learn drops.