36 Cards in this Set

  • Front
  • Back
Delay conditioning
Conditioned stimulus is presented first and remains on until the unconditioned stimulus is delivered (forward pairing); most effective form of classical conditioning because it has both predictive value and contiguity
Trace conditioning
Conditioned stimulus is presented and ends some time before the unconditioned stimulus arrives (forward pairing); not as effective, since it relies on the subject's memory trace of the conditioned stimulus
Most important factors in classical conditioning
1) Contiguity: how close the CS and US are temporally
2) Frequency of the pairing: more pairings lead to a stronger connection between the CS and US
3) Predictive value of the CS
Backward conditioning
Unconditioned stimulus is presented before the conditioned stimulus; not effective because there is no predictive value
Autoshaping
In this experiment, the US comes irrespective of the animal's behavior, but when the US is paired with some signal (e.g., a button that lights up), the animal responds to the signal as if it were the US (e.g., pigeons will peck at a light as if it were food because it seems to signal food)
Conditioned emotional response
1) Train a rat to press a lever for food on a variable interval schedule so that the rat will continuously press the lever because it is not sure when the next reinforcer (food) will come
2) Pair a US (usually aversive, like a shock) with a CS (a tone)
3) Repeat the first step; the rat will press the lever at the same rate, but if the tone is played, the rat will slow down its lever pressing
4) The rat's fear can be measured by observing how much it slows down its lever pressing
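The card leaves "how much it slows down" unquantified. A common convention, assumed here rather than stated on the card, is a suppression ratio B / (A + B), where B is the number of lever presses during the CS and A is the number in an equal period just before it; a minimal sketch:

```python
def suppression_ratio(presses_during_cs, presses_before_cs):
    """CER suppression ratio: B / (A + B).
    0.5 means no suppression (no conditioned fear);
    0.0 means complete suppression (strong conditioned fear)."""
    return presses_during_cs / (presses_during_cs + presses_before_cs)

# Example: 5 presses during the tone vs. 20 presses in the period just before it
print(suppression_ratio(5, 20))  # 0.2 -> substantial suppression
```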
Stimulus substitution theory
Posited by Pavlov; through repeated pairings of the CS and US, the CS becomes a substitute for the US, so that the UR becomes the CR, and the two are comparable; however, the CR and UR are rarely identical
Acquisition phase (classical conditioning)
Part of an experiment in which the subject first experiences a series of CS+US pairings and during which the CR gradually appears and increases in strength
Asymptote (classical conditioning)
Maximum level of responding that is gradually approached as the learning experiment continues; stronger CS+US lead to higher asymptotes and faster conditioning
Extinction (classical conditioning)
Presenting the CS without the US so that the CR is diminished over time
Spontaneous recovery
Reappearance of a CR after a rest period following extinction, even though no CR was recorded at the end of the extinction session
Inhibition theory of spontaneous recovery
After extinction is complete, the animal has two counteracting associations: the excitatory CS+US association formed during acquisition and the inhibitory CS-US association developed during extinction; inhibitory associations are more fragile and can be weakened with passage of time, explaining the spontaneous recovery
Conditioned inhibition
Any CS that reduces the size of the CR, or keeps the CR smaller than it would otherwise be.
For example, if a dog is trained to salivate at the sound of a buzzer (+food), a light flash can be introduced, which signals no food. At first, the dog salivates when the buzzer and light are presented together, but eventually, it will only salivate when the buzzer is sounded (learns to discriminate).
Generalization
Transfer of the effects of conditioning to similar stimuli; e.g., put a pigeon on a VI 5-minute schedule while a green light is shown, then measure the number of pecks at different wavelengths to compile an excitatory gradient. Peak shift can occur, in which responding shifts away from the negative stimulus (non-green wavelengths).
Discrimination
Subjects learn to respond to one stimulus but not a similar stimulus
Rescorla's CER study
Rats were divided into three groups: 1) standard pairing (each CS was followed by a US), 2) partial pairing (only some CSs were followed by a US), and 3) random control (some USs without CSs), and CER was performed with all groups. It was found that suppression of lever-pressing was the highest in the rats that underwent standard pairing because this type of conditioning had strong contingency (predictive value) and contiguity.
Rescorla-Wagner model equation
A mathematical theory of classical conditioning that states that, on each trial, the amount of excitatory or inhibitory conditioning depends on the associative strengths of all the conditioned stimuli that are present and on the intensity of the unconditioned stimulus
ΔV_n = k(λ − V_{n-1})
λ = asymptote of conditioning, V = strength of conditioning (measured via the CR), n = trial number, k = CS salience (0-1, where 1 is most salient)
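A minimal sketch of this update rule for a single CS, with illustrative values for k and λ that are not from the card: V climbs toward the asymptote λ during acquisition (CS+US trials) and falls back toward zero during extinction (CS-alone trials).

```python
def rescorla_wagner(v, lam, k=0.3):
    """One trial of the card's rule: delta V_n = k * (lambda - V_{n-1})."""
    return v + k * (lam - v)

v = 0.0
for trial in range(1, 11):              # acquisition: CS paired with US, lambda = 1
    v = rescorla_wagner(v, lam=1.0)
    print(f"acquisition trial {trial}: V = {v:.3f}")

for trial in range(1, 11):              # extinction: CS alone, lambda = 0
    v = rescorla_wagner(v, lam=0.0)
    print(f"extinction trial {trial}: V = {v:.3f}")
```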
CS pre-exposure effect
Classical conditioning proceeds more slowly if a CS is repeatedly presented by itself before it is paired with a US: the subject learns to ignore the CS because it predicts nothing, so the CS+US association takes longer to form. The Rescorla-Wagner model does not predict this effect; since no US appears during pre-exposure, it predicts that no learning occurs in that phase, yet some evidently does.
Blocking effect
One group of rats (blocking group) underwent a CER experiment in which light flashes were paired with shocks. In the second phase, the blocking group underwent CER with light flashes + tone + shock; a control group underwent only this second phase. In the testing phase, the tone was presented with no shock to measure the strength of the CS. The blocking group did not react to the tone, but the control group did. This indicates that the prior conditioning with the light blocked later conditioning to the tone, since the tone added no new information about the US.
Basic concepts of the Rescorla-Wagner model
Learning will only occur if the subject is surprised.
Strength of US > strength of subject's expectations = excitatory conditioning (acquisition)
Strength of US < strength of subject's expectations = inhibitory conditioning (extinction)
Strength of US = strength of subject's expectations = no conditioning, asymptote (blocking)
The larger the discrepancy between the strength of the expectation and the strength of the US, the greater the conditioning that occurs; more salient CSs condition faster than less salient ones.
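The blocking result follows once the prediction error is computed from the summed strength of all conditioned stimuli present on a trial, as the model's description above states. A sketch with assumed values (k = 0.3 for both stimuli, λ = 1): after light-only training drives V_light to the asymptote, compound trials carry no surprise, so V_tone barely grows in the blocking group, while the control group (compound trials only) conditions to the tone.

```python
def compound_trial(strengths, present, lam, k=0.3):
    """Update every CS present on a trial; the prediction error is shared,
    because it uses the summed strength of the stimuli present."""
    total = sum(strengths[cs] for cs in present)
    for cs in present:
        strengths[cs] += k * (lam - total)

blocking = {"light": 0.0, "tone": 0.0}
control = {"light": 0.0, "tone": 0.0}

# Phase 1 (blocking group only): light paired with shock
for _ in range(20):
    compound_trial(blocking, ["light"], lam=1.0)

# Phase 2 (both groups): light + tone paired with shock
for _ in range(20):
    compound_trial(blocking, ["light", "tone"], lam=1.0)
    compound_trial(control, ["light", "tone"], lam=1.0)

print("blocking group V_tone:", round(blocking["tone"], 3))  # near 0: tone learned nothing
print("control group V_tone:", round(control["tone"], 3))    # about 0.5: tone conditions normally
```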
Thorndike's law of effect
Thorndike placed animals inside a cage and measured their escape latency. Behaviors that decreased escape latency increased in frequency, while behaviors that did not decrease escape latency decreased. The greater the satisfaction or discomfort a behavior produces for the animal, the more or less likely the animal is to repeat it. Behavior is governed by its consequences.
Shaping
Method of successive approximations; makes use of a conditioned reinforcer (a previously neutral stimulus that has been repeatedly paired with a primary reinforcer); behaviors that successively approximate the desired behavior are reinforced.
Continuous reinforcement
Schedule of reinforcement in which every occurrence of the operant response is followed by a reinforcer; rare in the real world
Discrimination hypothesis of schedules of reinforcement
For a subject's behavior to change once extinction begins, the subject must be able to discriminate the change in reinforcement contingencies; this is difficult on schedules other than continuous reinforcement, because unreinforced responses also occur during normal training, so those schedules are resistant to extinction.
Generalization decrement hypothesis of schedules of reinforcement
An animal on a continuous schedule of reinforcement will show decreased responding as the test conditions become less and less similar to the training conditions (i.e., when the reinforcer no longer follows every operant response), since it never learned to respond without reinforcement. Animals on other schedules of reinforcement have learned to keep responding through unreinforced responses, so the test conditions produce little decrement; therefore, these schedules are resistant to extinction.
Differential reinforcement of low rates schedule
A response is reinforced if and only if a certain amount of time has elapsed since the previous response; responses made before this time are not reinforced and restart the interval
Differential reinforcement of high rates schedule
A certain number of responses must occur within a fixed amount of time
Concurrent schedule
Subject is presented with two or more response alternatives, each associated with its own reinforcement schedule
Chained schedule
Subject must complete the requirement for two or more schedules in a fixed sequence; each schedule is signalled by a different stimulus; strength of responding decreases as a schedule is further and further removed from the primary reinforcer
Multiple schedule
Subject is presented with two or more different schedules, one at a time, and each schedule is signalled by a different stimulus.
Fixed ratio schedule
Reinforcer is delivered after every n responses; the animal shows a postreinforcement pause (no responses), which eventually gives way to an abrupt resumption of responding at a constant, rapid rate until the next reinforcer is delivered (e.g., piecework in factories)
Variable ratio schedule
Exact number of required responses is not constant from reinforcer to reinforcer; postreinforcement pauses are brief because, after each reinforcer, there is a possibility that another reinforcer will be delivered after only a few responses (e.g., gambling)
Fixed interval schedule
The first response after a fixed amount of time has elapsed is reinforced; subjects tend to make many more responses per reinforcer than required; a postreinforcement pause occurs, but the subject begins responding slowly and responds more rapidly the closer it gets to the reinforcer time (e.g., waiting for a bus, where the operant response = staring down the street, looking for the bus)
Variable interval schedule
The amount of time that must pass before a reinforcer becomes available varies unpredictably; long pauses after a reinforcer are not advantageous because a reinforcer could appear at any moment, so these schedules tend to sustain a steady response rate (e.g., checking the mail)
Higher order conditioning
Can a second CS help the animal predict the US originally paired with the first CS? Pair CS1 with the US, then pair CS2 with CS1; CS2 comes to elicit the CR. This second-order conditioning works, but a third-order CS is much weaker.
Habituation
Decrease in the strength of a response after repeated presentation of a stimulus that elicits a response.