60 Cards in this Set
- Front
- Back
Classical conditioning forms associations between stimuli; Operant conditioning forms an association between |
the behavior and the resulting events |
|
Thorndike's law of effect |
Rewarded behavior is likely to occur again |
|
Operant Chamber or Skinner Box |
Contains a bar or key that an animal manipulates to obtain a reinforcer like food or water |
|
What were Skinner's contributions to OC? |
Contingency effects and reinforcers |
|
Contingency |
Specified relationship between behavior and reinforcer |
|
The _____ determines the contingencies (according to Skinner) |
environment |
|
Reinforcer |
An event (or termination of an event) that increases the probability of a behavior |
|
Reinforcing stimulus |
An appetitive stimulus that follows a particular behavior and makes the behavior more likely to occur |
|
Punishing stimulus |
An aversive stimulus that follows a particular behavior and makes the behavior less likely to occur |
|
How do we measure operant conditioning? |
Evidence of learning is the frequency and consistency of responding |
|
Primary Reinforcer |
An activity whose reinforcing properties are innate (natural reinforcers, e.g., food, water, sex, safety, comfort...) |
|
Secondary Reinforcer |
An event that has developed its reinforcing properties through its association with primary reinforcers (classically conditioned to occur with or provide access to primary reinforcers) |
|
How do secondary reinforcers gain control over behavior? |
Through their associations with the primary reinforcers |
|
Shaping |
The operant conditioning procedure in which reinforcers guide behavior toward the desired target behavior through successive approximations |
|
Immediate Reinforcer |
A reinforcer that occurs instantly after a behavior, e.g., a rat gets a food pellet for a bar press. |
|
Delayed Reinforcer |
A reinforcer that follows a behavior after a delay, e.g., a paycheck that comes at the end of the week. |
|
Continuous Reinforcement |
Reinforces the desired response each time it occurs. |
|
Partial Reinforcement. Effects: |
Reinforces a response only part of the time. Results in slower acquisition at first but greater resistance to extinction later on. |
|
Fixed Ratio Schedule: Produces what kind of response rate? |
A specific number of responses is needed to produce reinforcement. Produces a consistent response rate. |
|
In an FR schedule, what does the number indicate? Continuous reinforcement is an? |
How many responses are needed before a reinforcer is given; continuous reinforcement is an FR1 schedule. |
|
Features of a Fixed Ratio schedule? |
Post-Reinforcement Pause |
|
Post-Reinforcement Pause |
A pause in behavior following reinforcement on a ratio schedule, which is followed by resumption of responding at the intensity characteristic of that ratio schedule |
|
What will make a Post-Reinforcement pause more likely to occur? |
The higher the number of responses needed to obtain reinforcement |
|
For a P-R pause, the higher the ratio schedule, |
the longer the pause |
|
For a P-R pause, the greater the satiation, |
the longer the pause |
|
Example of an FR schedule in real life: |
Gardener who is paid after each yard done. |
|
Variable Ratio Schedule |
An average number of responses produces reinforcement, but the actual number of responses required to produce reinforcement varies over the course of training |
|
Features of a VR schedule: |
Produces a consistent rate of responding; very hard to extinguish; post-reinforcement pauses occur only occasionally |
|
Between FR and VR schedules, which has the higher rate of responding? |
VR, though the data are mixed |
|
Real life example of a VR schedule: |
Fishing |
|
Fixed Interval schedule |
Reinforcement is available only after a specified period of time and the first response done after the interval is reinforced |
|
Scallop effect |
A pattern of behavior characteristic of Fixed Interval schedules: responding stops after reinforcement and then slowly increases as the time approaches when reinforcement will be available. |
|
Scallop effect occurs under which schedule? |
FI |
|
The length of the pause on an FI schedule is affected by: |
Experience: the ability to withhold the response until close to the end of the interval increases with experience. The pause is longer with longer FI intervals. |
|
Real life example of FI schedule: |
Kids who wait until Friday to do chores, knowing that's when their mother gets paid and thus when they can get paid |
|
Variable Interval schedules: |
An average interval of time between available reinforcers, but the interval varies from one reinforcement to the next |
|
Differential Reinforcement of high rates of responding schedule |
Schedule of reinforcement in which a specified high number of responses must occur within a specified time in order for reinforcement to occur. |
|
For DRH schedules there is what kind of limit to responding? |
Time limit |
|
How effective are DRH schedules? |
Extremely |
|
Differential Reinforcement of Low Responding Schedule |
Schedule of reinforcement in which a specified interval of time must elapse since the previous response before a response delivers reinforcement |
|
Real life example of DRL schedule: Does it effectively control behavior? |
Kid being told if he can be quiet for 20 minutes he can have ice cream. Effectively controls behavior |
|
Differential Reinforcement of Other Behaviors Schedule |
Schedule of reinforcement in which the absence of a specific response within a specified time leads to reinforcement. Widely used in behavior modification. |
|
Real life example of a DRO: |
Giving a student who obnoxiously interrupts class a piece of candy if they can stay quiet for 5 minutes, then progressively increasing the interval to 10, 15, etc. |
|
What is the importance of contiguity in operant conditioning? |
Lack of delay is important: a reinforcer can lead to the acquisition of an operant response if it immediately follows the response |
|
What could bridge the interval and reduce the impact of delay? |
Presence of a secondary reinforcer |
|
What would make a task be learned faster? |
A larger reward magnitude. Differences in performance may reflect motivational differences |
|
Depression effect (also called?) |
Also called negative contrast. Effect in which a shift from high to low reward magnitude produces a lower level of response than if the reward magnitude had always been low |
|
Elation Effect (also called?) |
Positive contrast. Effect in which a shift from low to high reward magnitude produces a higher level of response than if the reward magnitude had always been high. |
|
How long do contrast effects last? |
Short amount of time |
|
What seems to play a role in the negative contrast effect? |
Frustration |
|
What may explain the positive contrast effect? |
Emotional response of elation |
|
Premack's Probability-Differential Theory |
Any activity or behavior that has a higher probability can serve as a reinforcer for an activity or behavior with a lower probability |
|
Premack's principle is effective in producing... Example: |
behavior changes. Example: eating your vegetables in order to eat dessert |
|
Response Deprivation Theory |
When an organism is deprived of its usual level of an activity, it will work to restore that activity to its usual level; any activity an organism is deprived of can become a reinforcer |
|
What factors contribute to resistance to extinction? |
1. Reward magnitude 2. Schedule of reinforcement |
|
Influence of reward magnitude on resistance is dependent on |
the amount of acquisition training |
|
What type of reward will produce slower extinction? |
A small reward during acquisition, as the frustration of not obtaining the reward will be smaller |
|
Extinction is slower following ______ rather than _____ reinforcement |
Partial; Continuous |
|
Partial Reinforcement Effect |
Greater resistance to extinction of an operant response following intermittent rather than continuous reinforcement |
|
2 theories of why the PRE occurs: |
1. Frustration Theory 2. Sequential Theory: if reward follows nonreward, the animal will associate the memory of the nonrewarded experience with the operant response |