Identify the three basic stages in any shaping procedure as presented at the beginning of this chapter, and describe them with an example, using either Frank’s case or an example of your own.

a) Specify the final target behavior. Example: Frank’s final target behavior was to jog a quarter mile each day.
b) Identify a response that could be used as a starting point in working toward the final target behavior. Example: Frank decided to put on his runners and walk around the outside of his house once (30 yards).
c) Reinforce the starting behavior, then reinforce closer and closer approximations until eventually the final target behavior occurs. Example: Frank used drinking a beer as a reinforcer: his wife would remind him to do his exercise before he could have a beer. After the first approximation, a 30-yard walk, had occurred on several successive afternoons, he increased it to walking around the house twice. A few days later the distance was increased to four times around the house, then six, and so on until it reached a quarter mile, and then he worked up to jogging that distance.
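The three-stage logic can be summarized in a short sketch, assuming a simple "advance after several successive successes" rule; all names and thresholds here are illustrative, not from the chapter:

```python
# Minimal sketch of the three shaping stages; names and the
# "3 successive successes" criterion are illustrative, not from the chapter.

def shape(approximations, performed, reinforce, successes_needed=3):
    """approximations: behaviors ordered from the starting behavior (stage b)
    to the final target behavior (stage a).
    performed(behavior) -> bool: did the trainee emit this approximation?
    reinforce(): deliver the reinforcer (e.g., Frank's beer)."""
    for behavior in approximations:          # stage c: closer and closer steps
        streak = 0
        while streak < successes_needed:
            if performed(behavior):
                reinforce()                  # reinforce current approximation
                streak += 1
            else:
                streak = 0                   # earlier forms go unreinforced
    # once reached, the final target behavior itself keeps being reinforced
```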


Define shaping

The development of a new operant behavior by the reinforcement of successive approximations of that behavior and the extinction of earlier approximations of that behavior until the new behavior occurs.

What’s another name for shaping?

The method of successive approximations.

Explain how shaping involves successive applications of the principles of positive reinforcement and operant extinction.

The process begins by positively reinforcing a behavior that occurs occasionally and at least remotely resembles the final target behavior. Once that behavior occurs several times in succession, it is extinguished and replaced by a closer approximation to the final target behavior, which is now the behavior that gets reinforced. When that approximation has occurred several times in succession, it too is extinguished and replaced by a still closer approximation. This is applied successively until the final target behavior is reached; once it is, the final target behavior itself is reinforced.

Why bother with shaping? Why not just learn about the use of straightforward positive reinforcement to increase a behavior?

In some cases a desired behavior may never occur, so it is not possible to increase its frequency through positive reinforcement alone. Shaping establishes such a behavior by starting from a behavior that already occurs with a frequency greater than zero and at least remotely resembles the final target behavior.

In terms of the three stages in a shaping procedure, describe how parents might shape their child to say a particular word.

a) Specify the final target behavior: the child saying and properly pronouncing the desired word, e.g., “daddy.”
b) Identify a response that could be used as a starting point in working toward the final target behavior: the child babbling a sound that remotely resembles the desired word, e.g., saying “da.”
c) Reinforce the starting behavior, then reinforce closer and closer approximations until eventually the final target behavior occurs: when the child babbles the starting-point sound, the parents positively reinforce the behavior with hugs and attention.
Then, once the child starts making sounds closer to the final target word, e.g., “dada,” that sound is reinforced and the more primitive sound (“da”) is extinguished. This goes on until the final target word (“daddy”) is said, at which point the final target behavior is what gets reinforced.


List five dimensions of behavior that can be shaped. Give two examples of each.

a) Topography: Example 1: Learning to ice skate with longer and longer strides.
Example 2: Learning the proper finger movements to eat with chopsticks.
b) Frequency: Example 1: Increasing the number of steps Frank walked.
Example 2: Increasing the number of dishes washed in five minutes.
c) Duration: Example 1: Lengthening the time spent studying before a break.
Example 2: Gradually adjusting the duration of stirring pancake batter until it achieves the proper consistency.
d) Latency: Example 1: Shortening the time between the Jeopardy! host’s verbal stimulus and the contestant pressing the buzzer.
Example 2: Shortening the time between the firing of the starting pistol and the runner leaving the blocks.
e) Intensity (force): Example 1: Learning to shake hands with a firmer grip.
Example 2: Increasing the force of a punch in boxing.


Describe a behavior of yours that was shaped by consequences in the natural environment and state several of the initial approximations.

At my job I write thank-you cards, and the more cards I write, the more I get paid. I was able to work up to 4 hours per day, so writing thank-you cards for 4 hours was the final target behavior. The first week I spent 1 hour writing thank-you cards and got paid $12. The second week I spent 2 hours and made $24. The third week I spent 3 hours and made $36. Finally, by the fourth week I had reached my final target behavior by writing thank-you cards for 4 hours and making $48.

What is meant by the term final target behavior in a shaping program? Give an example.

A precise statement of the final desired behavior, including all of its relevant characteristics (e.g., frequency), the conditions under which the behavior is or is not to occur, and any other necessary guidelines. Example: Jessica’s final target behavior is to ride her bicycle for 20 minutes a day at 5 mph.

What is meant by the term starting behavior in a shaping program? Give an example.

A behavior that occurs often enough to be reinforced within the session time and that at least remotely approximates the final target behavior.
Example: Joe’s final target behavior is to eat an entire cup of Caesar salad every day. His starting behavior is eating all the croutons in the salad.


How do you know you have enough successive approximations or shaping steps of the right size?

There are no specific guidelines. Try imagining what steps you would go through, or ask someone who can perform the final target behavior what steps they went through. Try to stick to your steps, but be flexible if the trainee is moving too quickly or too slowly through them.

Why is it necessary to avoid under-reinforcement of any shaping step?

Without sufficient reinforcement, the step won’t become well established. Trying to move to a new step before the previous approximation has been well established can result in losing the previous approximation through extinction without actually achieving the new approximation.

Why is it necessary to avoid reinforcing too many times at any shaping step?

If one approximation is reinforced for so long that it becomes extremely strong, new approximations are less likely to appear.

Give an example of the unaware-misapplication pitfall in which shaping might be accidentally applied to develop an undesirable behavior. Describe some of the shaping steps in your example.

John learns how to ride a two-wheel bicycle and receives lots of cheering from his friends (positive reinforcement). After a while they aren’t impressed by this skill anymore and stop cheering, so he learns to ride with only one hand on the handlebars, and they are impressed and cheer again. He then learns to ride with no hands, and they cheer again but soon grow bored of this too. So John starts to ride standing up on the seat, which is a dangerous, undesirable behavior.

Give an example of the pitfall in which the failure to apply shaping might have an undesirable result.

Failure-to-apply pitfall: Jessica, an infant, begins babbling, but her mother is not terribly impressed, so she does not reinforce the behavior. Because the babbling is not positively reinforced, the child doesn’t move on to the next stage of speech development.

Give an example from your own experience of a final target behavior that might best be developed through a procedure other than shaping.

When I was younger, I used to jump from the third step in my house to the ground for attention (positive reinforcement). It would be dangerous to modify this behavior with shaping, which involves extinction, because extinction can temporarily increase the intensity of the behavior, and jumping from the fourth step or higher could result in serious injury.

State a rule for deciding when to move the learner to a new approximation.

Move on to the next step when the learner performs the current step correctly in 6 of 10 trials, with 1 or 2 trials less perfect than desired and 1 or 2 trials in which the behavior is better than the current step.

Why do we refer to positive reinforcement and operant extinction as principles, but to shaping as a procedure?

Principles are procedures that have a consistent effect and are so simple they can’t be broken down into simpler procedures (e.g., operant extinction and positive reinforcement); they are like laws.
Procedures are combinations of the principles of behavior modification (shaping consists of two principles: operant extinction and positive reinforcement).


Describe how Scott and colleagues used shaping to decrease the heart rate of a man suffering from chronic anxiety.

They hooked the video portion of a TV set to a heart-rate monitor. The TV always played sound, but showed the picture (as positive reinforcement) only when the man’s heart rate decreased. Across sessions, the heart rate had to decrease systematically to each new level in order for the picture to be shown.

Describe how computer technology may be used to shape specific limb movements in a paralyzed person.

It could provide more precise, rapid, and systematic feedback than a human trainer, as well as unlimited patience.

Describe how computer technology might be used to study shaping more accurately than can be done with the usual noncomputerized shaping procedures

A computer is fast enough to make comparisons and to apply shaping procedures consistently. It is more accurate and faster than a human shaper, especially in measuring the topography of responses.

Describe an experiment demonstrating that maladaptive behavior can be shaped.

Rats were reinforced with food for extending their noses over the edge of a platform. Over trials they were required to extend their noses farther and farther over the edge before receiving reinforcement, and eventually they extended so far that they fell off.

Define and give an example of intermittent reinforcement.

An arrangement in which a behavior is positively reinforced only occasionally rather than every time it occurs, e.g., Jan is reinforced with praise for every 2 math problems she solves correctly.

Define and give an example of response rate

The number of instances of a behavior that occur in a given period of time, e.g., Jan solves 16 math problems in an hour.

Define and give an example of schedule of reinforcement

A rule specifying which occurrences of a given behavior, if any, will be reinforced, e.g., it is decided that Jan will be reinforced only every 4th time she solves a math problem correctly.

Define CRF and give an example that isn’t in this chapter

Continuous reinforcement: an arrangement in which each instance of a particular response is reinforced, e.g., every time you turn on the tap, you are reinforced with water.

Describe four advantages of intermittent reinforcement over CRF for maintaining behavior.

a) The reinforcer remains effective longer because satiation takes place more slowly.
b) Behavior that has been reinforced intermittently takes longer to extinguish.
c) Individuals work more consistently on certain intermittent schedules.
d) Behavior that has been reinforced intermittently is more likely to persist after being transferred to reinforcers in the natural environment.


Explain what an FR schedule is. Illustrate with two examples of FR schedules in everyday life (at least one of which is not in this chapter).

In a fixed-ratio (FR) schedule, a reinforcer occurs each time a fixed number of responses of a particular type is emitted.
Example 1: John gets a hug when he sings the ABCs twice. This is FR 2.
Example 2: When Jason makes three baskets he gets a high five. This is FR 3.
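As a rough illustration, an FR schedule amounts to a response counter that resets on each reinforcer. A minimal sketch (class and method names are assumptions for illustration):

```python
# Sketch of an FR schedule as a response counter; names are illustrative.

class FixedRatio:
    def __init__(self, ratio):
        self.ratio = ratio    # e.g., 3 for Jason's FR 3
        self.count = 0

    def record_response(self):
        """Return True when this response earns the reinforcer."""
        self.count += 1
        if self.count == self.ratio:
            self.count = 0    # ratio resets after each reinforcer
            return True
        return False

fr3 = FixedRatio(3)           # high five after every 3rd basket
print([fr3.record_response() for _ in range(6)])
# -> [False, False, True, False, False, True]
```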


What is a free-operant procedure? Give an example

A schedule in which the individual is free to respond at various rates, in the sense that there are no constraints on successive responses.
Example: If Jan was given a worksheet with 12 math problems on it, she could have worked at a rate of one per minute, or three per minute, or some other rate.


What is a discrete-trials procedure? Give an example.

A procedure in which the individual isn’t free to respond at whatever rate they choose, because the environment places limits on the availability of response opportunities.
Example: If a parent tells their teenager they can use the car after doing the dishes following three dinners, the teenager can’t do the dishes three times in one hour; they must wait, and can respond at a maximum rate of doing the dishes once a day.


What are three characteristic effects of an FR schedule?

a) A high, steady rate of responding until reinforcement.
b) After reinforcement there is a postreinforcement pause, the length of which depends on the value of the FR (the higher the value, the longer the pause).
c) High resistance to extinction.


What is ratio strain?

Deterioration of responding from increasing an FR schedule too rapidly.

Explain what a VR schedule is. Illustrate with two examples of VR schedules in everyday life (at least one of which isn’t in this chapter). Do your examples involve a free-operant procedure or a discrete-trials procedure?

Variable-ratio (VR): a reinforcer occurs after a certain number of responses of a particular type are emitted, and the number of responses required for each reinforcer changes unpredictably from one reinforcer to the next. The number required varies around a mean value, which is specified in the designation of the schedule.
Example 1: A fundraiser must get a donation from an average of 1 in 5 people they call to earn a sticker (reinforcement); this doesn’t mean exactly every 5th person donates. This is VR 5 and a free-operant procedure.
Example 2: Playing slot machines, where you win on average 1 in 20 spins. This is VR 20 and also free-operant.
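A VR schedule can be sketched the same way as FR, except that the requirement for each reinforcer is redrawn unpredictably around the specified mean. The uniform draw below is an assumption for illustration; only the mean is defined by the schedule:

```python
# Sketch of a VR schedule; the uniform draw around the mean is an
# illustrative assumption -- the schedule only specifies the mean.
import random

class VariableRatio:
    def __init__(self, mean_ratio):
        self.mean = mean_ratio        # e.g., 5 for the fundraiser's VR 5
        self._draw_requirement()

    def _draw_requirement(self):
        # requirement varies unpredictably around the mean
        self.required = random.randint(1, 2 * self.mean - 1)
        self.count = 0

    def record_response(self):
        """Return True when this response earns the reinforcer."""
        self.count += 1
        if self.count >= self.required:
            self._draw_requirement()  # next requirement is unpredictable
            return True
        return False
```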


Describe how a VR schedule is similar procedurally to an FR schedule. Describe how it’s different procedurally.

Similar: In both, a reinforcer is delivered after a number of responses of a particular type has been emitted, and both produce a high, steady rate of responding.
Different: In VR, the number of responses required for each reinforcer changes unpredictably from one reinforcer to the next; in FR, the number of responses needed for reinforcement is fixed.


What are three characteristic effects of a VR schedule?

a) It produces a high, consistent response rate.
b) The bigger the ratio, the higher the response rate.
c) A postreinforcement pause is often not observed, because the individual can’t predict when the next reinforcement will occur.


Illustrate with two examples how FR or VR might be applied in training programs (by training program, we refer to any situation in which someone deliberately uses behavior principles to increase and maintain someone else’s behavior, such as parents influencing a child’s behavior or a teacher influencing students’ behavior). Do your examples involve a free-operant or a discrete-trials procedure?


Example 1: Jennifer’s parents want her to do her chore of mowing the lawn, so they give her $10 once she has mowed the lawn 3 times. This is FR 3 and a discrete-trials procedure.
Example 2: Jake had hand surgery and is now learning to use his hand again. To help him regain use of his fingers, he turns the knob on a gumball machine. On average, 1 in 10 gumballs in the machine is black, and a black gumball earns him a toy. This is VR 10 and free-operant.


Explain what a PR schedule is and how PR has been mainly used in applied settings.

A progressive-ratio (PR) schedule is like an FR schedule, but the ratio requirement increases by a specified amount after each reinforcement. At the beginning of each session the ratio requirement starts back at its original value, and after a number of sessions the ratio requirement reaches a level, called the break point, at which the individual stops responding completely.
The main application of PR is to determine how potent, powerful, or effective a reinforcer is for a particular person: the higher the break point, the more effective the reinforcer is in the treatment program.
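As a rough sketch of the PR logic: the requirement grows by a fixed step after each reinforcer until responding stops, and the value returned is the break point. Names, the starting ratio, and the step size are illustrative assumptions:

```python
# Sketch of a PR schedule and its break point; the starting ratio and
# step size are illustrative assumptions.

def run_progressive_ratio(will_complete, start=2, step=2):
    """will_complete(requirement) -> bool: does the person finish
    this many responses? Returns the break point (the first
    requirement at which responding stops)."""
    requirement = start
    while will_complete(requirement):
        requirement += step   # reinforcer delivered, then ratio raised
    return requirement        # higher break point = more potent reinforcer

# e.g., someone who quits once more than 10 responses are required:
print(run_progressive_ratio(lambda r: r <= 10))   # -> 12
```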


What is an FI schedule?

Fixed-interval (FI) schedule: a reinforcer is presented following the first instance of a specific response after a fixed period of time. The only requirement for a reinforcer to occur is that the individual engage in the behavior after reinforcement has become available through the passage of time. The FI size is the amount of time that must elapse before reinforcement becomes available, e.g., PVR-ing a show.
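A minimal sketch of the FI rule, assuming arbitrary time units and illustrative names: responses before the interval elapses do nothing, and the first response afterward is reinforced and starts the next interval:

```python
# Sketch of the FI rule; time units and names are illustrative.

class FixedInterval:
    def __init__(self, interval):
        self.interval = interval
        self.available_at = interval   # reinforcement available after FI size

    def respond(self, now):
        """Return True if a response at time `now` is reinforced."""
        if now >= self.available_at:
            self.available_at = now + self.interval   # next interval begins
            return True
        return False   # responding during the interval has no effect

fi = FixedInterval(30)
print(fi.respond(10), fi.respond(31), fi.respond(40))   # False True False
```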

What are two questions to ask when judging whether a behavior is reinforced on an FI schedule? What answers to those questions would indicate that the behavior is reinforced on an FI schedule?

a) Does reinforcement require only one response after a fixed interval of time?
Answer: Yes.
b) Does responding during the interval affect anything?
Answer: No.


Suppose that a professor gives an exam to students every Friday. The students’ studying behavior would likely resemble the characteristic pattern of an FI schedule in that studying would gradually increase as Friday approaches, and the students would show a break in studying (similar to a lengthy postreinforcement pause) after each exam. But this isn’t an example of an FI schedule for studying. Explain why.

Because the students must make more than one study response in order to receive a good grade, and responding during the interval does affect the outcome, since it contributes to a good grade.

What is a VI schedule?

A reinforcer is presented following the first instance of a specific response after an interval of time, and the length of the interval changes unpredictably from one reinforcer to the next. In other words, a response is reinforced after unpredictable intervals of time, e.g., checking email.

Explain why simple interval schedules aren’t often used in training programs

a) FI procedures produce long postreinforcement pauses
b) Though VI doesn’t produce long postreinforcement pauses, it generates lower response rates than ratio schedules do
c) Simple interval schedules require continuous monitoring of behavior after the end of each interval until a response occurs.


Explain what an FR/LH schedule is, and illustrate with an example from everyday life that isn’t in this chapter.

A schedule with a fixed ratio (a reinforcer occurs each time a fixed number of responses of a particular type is emitted) plus a limited hold (a deadline for meeting the response requirement of a schedule of reinforcement).
Example: If Jessica makes 3 bracelets in 15 minutes, she gets a cookie. This is FR 3/LH 15 minutes.


Explain what an FI/LH schedule is and illustrate with an example that isn’t in this chapter.

A fixed interval schedule (a reinforcer is presented following the first instance of a specific response after a fixed period of time) with a limited hold (a deadline for meeting the response requirement of a schedule of reinforcement).
Example: An online store has shorts on sale each day at 1 p.m., and the sale lasts for only 15 minutes.
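A minimal sketch of the FI/LH rule, using an assumed 60-minute interval and 15-minute hold for simplicity (the daily sale above would be FI 24 hours/LH 15 minutes):

```python
# Sketch of the FI/LH rule with an assumed 60-minute interval and
# 15-minute limited hold; times are minutes into the cycle.

def reinforced(response_time, interval=60, hold=15):
    """True only if the response lands after the interval elapses
    but before the limited hold expires."""
    return interval <= response_time <= interval + hold

print(reinforced(50))   # False: too early, reinforcement not yet available
print(reinforced(70))   # True: within the 15-minute hold
print(reinforced(80))   # False: too late, the hold has expired
```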


Describe how an FI/LH schedule is procedurally similar to a simple FI schedule. Describe how it procedurally differs.

Similar: The reinforcer becomes available only after a fixed period of time.
Different: Unlike in simple FI, in FI/LH there is a limited period after the fixed interval during which the response must be performed in order to get the reinforcer.


Explain what a VI/LH schedule is. Illustrate with two examples from everyday life, at least one not in this chapter.

A variable-interval schedule (a reinforcer is presented following the first instance of a specific response after an interval of time, and the length of the interval changes unpredictably from one reinforcer to the next) with a limited hold (a deadline for meeting the response requirement of a schedule of reinforcement).
Example 1: The timer game. A timer was purchased that could be set to ring at any interval between one and thirty minutes. Every time the timer rang, if the children were playing nicely they got 5 extra minutes of TV. Since they had to be cooperative the instant the timer rang, the limited hold was 0 seconds, so it was VI 30 minutes/LH 0 seconds.
Example 2: Pancakes need to be flipped at some point between 3 and 8 minutes, with the average time being 5 minutes. Once they’re ready to be flipped, you must do so within 20 seconds or they will burn. This is VI 5 minutes/LH 20 seconds.


Give two examples of how VI/LH might be applied in training programs

Example 1: The timer game in classrooms: if children are working quietly when the timer goes off, they get extra free time. VI 30 minutes/LH 0 seconds.
Example 2: To train children to pay attention during lectures, a teacher holds up a green cue card at random times during a 30-minute class lecture, on average once every 10 minutes. When she does so, the children have 5 minutes to write down the word she said as she held up the card. This is VI 10 minutes/LH 5 minutes.


For each of the photos, identify the schedule of reinforcement that appears to be operating.

a) After an unpredictable amount of time, one gets one’s luggage.
This is a variable-interval schedule.
b) After a fixed number of pieces are stacked on a pegboard, all the pieces will be stacked.
This is a fixed-ratio schedule.
c) After a fixed period of time in the dryer, the clothes come out dry.
This is a fixed-interval schedule.
d) An enjoyable scene occurs on TV unpredictably and lasts briefly.
This is a variable-interval schedule with a limited hold.


Explain what an FD schedule is. Illustrate with two examples of FD schedules that occur in everyday life (with at least one not in this chapter).

Fixed-duration (FD) schedule: a reinforcer is presented only if a behavior occurs continuously for a fixed period of time. The value of the schedule is the amount of time that the behavior must be engaged in continuously before reinforcement occurs.
Example 1: Melting solder: one must hold the tip of the soldering iron in place continuously; if it’s removed too soon, the solder cools too quickly.
Example 2: John must hold the plank position for 1 full minute to pass his gym class.
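A minimal sketch of the FD rule, using the 1-minute plank example; the second-by-second observation list and function name are illustrative assumptions:

```python
# Sketch of the FD rule using the 1-minute plank example; the
# second-by-second list of observations is an illustrative assumption.

def fd_reinforced(engaged_each_second, duration=60):
    """True once the behavior has occurred continuously for `duration`
    seconds; any break resets the clock."""
    run = 0
    for engaged in engaged_each_second:
        run = run + 1 if engaged else 0   # a pause resets the duration
        if run >= duration:
            return True                   # e.g., John passes gym class
    return False

print(fd_reinforced([True] * 60))             # True: full minute held
print(fd_reinforced([True] * 59 + [False]))   # False: broke at 59 seconds
```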


Suppose each time you put bread in the toaster and press the lever, 30 seconds passes before your toast is ready. Is this an example of an FD schedule? Why or why not? Would it be an FD schedule if a) the catch that keeps the lever down doesn’t work, or b) the timer that releases it doesn’t work? Explain in each case.

No, it is a fixed-interval schedule. After the first response (pressing down the lever), you wait a fixed time, after which the reinforcer is presented.

a) If the catch that holds the lever down is broken, you must manually hold the lever down continuously for 30 seconds, so this would be an FD schedule.
b) If the timer that releases the lever isn’t working, you need only make one more response (releasing the lever) once the 30 seconds have passed, so it would still be an FI schedule rather than an FD schedule.


Explain why FD might not be a very good schedule for reinforcing study behavior

The behavior must be one that can easily be measured continuously and reinforced on the basis of its duration. With studying, it is hard to measure how long the person is actually studying versus how long they are doing something else, like daydreaming, texting, or reading an unrelated book.

Give two examples of how FD might be applied in training programs

Example 1: Some children with developmental disabilities do not make eye contact with others, and when adults try to initiate it, they quickly avert their eyes. FD may be used to increase eye contact by delivering reinforcement after a certain amount of time of maintained eye contact.
Example 2: A teacher giving piano lessons may use FD to increase the amount of time a child spends practicing, by delivering reinforcement after the child has played the piano for a certain amount of time.


Explain what a VD schedule is, and illustrate with an example of one from everyday life that isn’t from this chapter.

In a variable-duration (VD) schedule, a reinforcer is presented only if a behavior occurs continuously for a period of time, and the length of that period changes unpredictably from one reinforcer to the next. The mean duration is specified in the designation of the VD schedule.
Example: Rowing a boat from one end of a lake to the other on a day with no wind or current. The person must row consistently to reach their destination.


What are concurrent schedules of reinforcement? Give an example

When each of two or more behaviors is reinforced on a different schedule at the same time, the schedules of reinforcement that are in effect are called concurrent schedules of reinforcement.
Example: A person at home in the evening may have the choice of studying or watching TV.


If an individual has an option of engaging in two or more behaviours that are reinforced on different schedules by different reinforcers, what four factors in combination are likely to determine the response that the person will make?

a) The types of schedules that are operating.
b) The immediacy of reinforcement
c) The magnitude of reinforcement
d) The response effort involved in the different options.


Describe how intermittent reinforcement works against those who are ignorant of its effects. Give an example.

They may be unaware that a behavior may get worse before it gets better, so they give in to the behavior. This can inadvertently place the undesirable behavior on a VR or VD schedule of reinforcement, making it more persistent. Example: a parent who holds out against a child’s tantrums for varying lengths of time but eventually gives in is intermittently reinforcing the tantrums.

Name six schedules of reinforcement commonly used to develop behavior persistence.

a) Fixed ratio
b) Variable ratio
c) Fixed interval with limited hold
d) Variable interval with limited hold
e) Fixed duration
f) Variable duration


In general, which schedules tend to produce higher resistance to extinction (RTE), the fixed or variable schedules?

Variable Schedules

Who wrote the classic authoritative work on schedules of reinforcement and what is the title of that book?

Ferster and Skinner, Schedules of Reinforcement.

What may account for the failures to obtain, in basic research with humans, the schedule effects that are typically found in basic research with animals?

Humans have complex verbal behavior, which is emitted and responded to. Humans can verbalize rules that may lead them to show different behavior patterns than animals show when exposed to various reinforcement schedules.
Humans may make statements to themselves about the schedule of reinforcement in effect and respond to those statements rather than to the actual schedule.


Describe how FR schedules may be involved in writing a novel

Some novelists stop writing immediately after completing each chapter of a book; after a brief pause of a day or so, they resume writing at a high rate, which is maintained until the next chapter is completed. Longer pauses typically occur after a draft of a manuscript is completed. One may argue that completed chapters and drafts are reinforcers for novel writing that occur according to FR schedules.

Might it be better to reinforce a child for dusting the living room furniture for a fixed period of time or for a fixed number of items dusted? Explain your answer.

A fixed number of items, because under a fixed period of time the child may dust slowly and complete fewer items.

Briefly describe how schedules of reinforcement can help us understand behavior that has frequently been attributed to inner motivational states.

A VR schedule with a low rate of reinforcement can account for highly persistent behavior, e.g., that of a dedicated student or a compulsive gambler, without appealing to inner motivational states.