Operant Conditioning In Psychology

Discrimination training refers to conditioning an animal to make an operant response only to a specific stimulus (Gray & Bjorklund, 2014). Herrnstein, Loveland, and Cable (1976) argued that the stimuli commonly used in operant conditioning (100 Hz tones, 465 mµ lights) bear little resemblance to natural settings, where responding to a single specific stimulus is rarely what the animal must do. For example, in a natural setting a squirrel does not respond only to acorns that look like the one presented as an image in a controlled laboratory experiment; it responds to acorns of all colors and shapes. Squirrels generalize across the variety of acorns while discriminating them from other objects in their environment (Herrnstein, Loveland, & Cable, 1976). In Herrnstein and colleagues' discriminative …
A continuous schedule of reinforcement (CSF) is used for acquisition, meaning an individual is learning or performing a behavior for the first time. A real-world example is toilet training a toddler: if the parent's goal is to teach the child to flush the toilet immediately after use, implementing a CSF is more effective. Once the behavior has been learned, it is kept consistent through maintenance (which reduces extinction); this is done by changing to an intermittent schedule of reinforcement (ISF) so that the individual becomes less dependent on the reinforcer (Miltenberger, 2012). In an experiment conducted by Ferster and Skinner (1957), pigeons were placed in mechanically instrumented cages with an illuminating light and a pecking key that dispensed food automatically when the desired behavior occurred. By manipulating the schedule in each condition, Ferster and Skinner observed and recorded the pigeons' responding. Having tested many schedules of reinforcement, Ferster and Skinner grouped them into two broad families: ratio and interval (Ferster & Skinner, 1957). One level down in this hierarchy are the four basic schedules of reinforcement: fixed ratio (FR), variable ratio (VR), fixed interval (FI), and variable interval (VI) (Ferster & Skinner, 1957).
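To make the distinction between continuous and intermittent reinforcement concrete, the following is a minimal sketch in Python, assuming a simple response-by-response rule; the function names and the ratio value of 5 are illustrative assumptions and are not drawn from Ferster and Skinner's procedure.

def continuous_reinforcement(response_count):
    # CSF: every desired response is reinforced (equivalent to a fixed ratio of 1).
    return True

def fixed_ratio_reinforcement(response_count, n=5):
    # ISF example (FR 5): only every fifth desired response is reinforced,
    # so the learner becomes less dependent on the reinforcer.
    return response_count % n == 0

# The first ten responses under each rule: the CSF reinforces all of them,
# while the FR 5 rule reinforces only responses 5 and 10.
for response in range(1, 11):
    print(response,
          continuous_reinforcement(response),
          fixed_ratio_reinforcement(response))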
In interval schedules, the number of responses does not determine when reinforcement is delivered; the participant is reinforced after a certain amount of time has passed. In an FI schedule, the length of the interval before reinforcement becomes available is the same on every occurrence. If an FI schedule is set to 20 seconds, the participant cannot be reinforced at any point before the 20-second interval has elapsed; the first desired response made after the interval elapses is reinforced, and this cycle repeats every 20 seconds until the session ends (Miltenberger, 2012). FI schedules typically show an increase in responding toward the end of the interval (Miltenberger, 2012). A classroom period operates much like an FI schedule: students become much more attentive to the clock toward the end of the class period and less attentive to the material (Alloway, Wilson & Graham,
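As a rough illustration of that timing rule, here is a minimal sketch in Python, assuming responses arrive at known times in seconds; the function name and the example response times are made up for illustration and are not taken from Miltenberger (2012).

def fi_schedule(response_times, interval=20.0):
    # Reinforce only the first desired response after each interval has elapsed.
    reinforced = []
    interval_start = 0.0
    for t in sorted(response_times):
        if t - interval_start >= interval:
            reinforced.append(t)   # first response after the interval elapsed
            interval_start = t     # the next interval starts at this reinforcement
    return reinforced

# Responses cluster near the end of each interval (the typical FI pattern);
# here only the responses at 21, 41, and 62 seconds are reinforced.
print(fi_schedule([5, 18, 21, 25, 39, 41, 44, 62]))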
