52 Cards in this Set

Classical Conditioning
Focuses on responses that are automatic and involuntary, i.e., responses that are not deliberate and do not require effort.
Also known as: Pavlovian, respondent, stimulus-response conditioning
Pavlov
Focused on the links between stimuli and responses, i.e., "reflexes."
Unconditioned Reflex
When no learning has yet taken place and the link between the stimulus and response is inborn and automatic. Same for all members of the species.
US -> UR
Ex. Meat powder (US) -> salivation (UR), turning off lights (US) -> pupil dilation (UR)
Conditioned Reflex
Results from experience and LEARNING. Varies significantly among members of the species.
Conditioned stimulus (CS) -> conditioned response (CR)
How is a CR learned?
A stimulus that naturally evokes no automatic response (the neutral stimulus, NS) is paired repeatedly with the unconditioned stimulus (US). The NS comes to elicit a response similar to the UR.
Ex. a tone (NS) is paired with meat powder (US) until the tone alone -> salivation (CR). The NS is then called the CS.
Temporal Sequence
The sequence in which the US and NS are presented makes the KEY difference in whether conditioning occurs. Not only the contiguity (closeness in time) but also the CONTINGENCY matters.
DELAY CONDITIONING
Standard pairing: the CS precedes the US by a short interval and overlaps the presentation of the US. The US appears to depend on the presentation of the CS.
TRACE conditioning
The CS precedes the US by a period of time and STOPS RIGHT BEFORE the US begins.
TEMPORAL conditioning
The US is presented repeatedly at a consistent TIME INTERVAL. Eventually, TIME ITSELF BECOMES THE CS. Ex. zoo feedings at regular times.
SIMULTANEOUS CONDITIONING
The NS and US completely overlap. For example, meat powder (US) is presented at the exact same time as the tone (NS). NO LEARNING actually takes place: there is NO CONTINGENCY, because the US does NOT DEPEND on the NS.
Backward conditioning
The US precedes the NS. Ex. the meat powder (US) is presented first and the tone (NS) follows. NO LEARNING occurs; there is NO CONTINGENCY.
STIMULUS GENERALIZATION
Mediated generalization: the subject automatically generalizes from a conditioned stimulus (CS) to other, similar stimuli. Occurs AUTOMATICALLY, without deliberate pairing.
Example: Little Albert
HIGHER ORDER CONDITIONING
A DELIBERATE process in which a conditioned stimulus (CS) is paired with a typically unrelated neutral stimulus until the NEW NS becomes a conditioned stimulus (CS) and elicits the conditioned response (CR).
Ex. a tone (CS1) is repeatedly paired with a flash of light (NS) until the light alone elicits salivation (CR). Light = CS2 (2nd-order conditioning).
It is impossible to condition beyond the 3rd level.
CLASSICAL EXTINCTION
Results from repeatedly presenting the CS WITHOUT the US. The tone (CS) is presented without the meat powder (US); eventually the dog stops salivating to the tone (no more CR).
Presenting the US without the CS does NOT produce extinction.
SPONTANEOUS RECOVERY
Following a rest period, the CR to the CS often briefly reappears; it will vanish again if extinction trials continue.
STIMULUS DISCRIMINATION
The animal learns to discriminate between two similar, initially neutral stimuli because one has been paired with the US while the other has not. If the discrimination is made too difficult, the animal experiences EXPERIMENTAL NEUROSIS.
PSEUDO CONDITIONING
Occurs accidentally: an NS that was not deliberately paired with either the US or the CS comes to elicit the CR.
HABITUATION
The subject becomes accustomed to and less responsive to a US after repeated exposure (Ex. living next to train tracks). The US no longer elicits the UR. (ALWAYS INVOLVES THE UNCONDITIONED STIMULUS)
Operant Conditioning
Explains VOLUNTARY behavior, learned as a result of reward and punishment. AKA Skinnerian or instrumental conditioning. [THORNDIKE & BF SKINNER]
THORNDIKE'S LAW OF EFFECT
Behaviors are initially emitted in a random, trial-and-error fashion. Behaviors followed by pleasurable consequences (rewards) become stronger and more frequent; those followed by unpleasant consequences (punishment) get weaker, though Thorndike later dropped that part of the law.
REINFORCEMENT & PUNISHMENT
REINFORCEMENT & PUNISHMENT are the consequences or the contingencies that follow a target behavior.
Reinforcement INCREASES behavior
Punishment DECREASES behavior
POSITIVE means something is ADDED (+)
NEGATIVE means something is SUBTRACTED or TAKEN AWAY (-)
Positive Reinforcement
REWARD.
Something of value is given to the person, bringing them into a desirable state and increasing the behavior.
Negative Reinforcement
RELIEF. After the behavior is performed, something annoying or aversive is removed, so the behavior will increase. E.g., nagging continues until the seatbelt is on; the person will buckle up again just to stop the nagging.
Positive Punishment
PAIN. After the behavior is performed, something aversive is added, decreasing the likelihood of the behavior. E.g., scolding after swearing.
Negative Punishment
LOSS. After the behavior is performed, something is taken away, decreasing the behavior. E.g., the TV is taken away after swearing.
Schedules of Reinforcement
Acquisition: period of new learning
Extinction: period in which reinforcement is withheld
Operant strength: measured by the rate of responding (usually a result of the schedule used)
CONTINUOUS REINFORCEMENT
Reinforcing EVERY occurrence of the behavior, like giving candy every time homework is done. The best schedule for acquiring a new behavior.
SATIATION
Phenomenon of a reinforcer losing its value through overuse.
THINNING
Changing from continuous to an intermittent schedule of reinforcement.
INTERMITTENT REINFORCEMENT
In RATIO schedules, reinforcement is based on HOW MANY TIMES the behavior is performed.
In INTERVAL schedules, reinforcement is based on TIME.
FIXED INTERVAL
Reinforcement happens the 1st time the target behavior is emitted after a FIXED TIME interval has elapsed.
Response rate is LOW or nonexistent, and increases toward the end of the interval.
VARIABLE INTERVAL
Reinforcement occurs the 1st time the target behavior is emitted after an UNPREDICTABLE VARIABLE amount of time has elapsed.
Response rate is moderate and without pause.
FIXED RATIO
Reinforcement occurs after a certain, unchanging NUMBER of responses is emitted (e.g., factory piecework)
Response rate is typically moderate to high, and there may be a pause after reinforcement
VARIABLE RATIO
Reinforcement occurs after an UNPREDICTABLE NUMBER of responses is emitted (e.g., slot machines)
Response rate is high, with little pause
Rates of Responding during acquisition
Greatest operant strength 1. VR 2. FR 3. VI 4. FI
RESISTANCE TO EXTINCTION
Follows same pattern as operant strength.
1. VR 2. FR 3. VI 4. FI
PATTERN OF RESPONDING
A SCALLOPED pattern occurs in FIXED schedules, where there is a pause after reinforcement (1. FI, 2. FR). Variable schedules produce a smoother response pattern.
OPERANT EXTINCTION
Ceasing to reinforce a behavior that was previously reinforced. Results in a RESPONSE BURST, in which the behavior briefly increases; without reinforcement, it then decreases.
SUPERSTITIOUS BEHAVIOR
Results from accidental or non-contingent reinforcement, i.e., reinforcement applied in an arbitrary, inconsistent fashion that is not linked to the emission of the target behavior. (Ex. "lucky" socks)
DISCRIMINATION LEARNING
(Stimulus control) In the real world, target behaviors are reinforced in certain circumstances but not in others, and the subject learns to DISCRIMINATE between those situations. The stimulus that signals forthcoming reinforcement is the DISCRIMINATIVE STIMULUS (SD); the stimulus that signals that reinforcement will NOT take place is called the S DELTA.
Whining in front of grandma gets attention (grandma = SD), while mom ignores it (mom = S DELTA).
STIMULUS GENERALIZATION
When a subject begins to emit the target behavior in the presence of a stimulus SIMILAR TO but not exactly the same as the discriminative stimulus.
Ex. the whiny kid whines around all older people, not just grandma
RESPONSE GENERALIZATION
Performing a BEHAVIOR that is SIMILAR to but not identical to the one that was reinforced.
Ex. a dog does a trick for a biscuit, then later offers a different trick.
PROMPTING
Cueing the subject about what behavior to perform.
i.e. "say thank you, johnny."
FADING
Gradually reducing prompts, from a full CUE to "What do you say?" to just a look.
SHAPING BY SUCCESSIVE APPROXIMATIONS
Reinforcing behavior at each step along the way as the person gets closer and closer to the desired behavior.
CHAINING
A complex sequence of behavior in which each step is reinforced and serves as a cue (discriminative stimulus) to perform the next behavior. The major reinforcement happens at the end of the chain. EVERY STEP MUST BE REINFORCED or the chain will stop.
PREMACK PRINCIPLE
A behavior that is freely performed at a high frequency typically has a strong reinforcing value.
Also known as Grandma's Rule: using a high-frequency behavior to reinforce a low-frequency behavior ("eat your spinach [low freq], then you can go out and play [high freq]").
BEHAVIORAL CONTRAST
Two behaviors are equally reinforced; then only one of them continues to be reinforced. The behavior still being reinforced increases in frequency, while the other decreases.
SOCIAL LEARNING THEORY
(Theory of Observational Learning) According to this theory, human learning and behavior cannot fully be explained by behavioral principles of reinforcement (operant conditioning) or association (classical conditioning).
Learning occurs through observation and modeling.
The classic Bobo doll study by BANDURA, ROSS & ROSS found that children exposed to violent models tended to imitate the exact violent behavior.
BANDURA'S SOCIAL LEARNING THEORY
We do not perform a behavior because it was reinforced in the PAST, but because we anticipate FUTURE reinforcement (a cognitive activity).
RECIPROCAL DETERMINISM
An INTERACTIVE TRIAD of the person, their behavior, and the environment regulates behavior.
OBSERVATIONAL LEARNING (BANDURA)
1. attention (attending to the model)
2. retention
3. production
4. motivation