41 Cards in this Set


FR schedule characteristics

- Longer post-reinforcement pauses with higher ratios


- Faster rate of response with higher ratios

VR schedule characteristics

- Produces a steady rate of responding


- Faster rate of response for lower ratios


- Faster rate of responding than VI

Ratio strain

Ratio so high the subject no longer responds. Can be measured using a progressive ratio.

FI schedule characteristics

- If food is delivered regardless of behavior, it is a Pavlovian timed schedule


- Produces a scalloped rate of response

VI schedule characteristics

- Produces a steady rate of responding


- Results in faster responding than when reinforcement is delivered on no schedule at all (i.e., response-independent delivery)
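
The defining delivery rules of ratio and interval schedules can be sketched in code. This is a minimal, illustrative simulation (the parameters — one response per second, FR-10, VI-30 s — are assumptions, not data): a ratio schedule pays off in direct proportion to responding, while an interval schedule caps reinforcers at roughly one per mean interval.

```python
import random

# Illustrative simulation (all parameters assumed): responses occur once per
# second; we count reinforcers earned under an FR and a VI delivery rule.

def fixed_ratio(responses, ratio=10):
    """FR rule: reinforce every `ratio`-th response."""
    return sum(1 for i in range(1, responses + 1) if i % ratio == 0)

def variable_interval(responses, mean_interval=30.0, seed=0):
    """VI rule: reinforce the first response after a random interval elapses."""
    rng = random.Random(seed)
    reinforcers = 0
    available_at = rng.expovariate(1 / mean_interval)
    for t in range(1, responses + 1):   # one response each second
        if t >= available_at:
            reinforcers += 1
            available_at = t + rng.expovariate(1 / mean_interval)
    return reinforcers

# FR pays off in direct proportion to responding...
print(fixed_ratio(300), fixed_ratio(600))   # 30 60
# ...while VI yields roughly one reinforcer per mean interval,
# largely independent of how fast the subject responds.
print(variable_interval(300))
```

This is one reason VR/FR schedules sustain faster responding than VI: on a ratio schedule, responding faster directly raises the reinforcement rate.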

Limited hold

A restricted period of time during which a set-up reinforcer remains available after the schedule makes it so; a response after the hold expires goes unreinforced

Concurrent schedule of reinforcement

Experiment to test two different reinforcement schedules at the same time (e.g., an FI lever and an FR lever in the same box)

Thorndike's law of effect

- A reinforcer only serves to strengthen or weaken an S-R relationship


- Highlights that at some point an S-R association can become automatic (habitual)

Two-process theory

S-R association establishes the instrumental habit


S-O association establishes reward expectancies

Pavlovian-instrumental transfer test

Separately training an instrumental S-R association (button press → picture on screen) and a Pavlovian S-O association (color on screen → picture on screen), then testing whether together they have a synergistic effect (the color on screen increases pressing of the corresponding button)

R-O associations

Highlight the goal directed aspect of behavior

Proof of an R-O association

- Reducing R-O contingency reduces R


- Reinforcer devaluation reduces R (only affects R if the reinforcer is experienced in its devalued state)


- A rat that has acquired a taste aversion will still press the R1 lever until it is given a pellet and experiences the revulsion; only then does pressing of R1 decline

Incentive learning

Reinforcer devaluation does not reduce responding unless the subject has learned the reinforcer's new value by experiencing it in its devalued state

Goal-directed vs habitual behavior

Goal-directed behavior is R-O (medial temporal lobe), but if you are distracted from the goal, S-R (striatum) takes over. The longer an association is trained, the more habitual it becomes



E.g., subjects learned a weather-prediction task with or without a secondary task. Both groups learned well, but those with a secondary task had poorer explicit knowledge of the relationship

First phase of instrumental extinction

Extinction burst, frustration, increased variability in R

Second phase of instrumental extinction

Consistent decline in R

Spontaneous recovery

When the CR recurs after a period of time. Differs from spontaneous recovery of habituation because the strength of the recovered response is not correlated with how much time has passed

Renewal

When a CR is not extinguished because the extinction and test contexts are different. The context serves to disambiguate the CS since acquisition and extinction create conflicting memories

Reinstatement

When the CR comes back after a single presentation of the US without the CS (e.g., a free delivery of the reinforcer after extinction)

Resurgence

When an extinguished instrumental R comes back as another response is being extinguished (train left, extinguish left, train right, extinguish right, left comes back)

Ways to enhance extinction

- More extinction trials


- Manipulating the time between extinction trials and between acquisition and extinction (short intervals = fast extinction with spontaneous recovery; long intervals = slow extinction without spontaneous recovery)


- Repeating extinction+test cycle


- Extinction training in multiple contexts


- Presenting extinction reminder cues


- Compounding extinction stimuli (if X, Y, and L each undergo acquisition and extinction, the compound LX shows higher responding than Y during compound extinction but lower responding than Y at test)


- Priming (presenting the CS without the US after acquisition but before extinction; reduces spontaneous recovery if the prime occurs before the acquisition memory is consolidated)

Overtraining extinction effect

The stronger the S-R relationship, the faster it is extinguished

Magnitude of reinforcement extinction effect

Larger reinforcer = faster extinction

Partial reinforcement extinction effect

Extinction is faster after continuous reinforcement (CRF) than after partial reinforcement (PRF) because frustration drives instrumental extinction, and frustration is more salient after CRF than after PRF

Sequential theory of PREE

Memories of nonreward can also become response cues. Explains why PRF is extinguished more slowly (trials that do not result in reinforcement continue to motivate responding)

Multiple schedule of reinforcement

Different schedules are in effect depending on which stimuli are present (e.g., VI when the keylight is off, FR when it is on)

Stimulus discrimination

When a subject responds differently to different dimensions of a stimulus, or to two different stimuli

Stimulus generalization

When a learned response to one stimulus also occurs in response to another. Usually results in a gradient of responding centered on the original stimulus

Reinforcement belongingness

A compound stimulus (light+tone) can have varying amounts of stimulus control depending on the reinforcement (food vs. shock)



Can result in overshadowing, where one element of the compound stimulus is more strongly trained than the other (e.g., light → food and tone → shock)

Stimulus element approach

Stimuli are processed as many individual aspects

Configural cues approach

Processing cues as a whole

Positive patterning discrimination

A-/B-/AB+

Negative patterning discrimination

A+/B+/AB-
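
Negative patterning is a standard argument for the configural-cues approach: if the response to a compound were just the sum of its elements' strengths, V(AB) = V(A) + V(B), no element weights could make A and B excitatory while keeping AB suppressed. A minimal sketch (the weights and threshold are assumed, illustrative values):

```python
# Elemental model: the compound's strength is the sum of its elements'.
def elemental_prediction(w_a, w_b):
    """Predicted strengths for A, B, and the compound AB."""
    return w_a, w_b, w_a + w_b

def solves_negative_patterning(w_a, w_b, threshold=0.5):
    """A+/B+/AB- requires A and B above threshold but AB below it."""
    v_a, v_b, v_ab = elemental_prediction(w_a, w_b)
    return v_a > threshold and v_b > threshold and v_ab < threshold

# Brute-force search over element weights: no pair works, because
# V(AB) = V(A) + V(B) always exceeds both positive element strengths.
candidates = [(a / 10, b / 10) for a in range(-20, 21) for b in range(-20, 21)]
print(any(solves_negative_patterning(a, b) for a, b in candidates))  # False

# Adding a configural unit active only on the compound solves it:
def configural_prediction(w_a, w_b, w_ab):
    return w_a, w_b, w_a + w_b + w_ab

v_a, v_b, v_ab = configural_prediction(1.0, 1.0, -2.0)
print(v_a, v_b, v_ab)  # 1.0 1.0 0.0 -- A and B excitatory, AB suppressed
```

Positive patterning (A-/B-/AB+) yields the mirror-image argument: a configural unit with a positive weight lets the compound alone exceed threshold.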

Stimulus discrimination training

Teaching a subject to differentiate an S+ and S- using reinforcement



Phase 1 = indiscriminate responding to S+ and S-


Phase 2 = reduced responding to S- and increased responding to S+

Spence's theory of discriminative learning

S+ gains excitatory response properties while S- gains inhibitory response properties

Intradimensional discrimination

Learning to differentiate two stimuli that are the same except for one aspect

Peak-shift effect

A shift of the peak of the net generalization gradient past the S+, away from the S-, that occurs when the S+ and S- are similar, due to overlap between the S+ and S- generalization gradients
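
Spence's overlapping-gradients account of the peak shift can be shown numerically. In this sketch everything — the Gaussian gradient shapes, their widths, and the wavelength values — is an assumption chosen for illustration, not data from an experiment:

```python
import math

# Spence-style model: an excitatory gradient centered on S+ minus an
# inhibitory gradient centered on a nearby S-. The net gradient peaks on
# the far side of S+ from S-: the peak-shift effect.

def gaussian(x, center, width):
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

S_PLUS, S_MINUS = 550, 540          # e.g., key-light wavelengths in nm (assumed)
stimuli = range(500, 601)

def net_strength(x):
    excitation = 1.0 * gaussian(x, S_PLUS, 20)   # from S+ training
    inhibition = 0.6 * gaussian(x, S_MINUS, 20)  # from S- training
    return excitation - inhibition

peak = max(stimuli, key=net_strength)
print(peak)  # lies above 550: shifted past S+, away from S-
```

Widening the gap between S+ and S- shrinks the gradient overlap, and the peak moves back toward S+ — matching the observation that peak shift appears only when the two stimuli are similar.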

Theory of relational learning

Stimuli do not have an absolute value (excitatory or inhibitory), but a relative one (more/less excitatory/inhibitory)

Stimulus equivalence

Training to treat different stimuli the same (e.g., an actual dog, the written word dog, and the spoken word "dog")

Common response training

Initial training --> A & B = R1


Reassignment --> A = R2


Test --> B = R2?

Conditional relations

Significance of a stimulus is dependent on the status of another stimulus



Modulator = a stimulus that signals the relation between two events


Facilitation = when one cue signifies that another cue will be reinforced