
29 Cards in this Set

We adapt to stimuli; such adaptation is a form of habituation.
Ex.: when we stop noticing the ticking of a clock
Classical conditioning
A procedure in which a neutral stimulus is repeatedly paired with a stimulus that already triggers a reflexive response until the previously neutral stimulus alone evokes a similar response.
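The acquisition process described above can be sketched numerically. This is a hedged illustration in the spirit of the Rescorla-Wagner learning rule (a standard textbook model, not part of this card set); `condition`, `rate`, and `ucs_strength` are illustrative names.

```python
# Illustrative sketch (hypothetical names): the associative strength of the
# conditioned stimulus grows with each CS-UCS pairing until the CS alone
# evokes a strong conditioned response.
def condition(trials, rate=0.3, ucs_strength=1.0):
    """Return the CS's associative strength after `trials` CS-UCS pairings."""
    v = 0.0  # strength before any pairing: the stimulus is still neutral
    for _ in range(trials):
        v += rate * (ucs_strength - v)  # each pairing closes part of the gap
    return v
```

Running the same update with `ucs_strength=0.0` (the CS presented without the UCS) drives the strength back toward zero, which parallels extinction.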
Unconditioned stimulus (UCS)
A stimulus that automatically triggers a reflexive, unlearned response.
Conditioned stimulus (CS)
The new stimulus being paired with the unconditioned stimulus.
Conditioned Response (CR)
The learned response elicited by the conditioned stimulus.
The quick relearning of a conditioned response after extinction.
Spontaneous Recovery
The reappearance of the conditioned response after extinction.
Stimulus generalization
After a conditioned response is acquired, stimuli that are similar but not identical to the conditioned stimulus also elicit the response, though to a lesser degree.
A neutral stimulus and an unconditioned stimulus (UCS) are paired. The neutral stimulus becomes a conditioned stimulus (CS), eliciting a conditioned response (CR).
Stimulus discrimination
Generalization is limited so that some stimuli similar to the conditioned stimulus do not elicit the conditioned response.
The conditioned stimulus is presented alone, without the unconditioned stimulus. Eventually the conditioned stimulus no longer elicits the conditioned response.
Edward L. Thorndike, an American psychologist.
Thorndike was studying animals' intelligence and ability to solve problems. He tested animals like cats with a puzzle box. The cats' learning was governed by the Law of effect.
Instrumental learning
Thorndike found that responses producing discomfort are less likely to be repeated, while responses are strengthened when they are instrumental in producing rewards.
Positive reinforcement
These are events that strengthen a response if they are experienced after the response occurs.
Negative reinforcers
These are stimuli such as pain, threats, or a disapproving frown that strengthen a response if they are removed after the response occurs.
B. F. Skinner
Skinner called the basic process of instrumental conditioning operant conditioning. In operant conditioning the organism is free to respond at any time, and conditioning is measured by the rate of responding.
Escape conditioning
Learning to make a response that ends a negative reinforcer.
Avoidance conditioning
Learning to make a response that avoids a negative reinforcer.
Decreasing the frequency of a behavior by either presenting an unpleasant stimulus or removing a pleasant one.
Reinforcement may be delivered on a continuous reinforcement schedule or one of four basic types of partial, or intermittent, reinforcement schedules:
Fixed ratio (FR), variable ratio (VR), fixed interval (FI), and variable interval (VI).
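The four partial schedules differ only in what triggers reinforcement: a count of responses (ratio) or elapsed time (interval), either fixed or variable. A minimal sketch, using hypothetical class names not drawn from any library:

```python
import random

class FixedRatio:
    """FR-n: reinforce every n-th response."""
    def __init__(self, n):
        self.n, self.count = n, 0
    def respond(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True  # reinforcement delivered
        return False

class VariableRatio:
    """VR-n: reinforce after a varying number of responses (mean n)."""
    def __init__(self, n, rng=None):
        self.n = n
        self.rng = rng or random.Random()
        self.count, self.target = 0, self._next_target()
    def _next_target(self):
        return self.rng.randint(1, 2 * self.n - 1)  # averages out to n
    def respond(self):
        self.count += 1
        if self.count >= self.target:
            self.count, self.target = 0, self._next_target()
            return True
        return False

class FixedInterval:
    """FI-t: reinforce the first response after t seconds since the last reward."""
    def __init__(self, t):
        self.t, self.last_reward = t, 0.0
    def respond(self, now):
        if now - self.last_reward >= self.t:
            self.last_reward = now
            return True
        return False

class VariableInterval:
    """VI-t: like FI, but the required wait varies around t seconds."""
    def __init__(self, t, rng=None):
        self.rng = rng or random.Random()
        self.t, self.last_reward = t, 0.0
        self.wait = self._next_wait()
    def _next_wait(self):
        return self.rng.uniform(0.0, 2.0 * self.t)
    def respond(self, now):
        if now - self.last_reward >= self.wait:
            self.last_reward, self.wait = now, self._next_wait()
            return True
        return False
```

An FR-3 schedule, for example, delivers reinforcement on exactly every third response, while the variable schedules make the next reinforcement unpredictable, which is why they tend to sustain steadier responding.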
Albert Bandura
Found that children who observed aggression reproduced that aggression in their own play.
Learned helplessness
Develops when people come to believe that their behavior has no effect on the world.
Premack Principle
When parents allow a teenager to use the car after he mows the lawn, they are offering activities high on the teenager's preference list to reinforce performance of activities lower on the hierarchy.
Disequilibrium hypothesis
Is illustrated by imagining a man who prefers eating over all other activities: even eating reinforces other behavior only when access to it has been restricted below its usual level.
Fixed-ratio (FR) schedules
reinforcement after a fixed number of responses.
Variable-ratio (VR) schedules
call for reinforcement after a varying number of responses.
Fixed-interval (FI) schedules
provide reinforcement for the first response that occurs after some fixed time has passed since the last reward.
Variable-interval (VI) schedule
reinforce the first response after some period of time, but the amount of time varies.
This is accomplished by reinforcing successive approximations, responses that come successively closer to the desired response.
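Reinforcing successive approximations can be sketched as a criterion that rises after each reinforced response; `shape` and its parameters are illustrative names, not from the source.

```python
def shape(responses, criterion=1.0, step=0.5, target=10.0):
    """Sketch of shaping: reinforce each response that meets the current
    criterion, then raise the criterion a step toward the target response."""
    reinforced = []
    for r in responses:
        if r >= criterion:            # a successive approximation is met
            reinforced.append(r)
            criterion = min(target, criterion + step)  # demand a bit more
    return reinforced, criterion
```

For the response sequence 0.5, 1.2, 0.9, 1.6, 2.3 this sketch reinforces 1.2, 1.6, and 2.3, each of which had to clear a stricter criterion than the last.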