34 Cards in this Set

Learning (Conditioning):
Learning is a relatively permanent change in behavior or knowledge that comes from experience or training.
Behaviorism:
The school of thought that stresses the need for psychology to be an objective science. In other words, psychology should be a science based on observable (and only observable) events, not the unconscious or conscious mind. This perspective was first suggested and propagated by John Watson in 1913, who wanted psychology to study only observable behaviors and get away from the study of the conscious mind completely. Watson's primary rationale was that only observable events are verifiable and thus are the only events that can be proven false. This is an extremely important concept for science; without it, how could you ever find out what is true, false, real, or fake?
Unconditioned Stimulus:
In classical conditioning, an unconditioned stimulus (US or UCS) is any stimulus that can evoke a response without the organism going through any previous learning; the response to the US (the unconditioned response) occurs naturally. For example, if you smell a lemon, you might get a sour taste in your mouth and you may salivate. This may occur from the time you are born and can occur without you ever having tasted a lemon before. The lemon, therefore, is a US since it produced the salivation and sour taste (the UR) naturally, without you having any previous experience with lemons.
Unconditioned Response:
In classical conditioning, there are stimuli that can produce responses by themselves, without any prior learning. These types of stimuli are called unconditioned stimuli (US or UCS), and they evoke unconditioned responses (UR or UCR): responses that are completely natural and occur without an organism going through any prior learning. For example, if you smell a lemon, you might get a sour taste in your mouth and you may salivate. This may occur from the time you are born and can occur without you ever having tasted a lemon before. The salivation and sour taste would be unconditioned responses.
Conditioned Stimulus:
In classical conditioning, a formerly neutral stimulus that, after association with an unconditioned stimulus (US), comes to produce a conditioned response. For example, a dog salivates (UR) from the smell of a bone (US) naturally, without any conditioning. Once some neutral stimulus (for example, a "beep" that would not naturally or normally cause the dog to salivate) has been paired with the bone for some time, the dog will salivate (CR) when the "beep" occurs. Once the beep has the capacity to elicit the salivation, it is considered a conditioned stimulus (CS).
Conditioned Response:
In classical conditioning, the conditioned response (CR) is the learned response (reflexive behavior) to a conditioned stimulus (CS). This response is almost identical to the unconditioned response, except that now the reflexive behavior occurs in response to a conditioned stimulus as opposed to an unconditioned stimulus. For example, a dog salivates (UR) from the smell of a bone (US) naturally, without any conditioning. Once some neutral stimulus (CS) (for example, a "beep" that would not naturally or normally cause the dog to salivate) has been paired with the bone for some time, the dog will salivate (CR) when the "beep" occurs.
Classical Conditioning:
First proposed and studied by Ivan Pavlov, classical conditioning is one form of learning in which an organism "learns" through establishing associations between different events and stimuli. For example, when a neutral stimulus (such as a bell) is paired with an unconditioned stimulus (such as food), which produces some involuntary bodily response all on its own (such as salivating), the neutral stimulus begins to trigger a response in the organism similar (some salivation) to that produced by the unconditioned stimulus. In this way, the organism has "learned" that the neutral stimulus signals something good (just like the unconditioned stimulus).
Extinction:
Extinction is a conditioning term that refers to the reduction of some response that the organism currently or previously produced. In classical conditioning, this results from the unconditioned stimulus NOT occurring after the conditioned stimulus is presented over time. In operant conditioning, it results from some response by the organism no longer being reinforced (for example, you keep getting your dog to sit on command, but you stop giving it a treat or any other type of reinforcement; over time, the dog may not sit every time you give the command).
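The acquisition and extinction dynamics described in these cards can be sketched as a toy simulation. This is only an illustrative associative-strength model (the function names, learning rate, and threshold are made up for the example, not taken from any textbook formula): pairing the CS with the US strengthens the association, presenting the CS alone weakens it.

```python
# Toy model of classical conditioning: CS-US associative strength grows
# toward 1.0 during pairing (acquisition) and decays toward 0.0 when the
# CS occurs alone (extinction). All numbers here are illustrative.

def run_trials(strength, n, us_present, rate=0.3):
    """Update associative strength over n trials of CS presentation."""
    for _ in range(n):
        if us_present:
            strength += rate * (1.0 - strength)  # bell + food: association grows
        else:
            strength -= rate * strength          # bell alone: association fades
    return strength

THRESHOLD = 0.5  # arbitrary cutoff: above this, the CS elicits the CR

strength = 0.0  # the bell starts out as a neutral stimulus
strength = run_trials(strength, 10, us_present=True)   # conditioning trials
print(f"after pairing: {strength:.2f}, salivates: {strength > THRESHOLD}")

strength = run_trials(strength, 10, us_present=False)  # extinction trials
print(f"after extinction: {strength:.2f}, salivates: {strength > THRESHOLD}")
```

Running it shows the CS crossing the response threshold after repeated pairings and dropping back below it once the US stops following the CS, which is exactly the extinction process the card describes.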
Spontaneous Recovery:
In classical conditioning, the reappearance of an extinguished conditioned response after a rest period, without any new conditioning trials. For example, after extinction a dog may once again salivate to the tone following a break, although the recovered response is usually weaker and will extinguish again quickly if the tone keeps occurring without food.
Higher Order Conditioning:
This is a classical conditioning term that refers to a situation in which a stimulus that was previously neutral (e.g., a light) is paired with a conditioned stimulus (e.g., a tone that has been conditioned with food to produce salivating) to produce the same conditioned response as the conditioned stimulus. If you understand how a neutral stimulus becomes a conditioned stimulus (conditioning), you understand higher order conditioning, because this is really just extending the conditioning one more level: the conditioning happens not by pairing the stimulus with something that naturally produces a response, but with something that has been conditioned to produce a response.
Stimulus Generalization:
The tendency to respond to a stimulus that resembles one involved in the original conditioning; in classical conditioning, it occurs when a stimulus that resembles the CS elicits the CR.
Stimulus Discrimination:
The tendency to respond differently to two or more similar stimuli; in classical conditioning, it occurs when a stimulus similar to the CS fails to evoke the CR.
Operant Conditioning:
Operant conditioning is a process of learning in which organisms learn to repeat behaviors that produce a positive outcome or to avoid a negative outcome. Operant conditioning is why underground electric fences (for example) keep dogs in the yard: the dog learns not to cross a certain point in order to avoid an electric shock. A more common example is the Skinner box, where an animal learns to trip a lever in order to get a treat. Some operant conditioning principles/terms you should get familiar with include reinforcement, punishment, schedules of reinforcement, and much more.
Reinforcement:
Reinforcement is a process that increases the frequency of a targeted behavior by using either a negative stimulus or a positive stimulus. An electrical shock (negative stimulus) can make a human jump (targeted behavior) just as well as surprising someone with a million dollars (positive stimulus). In this example, the electrical shock and the money are both reinforcers, even though one is considered unpleasant by most. As reinforcers, both increase the probability that the behavior (in this case, jumping) will occur again.
Punishment:
Any stimulus that suppresses a behavior. It is important to note that punishment is not the same as negative reinforcement. Is failing a test negative reinforcement or punishment? If it motivates you to study more, it is negative reinforcement (i.e., it increases the behavior of studying). However, if you feel that studying is actually hurting your performance (due to, for example, test anxiety), you will perceive that failing the test was due to studying too hard. Next time, you will not study (i.e., decrease your behavior) so that you will not be punished for it. Now you just need to convince your professor that bad grades are actually causing you to study less.
Primary Reinforcer:
This is a term used in conditioning, and it refers to anything that provides reinforcement to an organism without the need for learning. This means the reinforcer is naturally reinforcing to the organism. For example, water is naturally reinforcing because organisms don't need to learn to be reinforced by it; they are naturally reinforced by it, especially when thirsty.
Primary Punisher:
A stimulus that is inherently punishing; an example is electric shock.
Secondary Reinforcer:
Unlike primary reinforcers, which are naturally reinforcing, secondary reinforcers are reinforcing only after the organism has been conditioned to find them reinforcing. Some stimulus that does not naturally provide reinforcement is paired with a primary reinforcer so that the organism begins to associate the secondary reinforcer with the primary reinforcer. For example, if you recall Pavlov's dog, the dog naturally salivated in the presence of meat powder; the meat powder serves as a primary reinforcer. By pairing a sound with the meat powder over and over again, the sound became reinforcing to the dog because it had been associated with the primary reinforcer (the meat powder).
Secondary Punisher:
A stimulus that has acquired punishing properties through association with other punishers.
Positive Reinforcement:
A stimulus that increases the frequency of a particular behavior using pleasant rewards. A doggy treat can pleasantly coerce your new puppy to sit (positive reinforcement) just as a pull on the choke collar can achieve the same effect (negative reinforcement). The difference is that the positive reinforcer is pleasant, but make sure you understand that both increase the frequency of the behavior!
Negative Reinforcement:
With negative reinforcement, removing an unpleasant stimulus increases the occurrence of a behavior. For example, your dog can avoid being spanked when it sits in response to your command. If the dog has been getting spanked, not getting spanked is rewarding (removal of an unpleasant stimulus), so the frequency of the behavior will increase. People confuse negative reinforcement with punishment; just remember that with reinforcement you increase the occurrence of the behavior, while punishment extinguishes a behavior.
Fixed-Ratio Schedule:
With this type of operant conditioning reinforcement schedule, an organism must make a certain number of operant responses (whatever the response may be in that experiment) in order to receive reinforcement. For example, if you are conducting a study in which you place a rat on a fixed-ratio 30 schedule (FR-30), and the operant response is pressing the lever, then the rat must press the lever 30 times before it receives reinforcement. This type of schedule is called fixed because the number of operant responses required remains constant.
Variable Ratio Schedule (VR):
This is going to be a little confusing at first, but hang on and it will become clear. A variable ratio schedule (VR) is a type of operant conditioning reinforcement schedule in which reinforcement is given after an unpredictable (variable) number of responses made by the organism. This is almost identical to a fixed-ratio schedule, but the reinforcements are given on a variable or changing schedule. Although the schedule changes, there is a pattern: reinforcement is given every "N"th response on average, where N is the average number of operant responses. Let's give an example. You conduct a study in which a rat is put on a VR-10 schedule (the operant response is pressing a lever). This means the rat will get reinforced when it presses the lever, on average (and this "on average" is the key), every 10 times. However, because it is an average, the rat may have to press the lever 55 times one trial, then only 2 times the next, 30 the next, 50 the next, 1 time the next, and so on, just as long as it all averages out to reinforcement being delivered every 10 lever presses. See, it wasn't that bad.
Fixed-Interval Schedule:
With this type of operant conditioning reinforcement schedule, an organism must wait (either not make the operant response, whatever it is in that experiment, or make the response with the response producing nothing) for a specific amount of time and then make the operant response in order to receive reinforcement. For example, if you are conducting a study in which you place a rat on a fixed-interval 30 second schedule (FI-30s), and the operant response is pressing the lever, then the rat must wait for 30 seconds, then press the lever, and it will receive reinforcement. This type of schedule is called fixed because the amount of time the organism must wait remains constant. In addition, the investigator can determine what NOT waiting will do. If the rat presses the lever before the interval has elapsed, it can either make the interval start all over again (so if the rat waits 15 seconds and then presses the lever, it starts the 30 seconds all over again), or do nothing, so that the rat can press the lever constantly for 30 seconds and then the next press will produce reinforcement.
Variable Interval Schedule:
This is the interval counterpart of the variable ratio schedule. With this type of operant conditioning reinforcement schedule, reinforcement is delivered for the first operant response made after an unpredictable (variable) amount of time has passed, where the intervals vary around some average. For example, on a VI-30s schedule, the rat might have to wait 10 seconds before a lever press produces reinforcement on one trial and 50 seconds on the next, as long as the waits average out to 30 seconds. Just as with the ratio schedules, the "variable" part means the requirement changes from trial to trial while the average stays constant.
Shaping:
This is a behavioral term that refers to gradually molding or training an organism to perform a specific response (behavior) by reinforcing any responses that are similar to the desired response. For example, a researcher can use shaping to train a rat to press a lever during an experiment (since rats are not born with the instinct to press a lever in a cage during an experiment). To start, the researcher may reward the rat when it makes any movement at all in the direction of the lever. Then, the rat has to actually take a step toward the lever to get rewarded. Then, it has to go over to the lever to get rewarded (remember, it will not receive any reward for doing the earlier behaviors now; it must make a more advanced move by going over to the lever), and so on until only pressing the lever will produce a reward. The rat's behavior was "shaped" to get it to press the lever.
Successive Approximations:
Let's use the definition of "shaping" to explain successive approximations. Our definition of "shaping" is: "gradually molding or training an organism to perform a specific response by reinforcing any responses that come close to the desired response." In the shaping example, the researcher first rewards the rat for any movement in the direction of the lever, then only for stepping toward the lever, then only for going over to the lever, and so on until only pressing the lever produces a reward. Each time the rat is rewarded, it is being rewarded for a "successive approximation", or for acting in a way that gets closer and closer to the desired behavior.
Instinctive Drift:
During operant learning, the tendency for an organism to revert to instinctive behavior.
Behavior Modification:
A type of behavioral therapy in which the principles of operant conditioning (reinforcement, punishment, etc.) are used to eliminate some type of unwanted, maladaptive behavior. For example, a person may decide that they no longer want to smoke (the maladaptive behavior), and so the person is given a favorite piece of candy every time a cigarette is desired but refused. So, when the person wants a cigarette but does not have one, they get a piece of their favorite candy as a reward.
Extrinsic Motivation:
Why do you work, go to class, or study for a test? Do you do it because you want money, a degree, and good grades? If so, you are extrinsically motivated: motivated to perform specific behaviors to achieve promised outside rewards or to avoid punishment from others. You are not working at a job because you get a great feeling of personal satisfaction from it or because it makes you feel good about yourself (that you are a good person), but rather to gain some kind of reward. We are not saying there is anything wrong with this. We are only trying to explain the concept to you.
Intrinsic Motivation:
If you are working at a job because you get a great feeling of personal satisfaction from it, and you are performing the behavior for its own sake (not for outside rewards like money), then you are intrinsically motivated. We are not saying that this is better or worse than extrinsic motivation, only different. Intrinsic motivation does seem to be more satisfying to people, though. People who are extrinsically motivated tend to be less satisfied and become unhappy more easily (in general, not always).
Latent Learning:
The type of learning that occurs but isn't exhibited until there is some reinforcement or incentive to demonstrate it. This may seem a bit silly, but it is important to understand that there is a difference between learning and performance. For example, if you ride to school with a friend every day, but your friend does all the driving, you may learn the way to school but have no reason to demonstrate this knowledge. However, when your friend gets sick one day and you have to drive yourself for the first time, if you can get to school following the same route you would go if your friend were driving, then you have demonstrated latent learning.
Social-Cognitive Theories:
Theories that emphasize how behavior is learned and maintained through observation and imitation of others, positive consequences, and cognitive processes such as plans, expectations, and beliefs.
Observational Learning:
The process of acquiring information by observing others. Learning to tie your shoe by observing another individual perform the task would be an example of observational learning.
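The reinforcement schedules described above are essentially decision rules for when a response earns reinforcement, so they can be sketched in code. This is a minimal illustration (the class names, the `respond` method, and the way the variable target is drawn are all assumptions made for the example, not a standard API): fixed-ratio pays off every n-th response, variable-ratio after a random number of responses averaging n, and fixed-interval for the first response after a fixed wait.

```python
# Sketch of three reinforcement schedules as objects that decide whether a
# given operant response is reinforced. Names and parameters are illustrative.
import random

class FixedRatio:
    """FR-n: reinforce every n-th response (e.g. FR-30: every 30 presses)."""
    def __init__(self, n):
        self.n, self.count = n, 0
    def respond(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True   # reinforcement delivered
        return False

class VariableRatio:
    """VR-n: reinforce after a random number of responses averaging n."""
    def __init__(self, n):
        self.n, self.count = n, 0
        self.target = random.randint(1, 2 * n - 1)  # averages n over many trials
    def respond(self):
        self.count += 1
        if self.count >= self.target:
            self.count = 0
            self.target = random.randint(1, 2 * self.n - 1)
            return True
        return False

class FixedInterval:
    """FI-t: reinforce the first response after t seconds (e.g. FI-30s)."""
    def __init__(self, seconds):
        self.seconds, self.start = seconds, 0.0
    def respond(self, now):
        if now - self.start >= self.seconds:
            self.start = now   # interval restarts after reinforcement
            return True
        return False           # responding early produces nothing here

# FR-3: only every third lever press pays off
fr = FixedRatio(3)
print([fr.respond() for _ in range(6)])  # [False, False, True, False, False, True]

# FI-30: pressing at t=10s does nothing; the first press after t=30s pays off
fi = FixedInterval(30)
print(fi.respond(10), fi.respond(31))    # False True
```

A variable-interval schedule would look like `FixedInterval` with the wait redrawn around an average after each reinforcement, mirroring how `VariableRatio` varies the response count.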