43 Cards in this Set
1. Between vs. Within Experiment
In between-subject experiments, subjects are assigned to either a test or a control group, for example by stratified sampling, so that the experimental and control groups match each other at the very beginning of the experiment. This is meant to guarantee that the subjects have the same backgrounds, attitudes, and motivations. In this type of experiment it is important that the control and test groups are well matched; randomization, when done properly, should produce similar groups.

In within-subject experiments, each person acts as their own control, receiving one treatment and then another (or a control) in random order. This presents a challenge because the order in which the treatments are received can bias a subject toward one treatment or another.

This form of comparison created a bias in the example study of self-help tapes. The order in which the tapes were played made a significant difference in the results. The subjects were more relaxed in the second test simply because it was the second test, and not due to the factor being tested: the self-help tapes.

Both types of experiments can be biased. In between-subject experiments the control and treatment groups are never a perfect match, so differences in the results may have existed before the experiment began. In within-subject experiments the control and treatment groups are the same people, a perfect match, but this design can introduce confounds in how the test is run.
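The two designs can be sketched in a few lines of Python (the subject IDs and group sizes are invented for illustration): random assignment implements the between-subject split, and randomizing each person's treatment order (counterbalancing) addresses the order bias described above.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

subjects = [f"S{i}" for i in range(1, 9)]  # hypothetical subject IDs

# Between-subject: randomly split subjects into control and treatment groups.
shuffled = random.sample(subjects, len(subjects))
control, treatment = shuffled[:4], shuffled[4:]

# Within-subject: every subject receives both conditions; randomizing the
# order (counterbalancing) spreads order effects evenly across conditions.
orders = {s: random.sample(["control", "treatment"], 2) for s in subjects}

print("between:", control, "vs", treatment)
print("within, S1 order:", orders["S1"])
```

In the self-help-tapes example, counterbalancing would mean half the subjects hear the tapes first and half hear the control first, so "more relaxed on the second test" no longer masquerades as a treatment effect.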
2. Correlation vs. Causation
Correlation is the tendency of two variables to vary together: if one goes up as the other goes up, the correlation is positive; if one goes up as the other goes down, the correlation is negative. Causation is the claim that because x happens, y happens. Just because two things are correlated does not mean there is causation.
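A small Pearson-correlation sketch makes the distinction concrete (the ice-cream/drowning numbers are invented for illustration): the two series correlate almost perfectly, yet neither causes the other; both are driven by a third variable, hot weather.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient: covariance over product of spreads."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Ice-cream sales and drownings rise together (r close to +1),
# but buying ice cream does not cause drowning: summer causes both.
ice_cream = [20, 35, 50, 70, 90]
drownings = [2, 3, 5, 7, 9]
print(round(pearson(ice_cream, drownings), 3))
```

A high r tells you only that the variables move together; ruling confounds in or out takes an experiment, not a correlation.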
3. Associative vs. Non-associative Learning
Non-associative learning is a process where a single unconditioned stimulus results in an unconditioned response. It is the simplest form of learning where habituation, dishabituation and sensitization can affect motor response with prolonged exposure. An example of non-associative learning is swatting (UR) at a fly (US) that lands on one's skin.

Associative learning involves multiple stimuli, one that is unconditioned and another that is conditioned in conjunction. A conditioned response is produced, one that the subject would not normally have had before, given the same situation. An example of associative learning is the classical Pavlovian case where the dog is conditioned to salivate (eventually the CR, normally the UR when given food) upon hearing the bell ring (CS), all the while associating that the bell equals food (US), which is not present.

Unconditioned Response (UR) = product of the organism's biology and largely independent of any learning.
-every unconditioned reflex comes from a hardwired connection between an unconditioned stimulus and its response.

Conditioned Response (CR) = product of body's learning.
-to measure the strength of the learning, look at response amplitude (e.g., the amount of saliva secreted when the CS was present without the US) or response probability (the proportion of trials in which the CR occurs when the CS is presented alone)

extinction: the CR will gradually disappear if the CS is repeatedly presented by itself, without the US
--can be undone through reconditioning, which means more learning trials; the speed of learning the second time around should be faster than the first time
--spontaneous recovery can occur if the CS is presented after the extinction period: sometimes the CS elicits the CR, even though the CR was extinguished earlier

stimulus generalization: organism will respond to a range of stimuli, provided that these stimuli are sufficiently similar to the original CS
-stronger response if a stimulus is more similar to the original CS

discrimination: the animal must be able to differentiate between the correct stimulus and things that are similar to it. Ex: a lion and a cat are similar; a child must be able to discriminate the lion from the cat so they won't pet it.

Habituation refers to the decline in an organism's tendency to respond to a stimulus once the stimulus has become familiar.
4. Habituation vs. Sensitization
In psychology, habituation is an example of non-associative learning in which there is a progressive diminution of behavioral response probability with repetition of a stimulus. It is another form of integration. What happens is this: an animal first responds to a sensory stimulus, but if the stimulation ends up being neither rewarding nor harmful, the animal learns to gradually suppress its response through repeated encounters, causing the animal’s reaction to become smaller and smaller with every stimulus. Sensitization is a change in the behavioral or biological response produced by an organism caused by the delivery of a strong, generally noxious, stimulus.

Experiments were performed on Aplysia californica, a sea slug, to investigate the workings of habituation and sensitization and what happens in the animal's nervous system when each occurs. If the siphon of the animal is stimulated, the animal withdraws its gill. This occurs because the stimulus activates receptors in the siphon, which activate the motor neuron via an interneuron: a simple reflex circuit, to say the least. With repeated activation, however, the stimulus draws less and less of a reaction from the animal because it no longer registers the stimulus as a possible threat; it has become accustomed to it. For a reaction to be seen again, the animal can be “sensitized”: a large shock is applied, and afterward the Aplysia will again respond to the stimulus (often even more vigorously than it typically would). Sensitization is a primitive form of learning and memory.

Dishabituation: When the habituation cycle is broken due to a change in the environment or the stimulus itself.
5. Classical Conditioning vs. Operant Learning
Classical conditioning is the form of learning in which a stimulus that previously elicited no response, a conditioned stimulus, is paired with an unconditioned stimulus. What is learned is the relationship between the two stimuli. Pavlov's experiment is the well-known example of classical conditioning, in which dogs began salivating when they heard a bell.

Operant learning is also known as instrumental learning and it is a form of learning in which a reinforcer, like a treat, is given only if the animal performs the instrumental response. The relationship between a specific response and a reinforcer is learned.

The main difference between classical conditioning and operant learning is that in operant learning a reward is given only if the proper response is made (like a dog given a treat for the right trick). In classical conditioning the unconditioned stimulus is presented no matter what the animal does, i.e., the dog is fed whether it salivates at the sound of the bell or not.
6. Hyperpolarization vs. Depolarization as it applies to firing an action potential
When explaining a neuronal action potential, it is first important to understand some basic information about the cells. To begin, a cell's resting potential lies at about -70mV: the inside of the cell carries a negative charge relative to the outside. Furthermore, each cell membrane has ion channels, which allow ions to pass through, and ion pumps, which actively move ions across the membrane. Two ions in particular are crucial for maintaining the resting potential and, conversely, reaching an action potential: sodium and potassium.

Depolarization occurs when the membrane of a cell is sufficiently excited by some excitatory input and the ion channels of the membrane open temporarily, allowing positively charged sodium (Na+) ions to enter the cell. This excess of positively charged particles inside the cell destabilizes the membrane, changing the negative charge on the inside to a positive one. If there is enough temporal summation (input received at the same time) or spatial summation (input arriving at the same area) of this positive charge, the voltage will reach the excitation threshold (about -55mV in mammals) and immediately trigger an action potential. It is important to note that action potentials obey the "all-or-none" law: if the stimulus is strong enough to reach the excitation threshold, an action potential will propagate; if it does not reach this point, no action potential will take place. A neuron either fires or does not fire; there is no in-between.

Immediately following an action potential, the membrane begins to restore itself to stability. Hyperpolarization occurs as Na+ ions are pumped back out of the cell while potassium (K+) ions flow out of the neuron, to the point where the potential briefly undershoots the cell's resting potential, to roughly -80mV. Hyperpolarization is also produced by inhibitory synaptic activity, making it more difficult for a cell to reach its firing threshold. Soon after, however, the cell restores itself to -70mV, ready to start the process all over again.
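The all-or-none rule can be sketched numerically. This is a toy decision model, not a biophysical simulation: the millivolt values (-70mV rest, -55mV threshold) come from the card, while the input sizes are invented.

```python
RESTING_MV = -70.0    # resting potential from the card
THRESHOLD_MV = -55.0  # excitation threshold from the card

def fires(inputs_mv):
    """Sum excitatory (+) and inhibitory (-) inputs; all-or-none decision."""
    potential = RESTING_MV + sum(inputs_mv)
    return potential >= THRESHOLD_MV

print(fires([5, 4, 3]))      # 12 mV of summed input reaches -58 mV: below threshold
print(fires([5, 4, 3, 6]))   # 18 mV reaches -52 mV: crosses threshold, so it fires
print(fires([10, 10, -8]))   # inhibitory (hyperpolarizing) input cancels excitation
```

Note the third case: inhibitory input pushes the sum back below threshold, which is exactly how hyperpolarization makes firing harder.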
7. Afferent vs. Efferent Pathways
Afferent and efferent pathways are descriptors for the types of pathways between the Central Nervous System and the rest of the body. Afferent nerves carry information from sense organs (e.g., sight, touch) to the brain. Efferent nerves carry information from the Central Nervous System to the muscles and glands that carry out actions. All nervous pathways are either afferent or efferent, with the exception of the cranial nerves (those nerves that control vision, olfaction, audition, etc.), which are both afferent and efferent.

An easy way to remember which is which is that afferent pathways go "at" the CNS while efferent pathways "exit" the CNS.
8. Parasympathetic vs. Sympathetic Nervous System
The parasympathetic and the sympathetic nervous systems are divisions of the autonomic nervous system. Both divisions of the ANS are responsible for maintaining subconscious physiological functions of the visceral organs (i.e., the heart, lungs, blood vessels, digestive system, sexual organs, etc.). The parasympathetic division, also known as the “rest and digest” system, directs vital processes such as the digestion of food and the elimination of wastes. It regulates heart rate and blood pressure, keeping them at stable levels, while it constricts the pupils for close vision and increases the activity of the gastrointestinal tract. Basically, it creates a state of relaxation for the body. On the contrary, the sympathetic division, also known as the “fight-or-flight” system, is responsible for increasing activity during threatening situations, emergencies, and times of excitement. For example, it increases heart rate, dilates pupils, decreases the activity of the GI tract, and so forth. It basically works to cope with situations that threaten homeostasis, and it places the body in an aroused state.
9. Antagonist vs. Agonist Drug
Communication between the neurons in our body is dependent on neurotransmitters. Agonist and antagonist drugs affect the type and amount of neurotransmitters released and taken back up. Both influence neurons at the synapse (or space between neurons) and both work by mimicking or blocking the actions of neurotransmitters, and affecting their receptors.

Agonists enhance the activity of the neurotransmitter. This can happen in various ways. One way is to increase the transmitter's effect by blocking its reuptake, which leaves more transmitter in the synapse to be taken up by the receiving neuron. Another way is to increase the availability of the precursor required for the neurotransmitter's chemical manufacture: the more precursor there is, the more neurotransmitter can be made.

Antagonists impede the activity of the neurotransmitter. Opposite to agonists, antagonists do this by 1) speeding up reuptake, 2) decreasing the availability of precursors, or 3) binding themselves to the synaptic receptors, which blocks the transmitter from the receptor. The transmitter and receptor work like a lock and key; an antagonist binding to a receptor is like putty being pushed into the synaptic 'lock'.

Agonist and antagonist drugs affect the efficiency of communication between neurons through neurotransmitters. Some are helpful in psychology (usually the legal drugs), with effects such as relief of depression or anxiety, and some tend to harm the individual (usually the illegal drugs), causing effects such as psychosis or paralysis.
10. Skinner’s Negative Reinforcement vs. Punishment
The Positive refers to the Addition of a stimulus to the subject.
The Negative refers to the Removal of a stimulus.

Reinforcement is used to increase a response (i.e., using reinforcement to encourage a child’s good grades).

Using Positive Reinforcement: Getting good grades would result in the application of a satisfier, such as extra dessert or a new toy.

Using Negative Reinforcement: Getting good grades would result in the removal of an annoyer, such as not having to do the dishes after dinner or not having to take out the trash.

Punishment is used to decrease a response (i.e., using punishment to get a child not to misbehave).

Using Positive Punishment: More misbehavior would mean application of an annoyer, such as extra chores or spankings.

Using Negative Punishment: More misbehavior would lead to the removal of a satisfier, such as no more TV or no more dessert.
11. Perception vs. Sensory Process
Sensory process refers to the relatively meaningless bits of information that sensory organs receive and transduce into electrical signals sent to the brain. For example, it is the basic breakdown of frequencies in a word. By themselves, these frequencies have no worth or meaning to the listener.

Also, sensory process affects all senses—vision, audition, taste, smell, touch, proprioception (kinesthesis), and the vestibular sense—but it only affects one sense at a time.

In contrast, perception is the meaningful sensory experiences that result after the brain combines hundreds of sensations. For example, the words created by the combined frequencies have definitions and significance to us.

How are these two related? Raw information about the external world and our relation to it (sensory process) is transferred into proximal stimuli that can be perceived in our brain (perception). In the example above, the various frequencies making up the word stimulate our ears and are transduced and synthesized into various words which our brains can understand.

Regarding perception, there are two theories of how sensation is transformed into perception (how sensory process is related to perception). The Empiricists believed this is accomplished through the association of multiple stimuli during sensory processing, and that one learns these associations through experience. The Nativists believed that the way we transduce is preset and that innate categorization of sensory information leads to our perception; the brain naturally and innately knows the associations.
12. Opponent Processing Theory vs. Lateral Inhibition
“Opponent Processing Theory is a theory of color vision that proposes three pairs of color antagonists: red-green, blue-yellow, and white-black. Excitation of one member of a pair automatically inhibits the other member” (B15). It is based on the perceptual phenomena of color vision. Which hue a person experiences depends on these two opponent process pairs: the red-green ‘team,’ and the blue-yellow ‘team.’ The ganglion cells and cells in the LGN respond to the two pairs of colors; when excited they signal red and blue, and when inhibited they signal green and yellow. For example, if they are excited, we perceive the violet hue (mixture of red and blue). On the other hand, if each team is in balance then the color will be achromatic (without hue). However, if one team player (red, green, blue, or yellow) is stronger, then that is the unique hue we see (not a mixture like violet).

On the other hand, “Lateral Inhibition is the tendency of adjacent neural elements of the visual system to inhibit each other; it underlies brightness contrast and the accentuation of contours” (B12). Basically, it is a conflict between perception and sensory processes, for it allows us to process and perceive what we are seeing by taking in the sensory stimulus. It demonstrates the idea that sensory processes are not just passive recorders, but synthesizers, of stimulus input. Lateral inhibition contributes to the organization and interpretation of input. Without that, things like edge detection would not occur. The best explanation of what physically happens comes out of the book: “when any visual receptor is stimulated, it transmits its excitation to other cells that then relay this signal to the brain. But the receptor’s excitation…[also] stimulates neurons that extend sideways along the retina. These lateral cells make contact with neighboring cells and inhibit their activation” (189). So, it is ultimately a physiological mechanism that permits the visual system to locate the edges that define an object’s shape.

Overall, while both the opponent-process theory and lateral inhibition deal with contrast and perception, they differ in that the opponent-process theory takes care of hues, whereas lateral inhibition has to do with defining and interactions in space.
The only thing I would add to Opponent Processing Theory is a reminder of the demonstration we did in class, where the opposing colors reversed on the white screen after you stared at the colored figures for a period of time. Also, color is determined by light energy exciting the 3 types of cones. How these three types of cones are excited causes the ganglion cells they are connected to either to signal one color (red or blue) or not to signal, which is interpreted as the other (green or yellow).
Lateral inhibition allows for edge detection and increases the signal-to-noise ratio, which refines the retina's ability to localize small objects in space. This is best accomplished in the fovea because of the greater number of cones with small receptive fields. Because of this, items in the periphery appear fuzzy and out of focus, and this is why your eyeball continually moves even when you think it is still.
13. Habituation vs. Saturation for a Sensory Receptor’s jnd
Habituation is stimulus specific. It is the decline in an organism's tendency to respond to a stimulus once the stimulus has become familiar. Saturation, by contrast, is constant or repeated exposure to a stimulus to the point where the body can no longer produce a response.

How this is related to just noticeable difference:
When a person is trying to detect a just noticeable difference (jnd) in a sound, the subject may become habituated to the sound from the exposure, so they will no longer react to that sound but will react when a sound is distinctly different from the original. For saturation to occur, the subject must be exposed to the same sound constantly until they can no longer produce a response to that specific sound; a sound of a different pitch can then be played to determine what the subject perceives as a just noticeable difference.
14. Higher-Order Processing vs. Top-Down Modulation
Higher-order processing is made possible by the frontal lobe, where the motor strip is located and where we derive many of our executive controls, which include things like attention, perception, planning, making decisions, and understanding the consequences of our actions. When a higher-order ability like attention affects how we respond to a stimulus on a basic sensory level, that becomes top-down modulation.

Top-down modulation is the phenomenon whereby a person can be primed for a stimulus, or can ignore one sensory modality and consequently devote total concentration to another sensory modality in order to quicken response times.

An example of top-down modulation, in which abilities like attention and perception made possible by higher-order processing affect a basic sensory modality, is language. Dyslexia and aphasias can generally be attributed to the frontal lobe. Often, a person with dyslexia actually perceives the language that they hear or speak differently from others. Because higher-order processing elements like perception and attention affect a basic sensory modality like audition, this can be described as top-down modulation.
15. Behavior vs. Mental Process
Psychology is defined as the systematic and scientific study of behaviors and mental processes. Behavior refers to the observable actions and responses of humans and animals. These events are measurable and quantifiable and include the visible actions of a person. On the other hand, mental processes are research topics that are not directly observable. They rely heavily on inference, language, and self-report, making them more subjective than behaviors. Mental processes include thinking, emotions, and dreaming.
16. Explicit vs. Implicit Memory
Explicit memory is the conscious recollection of previous events or experiences. It is information or a fact that the person actively and consciously commits to memory. The information is processed when the subject recognizes it, using processing that is conceptually driven, also referred to as top-down processing. Recall is influenced by the way the subject commits the data to memory, for example whether the subject is influenced by their emotions, the context of the situation, etc.

Implicit memory is the unconscious, non-intentional, procedural form of memory. This is when a subject is affected by some past experience or conditioning without realizing that they are actually remembering. The information is imprinted in the subject’s memory in a manner similar to the way it is perceived. Implicit memory is also called data-driven or bottom-up processing. The subject does not realize they are committing data to memory and has difficulty with deliberate recall; when primed, however, the recollection comes easily because the subject doesn’t need to think about it.

Main differences between implicit and explicit memory
Explicit and implicit memory tasks tap into different types of memory in the brain. Implicit memories are automatic in nature and influence the subject whether the subject realizes it or wants it to. On the contrary, explicit memories are within the subject’s control; as the textbook states, “we can choose whether or not we will use information contained in the memory.”
17. Skinner vs. Chomsky on Language Development
Skinner’s view (the Behaviorist view) of language is that it is just another (albeit complex) behavior, to be explained by general learning principles; in other words, environment shapes action. This view is based on the ideas of classical and operant conditioning, including the proximity rule (the association of frequently paired syllables leads to word recognition, and word-object frequency associations) and the communication pressure hypothesis (Dale: good communication and learning new vocabulary are reinforced by satisfying the child’s needs faster). The evidence that language is learned is that language differs across cultures. The support for instrumental learning is, one, that children will never learn words if they are not exposed to them and, two, that Motherese/Parentese (baby talk) is slow, repetitive, and easy to understand, which is optimal for instrumental learning. If you believe in this behaviorist theory, then you believe that it is possible to teach someone a language at any time (as long as the person isn’t cognitively impaired) using the principles of classical and operant learning, but that they will not be able to learn language if they are not exposed to it.
Chomsky’s view (the Nativist view) is that language is genetic/innate, unique, and species specific. He thought of himself as a “Cartesian linguist” because he viewed humans as qualitatively different from other animals. Many Nativists believe that there is a Language Acquisition Device (LAD), an innate mechanism in the brain (Broca’s or Wernicke’s areas) that constrains the way a child learns language. The idea is that linguistic experience gets mapped in the brain onto innate cognitive categories or constraints. This has been proven FALSE by the examples of people who have had whole hemispheres of their brain removed but are still able to learn language during the "critical period". The "critical period" theory holds that language can only be learned during a small window of time (pre-puberty). This theory is substantiated by the fact that people have great difficulty learning language if they are not, for some reason, exposed to it before a certain age. Some evidence for the idea that language is universal is that, one, baby talk is a universal interaction between infants and adults; two, children learn a vast amount of words, a fact that favors a “one-trial learning” phenomenon (as opposed to regular learning); and three, there are linguistic universals, such as the fact that all infants show the same development of language (babbling, then one-word, then two-word telegraphic speech). The innate qualities of syntax learning are evidenced by the fact that children are able to extract grammatical laws and make sentences that they have not heard or learned before, despite their limited exposure to sometimes flawed adult linguistic performance and parents’ focus on content (not grammar) when correcting speech. If you believe in the Nativist view of language learning, then you believe that everyone has the ability to learn language, but that the ability is lost (or severely weakened) once they pass a certain age.

Who is right? Well, the easy answer is that they are both right. From Skinner’s side, general learning principles seem to be easily applicable to the learning of vocabulary, the use of language, and a special form of processing for syntax learning. Chomsky’s side can easily be supported, however, by pointing out that these general learning principles may just be part of our innate cognitive ability to learn/generalize rules.
18. Broca’s vs. Wernicke’s Aphasia
Broca’s and Wernicke’s aphasia are two conditions associated with damage to two particular areas of the brain involved in speech production and language comprehension; the conditions are in fact named after the specific areas of the brain that are affected.

Broca’s area is located in the frontal lobe of the brain (on the left side, near the front) and is involved with speech production. Broca’s area is what some experts say is “what makes us who we are”, as it is the area that allows humans to organize and put together words and sentences to convey their thoughts to others. Broca’s aphasia results from brain damage, typically lesions, to this area. People with Broca’s aphasia understand speech and written words but have problems producing fluent speech. An example of this is someone saying “Yes, sure, me go, er, uh, PT non o’cot, speech…two times…read…wr… ripe, er rike, er, writie… practice…get-ting better.”

Wernicke’s area is located in the temporal lobe (the area used for hearing and speech, near the center on the left side of the brain), and it is involved in language comprehension. Wernicke’s area is what allows humans to comprehend and understand the words and sentences that they hear from other people. Wernicke’s aphasia likewise involves brain damage to this area; it causes difficulty in understanding spoken and written speech and in producing meaningful sentences, leading people to create a form of “word salad.” Someone with this condition would say something like “You know, once in a while I get caught up, I mention the tarripoi, a month ago, quite a little, I’ve done a lot well.”
19. Expert vs. Novice Problem Solving
Novice refers to an introductory level of knowledge in a given field: a student, for example, or someone who is not directly involved in that field. Novices have only minimal knowledge of the field and use a “plug-and-chug” approach.

Experts are those who are thoroughly trained in one field and have devoted much of their time to that topic, hence the term “expert.” Experts define the problem and plan out solutions before manipulating equations, and they are more likely to check answers for reasonableness.

More information: Having a set of well-practiced subroutines is an important part of “expertise,” but experts have several other advantages over those of more modest skill. For one, experts simply know more in their domain of expertise, and some theorists have suggested that this is why someone usually needs a full decade to acquire expert status, whether the proficiency is in music, software design, chess, medicine, or any other domain. Ten years is simply the time needed to acquire a large enough knowledge base so that the expert has the necessary facts near at hand and the necessary subroutines well practiced and available. In addition, an expert’s knowledge is heavily cross-referenced, so that each bit of information has associations with many other bits. As a result, not only do experts know more, they also have faster, more effective access to what they know.

Differences: Experts have a different sort of knowledge than novices. Experts’ knowledge focuses on higher-order patterns. As a consequence, experts can, in effect, think in larger units, tackling problems in big steps rather than small ones. In other words, experts can proceed subroutine by subroutine, rather than needing to assemble all the procedural steps from scratch.

Studies: Consider studies of chess players (Chase & Simon, 1973a, b; de Groot, 1965). In one study, players at different levels of expertise were posed various chess problems and asked to select the best move. All of the masters chose continuations that would have won the game, while few of the other players did. Why? Many theorists believe that the reason lies in the way the players organized the problem. The chess masters structured the chess positions in terms of broad strategic concepts from which many of the appropriate moves follow naturally. In effect, the masters have a “chess vocabulary” in which these complex concepts are stored as single memory chunks, each with an associated set of subroutines for how one should respond to that pattern. Some investigators estimate, in fact, that the masters may have as many as 50,000 of these chunks in their memories, each representing a strategic pattern.

The only thing I would add, under the "Differences" section, is that with respect to problem solving, more experienced individuals will "auto-chunk" into their usual segments and quickly solve similar problems, whereas novices are unable to do this. This allows experts to solve problems faster and more effectively. I know you alluded to this, but I just wanted to clarify.
20. Inductive vs. Deductive Reasoning
Inductive reasoning is the ability to abstract a general rule or hypothesis from specific instances. This process relies on several heuristics; however, these heuristics can cause errors in a person’s reasoning. For example, an individual may seek only evidence that will confirm his or her hypothesis. This is called the Confirmation Bias. People can also be overly influenced by instances they can easily remember, rather than by the complete data. This is called the Availability Heuristic, and it is most prevalent with cases that had a strong emotional impact on an individual. People can also generalize too broadly; this is called the Representativeness Heuristic. It can occur when we need to classify things and something does not fit into a specific category: people will sometimes approximate it with the nearest class available. Lastly, an individual may judge the attractiveness of events differently depending upon whether those events are framed as gains or losses. This is called the Framing Effect.
Deductive reasoning starts with a general rule that we know to be true; from this rule we are able to draw a conclusion about something specific. It has been studied through the use of syllogisms and the selection task. In both cases performance is often quite poor, but there are circumstances in which reasoning appears quite logical. A syllogism consists of a major premise, a minor premise, and a conclusion. For example: all humans are mortal (the major premise); I am a human (the minor premise); therefore, I am mortal (the conclusion). Concrete examples like this aid performance. Two hypotheses address performance on the selection task. The evolutionary hypothesis asks what sorts of reasoning our ancient ancestors might have needed. The day-to-day hypothesis says we reason well when a problem evokes the sort of reasoning we need in our day-to-day activities.
21. Altruism vs. Deindividuation
Altruism is the act of providing a service or deed for another individual at some personal cost to yourself. In animals it is seen more as a biological necessity than as a display of kindness. However, in some animals (and in humans) the idea of reciprocity comes into play. Found in nearly all cultures, reciprocity is performing a deed or service in order to receive a similar deed or service in the near future. This is important for persuasion, as can be seen with bargaining and self-disclosure, in which one person discloses personal information in order to establish trust with another individual. Basically, altruism is the Golden Rule: “Do unto others as you wish done unto you.” That is, if you do right by others, then they will do right by you if and when you are in need.

Deindividuation is a weakened sense of personal identity in which self-awareness is diminished and one’s own goals are merged into the collective goals of a group. It tends to disinhibit impulsive actions that are normally under restraint. In collectivist cultures such as Japan, where identity is merged more with the group, asking Japanese college students what makes them happiest will typically get the reply of positive social engagement, such as being friendly and having connection to and respect for one another.

Another example of deindividuation is what happens in crowds when riots or mobs start: in such situations people lose their individuality and behave in ways they would not normally behave.

The Altruism that is commonly associated with the Golden Rule is also referred to more specifically as Reciprocal Altruism.
22. Schema vs. Heuristic
Stereotypes, in what they describe, are best classified as schemas relating to a role, person, group, or profession. When asked to define a stereotype, what do you use to describe it? If I were Wile E. Coyote, I would hold the stereotype that all ACME rockets will allow me to catch a road runner; therefore, ACME rockets are successful and worth the money. These statements describe the role of the rocket, which is why stereotypes are better classified as schemas.
Heuristics are the general rules that permit the maintenance of stereotypes. Specifically, the availability heuristic brings to mind examples that come up easily, which usually affirm the stereotype and support your opinion (confirmation bias). This makes sense, since items that disagree with the schema of the stereotype would cause cognitive dissonance. The representativeness heuristic also supports stereotypes, because it takes the few examples that confirm the schema of the stereotype and applies them with a broad stroke across all the individuals in the group.
23. Piaget’s Adaptation: Assimilation vs. Accommodation
- Sucking, swallowing, head/eye movements – 1st mental categories or schemas through which infants organize sensory world
- Child extends schemas, learns to integrate more complex ways of dealing w/ world
- Adapting to the world & new skills through assimilation and accommodation
Assimilation – children use mental schemas they already have to interpret environment – objects in environment assimilated into schema – schemas change as child gains experience in interacting w/ world
Ex] sucking a rattle is assimilated into the schema for sucking a nipple – however, sucking a rattle is not the same as sucking a nipple, so the sucking schema must adjust to the new object – this adjustment is accommodation
Accommodation – the difference made to one's mind or concepts by the process of assimilation.
*Note – assimilation/accommodation go together: you can't have one without the other.
Example: St. Louis people think Detroit people are dumb. A St. Louis person meets a Detroit person who is very smart; the St. Louis person assimilates by realizing that this particular person is not stupid, and accommodates by concluding that not all Detroit people are stupid.

The only thing I would add would be that accommodation could include either changing completely your belief or assimilating new information into that belief. So in your example, the St. Louis person could assimilate by saying that not ALL Detroit people are dumb or by saying that Detroit people aren't stupid at all.

It may be helpful to include that assimilation and accommodation are keys to cognitive development. Also assimilation is responsible for incorporating objects into a child's schema, whereas accommodation is necessary for generalizing and for classifying objects and behaviors.
24. Piaget vs. Erikson Development (generally)
Jean Piaget (1896 – 1980) spent the majority of his life studying how knowledge grows, using his three children as his primary study tool. His particular insight dealt with the power of simple maturation (growing up) on understanding; a child cannot comprehend certain concepts until he is psychologically old enough to do so. Through his observations, Piaget concluded that children's thinking does not develop entirely smoothly: instead, there are certain points at which it "takes off" and moves into completely new areas and capabilities. Piaget’s evidence showed that no matter how bright a child may be, he will not be capable of understanding certain concepts until he passes a developmental milestone and moves into a new stage of operation. Piaget’s four stages of cognitive development were:

I) Sensorimotor Stage – (birth to 2 years)
• Differentiates self from others
• Recognizes self as capable of action and acts intentionally
e.g. Shakes a rattle to make a noise
• Achieves object permanence
e.g. Realizing that a toy moved under a pillow still exists, even though it is out of sight

II) Preoperational Stage – (2 to 7 years)
• Development of language
• Egocentrism
Thinking is self-based, trouble moving to others’ viewpoints
• Classification by a single feature
e.g. Grouping all the red toy cars together

III) Concrete Operational Stage – (7 to 11 years)
• Logical thinking about things and events
• Conservation of number (age 6), mass (age 7), and weight (age 8)
e.g. Realizing that a taller, skinnier glass can have the same amount of liquid as a shorter, fatter glass
• Classifies objects according to several features

IV) Formal Operational Stage – (11 years and up)
• Logical thinking about abstract ideas
• Concern with hypothetical, future, and ideological problems

The accumulating evidence suggests that this scheme is too rigid – many children manage concrete operations much earlier than Piaget originally thought, and some people, especially in non-Western cultures, never attain formal operations (or at least are not called upon to use them).
Erik Erikson (1902 – 1994) worked at the same time as Piaget, but came up with a more socially based theory that used many of Freud’s original ideas. Erikson rejected Freud’s attempt to describe personality solely on the basis of sexuality, but agreed with Freud’s concepts of the id, ego, and superego. He also extended the timeline of maturation to include developmental stages beyond adolescence, and believed that personality continues to develop beyond age 5. Reviews of his theory have praised its broader range and its inclusion of cultural and social aspects of development, but have also noted that he did no statistical research to generate his theories, and that they are very hard to test for validation. Erikson’s eight stages are each based around a psychological conflict and include:
I) Trust v. Mistrust (birth to 1.5 years)
• Relationship: mother
• Virtues: hope, faith
• Maladaptation: sensory distortion, withdrawal

II) Autonomy v. Shame and Doubt (1.5 to 3 years)
• Relationship: parents
• Virtue: will
• Maladaptation: impulsivity, compulsion
III) Initiative v. Guilt (3 to 6 years)
• Relationship: family
• Virtues: purpose, courage
• Maladaptation: ruthlessness, inhibition
IV) Competence v. Inferiority (6 years to puberty)
• Relationships: neighborhood and school
• Virtue: competence
• Maladaptation: narrow virtuosity
V) Identity v. Role Confusion (adolescence)
• Relationships: peer groups, role models
• Virtues: fidelity, loyalty
• Maladaptation: fanaticism, repudiation
VI) Intimacy v. Isolation (early adulthood)
• Relationships: partners, friends
• Virtue: love
• Maladaptation: promiscuity, exclusivity
VII) Generativity v. Stagnation (middle age)
• Relationships: household, workmates
• Virtue: care
• Maladaptation: overextension, rejectivity
VIII) Integrity v. Despair (later years)
• Relationships: mankind or “my kind”
• Virtue: wisdom
• Maladaptation: presumption, despair
Piaget also believed that assimilation (taking material in from the environment) and accommodation (a change made to the mind or concepts due to the process of assimilation) were the basis of cognitive development, especially in the sensorimotor stage. Children use these to develop schemas and integrate these schemas with, or apply them to their environments.

Furthermore, Piaget developed his stages through observation, so they describe typical progress rather than a fixed timetable; children move through the stages at different rates. Erikson, by contrast, organized each of his stages around a psychosocial conflict. At each stage a conflict is faced, and the individual moves on after resolving it; the person develops according to how he or she resolves each issue.
25. Cupboard Theory vs. Harlow’s Results on Attachment
The cupboard theory states that infants attach to their mothers because the mother is a source of nourishment (whether through breast or bottle). As stated in our Chapter 13 notes, the cupboard theory says “if you feed it, it will cling to you.”

Harry Harlow used newborn rhesus monkeys to test attachment. He raised these infant monkeys in separate cages without their mothers. In each monkey’s cage, he placed two stationary figures representing a mother: one made of wire and the other covered in soft terry cloth. He equipped the wire mother with a nipple that yielded milk, while the terry cloth mother provided no nourishment. According to the cupboard theory, the rhesus monkeys should have clung to the wire mother alone, because she provided food. Harlow’s findings, however, largely discredited the cupboard theory of attachment: the baby monkeys spent more time on the terry cloth mother than on the wire mother, especially when frightened. Harlow concluded, in keeping with John Bowlby’s theory of infant attachment, that monkey infants love their mothers because of the comfort they provide, not the food.

In later studies conducted by Harlow, rhesus monkeys were reared in isolation without any outside contact. The monkeys reared in isolation withdrew, huddled, rocked, and bit themselves. As adults, they were inept at courtship and mating, showed no love for their offspring, and were violent. This experiment supported the importance of attachment in monkeys as well as humans: without any attachment figure (even a simple substitute such as the terry cloth mother), monkeys and humans grow up socially and emotionally inadequate.
26. Self-efficacy vs. conformity
Self-efficacy is the sense a person has of what he or she can plausibly accomplish. It combines two estimates: an individual’s estimate of his or her ability to cope with a situation, and an outcome expectancy, the individual’s estimate of the likelihood of certain consequences occurring. This combination of assessments of potential threat and coping resources determines how anxious an individual may become in a given situation. High self-efficacy for a task means that you are more likely to try it.

Conformity is correspondence in form or appearance with certain accepted standards.
In psychology, conformity is the degree to which members of a group will change their behavior, views, and attitudes to fit the views of the group. The group can influence members via unconscious processes or via overt social pressure on individuals. For example, in Asch’s famous conformity experiments, several confederates agreed on an obviously wrong answer about which lines matched in length, to see whether the real subject would go along with them.

People with high self-efficacy also attribute failure to insufficient effort or to deficient knowledge and skills, which are acquirable. Self-efficacious people approach situations with assurance that they can exercise control over them.
27. “g” vs. Multiple Intelligences
“g” Factor of Intelligence:

-The WAIS-R is one of the most widely used intelligence tests. It consists of subtests: information or general knowledge (I), comprehension (C), arithmetic (A), and vocabulary (V).
-The scores on each subtest are strongly correlated with the scores of the other subtests.
-This pattern suggests that there is a common factor present in all of the subtests since individuals who perform very well or very poorly are consistent in every subtest.
-To assess this pattern, Charles Spearman created a statistical technique called factor analysis, which extracts a measurement of the common factor that seems to be shared among all of the tasks.
-He proposed that this is general intelligence or (g), a common factor that influences all tasks. For example, individuals who have a lot of (g) have an advantage on most intellectual tasks.
-(g) contains two elements: fluid intelligence- mental speed and flexibility, and crystallized intelligence- list of skills and facts retained.
-Non-intelligence elements consist of motivation, attitudes toward testing, etc.
-NOT included in (g): motor, perceptual, musical, practical, relating to others, and creative abilities.
-Heavily influenced by Western culture.
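The correlation pattern that motivates (g) can be made concrete. Below is a rough stand-in for Spearman-style factor analysis in Python; the subtest scores are invented for illustration, and the first principal component of the correlation matrix is used as a simple proxy for the common factor:

```python
import numpy as np

# Invented scores for six test-takers on four WAIS-style subtests:
# Information, Comprehension, Arithmetic, Vocabulary.
scores = np.array([
    [12, 11, 13, 12],
    [ 8,  9,  7,  8],
    [15, 14, 14, 16],
    [10, 10,  9, 11],
    [ 6,  7,  6,  7],
    [13, 12, 14, 13],
], dtype=float)

# The subtests correlate strongly with one another -- the pattern that
# led Spearman to posit a single common factor.
corr = np.corrcoef(scores, rowvar=False)

# A crude stand-in for factor analysis: the first principal component
# of the correlation matrix. When one common factor ("g") drives
# performance, it explains most of the variance and loads on every subtest.
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
g_loadings = eigvecs[:, -1]               # loadings on the largest factor
variance_explained = eigvals[-1] / eigvals.sum()
```

With subtests this strongly correlated, the single largest factor accounts for the bulk of the variance, mirroring the claim that individuals who perform very well or very poorly are consistent across every subtest.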

Multiple Intelligences:

-At least 7 kinds of intelligence: Verbal, Logical-Mathematical, Musical, Spatial, Body Movement, To understand oneself, To understand others.
-Developed by Howard Gardner, based on the consideration of individuals with special talents or special deficits.
-Evidence comes from studies of patients with brain lesions that impair some abilities while sparing others.
-Also supported by savant syndrome, in which an individual has a single extraordinary talent even though he or she is otherwise disabled.
-Advantages: does not restrict intelligence to a single measure; explains brain-lesion findings and cases of savant syndrome.
-Disadvantages: not knowing how many definitive types of intelligences there are, not having a standard measure to assess the different kinds of intelligences.
28. Fluid vs. Crystallized Intelligence
Fluid intelligence is defined as the efficiency and speed of intellectual functioning, often focusing on topics that are entirely new to an individual. Crystallized intelligence is defined as a person’s accumulated knowledge over time, including vocabulary, facts, and strategies.

In aging, these two move in opposite directions. As people get older, they accumulate more and more crystallized intelligence from life experience. At the same time, they lose fluid intelligence, becoming less able to think quickly and efficiently.

Perhaps fluid intelligence should be expanded to note that it requires some strategy choices to be made and decisions about how to proceed. In that light, fluid and crystallized intelligence are more 'related,' in that both require strategy to different degrees.

As aging is important, the book notes that the decline begins in one's 30s but proceeds slowly and often isn't noticeable until one's 60s or 70s.

Three factors are attributed to declines in intelligence: biology (the decline varies among individuals), the amount of education and stimulation over one's life, and an age-related decline in working memory.

Medical conditions are also important in decline, given that certain conditions lead to a decrease in oxygen to the brain, etc.
29. Nature vs. Nurture
Nature – The idea that characteristics of an individual such as IQ are genetic, which means intelligence is either in your genes or it isn’t.

Nurture - Things such as IQ don’t necessarily come from your genes; you are a product of your environment. If you live in an impoverished environment then you are much more likely to do worse on an IQ test than someone who has been given many advantages in life.

Immigration laws in the United States limited the number of people from each country that could come into the United States based on IQ scores from that country. However, the people from those countries that took the IQ tests could not necessarily speak English and the tests were administered in English. The United States had assumed that because of their nature, other people from their country were of the same intelligence as the people that they gave the test to.

Studies have shown that the average IQ score for someone from an impoverished environment is 10-15 points below the overall average, and that people who are well off have IQ scores 10-15 points above the average, which supports the idea that nurture, not nature, is responsible for these differences.

Often characteristics of a person are a product of a combination of their genetics and their environment.

Nature- the idea that characteristics of an individual such as IQ are genetic, which means intelligence is either in your genes or it isn’t and that that intelligence can be passed on.

Nurture – things such as IQ don’t necessarily come from your genes; you are a product of your environment. If you live in an impoverished environment, then you are more likely to have a lower IQ score than someone who has been well off, because of a lack of proper nutrition or adequate living conditions. Such an environment can keep your brain from growing to its full potential.

In the past, immigration laws in the US limited the number of people from each country that could enter the US based on their IQ scores. It was wrong to judge people based on low IQ scores because people from those countries that took the IQ tests could not necessarily speak English and the tests were administered in English and were biased towards Western beliefs and social norms. The US assumed because of the idea of nature that if a few people from a foreign country did poorly on the IQ test, then the whole country possessed that lower intelligence.

Studies have shown that the average IQ score for someone from an impoverished environment is 10-15 points below the average, and that people who live in very well-off areas have IQ scores 10-15 points above the average. This is because your genes provide a potential range of intelligence: depending on whether you get proper nutrition and live in a good environment free from stress and other harmful factors, you will tend to end up in the lower or higher end of that range. This supports the idea that nurture, not nature, is responsible for these differences.

30. Id, Ego, and Superego Relationship
Id - The id is the first personality subsystem to develop. It contains two biological drives: sex and aggression. The id's main goal is to pursue pleasure and satisfy these drives by any means necessary; it operates on the pleasure principle.

Ego - The ego grows out of the id; as the ego grows, it tries to negotiate between the id's wants and desires and the superego's prohibitions.

Superego - The superego develops from the ego as the child matures and learns to satisfy wishes while following rules and regulations. The superego represents the internalized rules and regulations of the parents/caregivers and of society. This is where pride and guilt come from: if the ego lives up to the superego's standards, pride is felt; if the superego's rules are broken, guilt or shame is felt.

The Relationship:
The ego is like the person in a cartoon with an angel on one shoulder and a devil on the other. The devil is the id, full of desires and wanting to satisfy them at all costs. The superego is the angel, warning the person to be safe and morally sound in each decision. The ego, being the person, has to weigh the id's desires against the superego's warnings to find the best possible solution, one that settles the disagreement between the two and relieves anxiety.

The id is the source of all mental energy, i.e., creativity and mental drive. It also operates completely in the unconscious.

The ego is both conscious and unconscious.

The superego is mostly unconscious.
31. Behaviorism vs. Social Psychology
Behaviorism is a psychological approach that focuses on an individual’s behavior. Personal motivation (especially Maslow’s hierarchy of needs) helps define the reasoning behind a person’s actions. Classical and operant conditioning are also essential in developing an individual’s learning. While behaviorism does try to uncover a person’s intentions, it does not consider the social factors that social psychology involves.

Kurt Lewin established the field of social psychology. He determined that social situations significantly control human behavior. While social psychology looks at how a person’s environment influences them, it also looks at how that person perceives his or her environment. Our self-concept influences how we perceive social situations, and social situations also influence how we see ourselves. Aspects of social psychology include schemas (person, role, and event), the bystander effect, persuasion, and aggression. Social psychology can explain many actions that behaviorism does not account for.

Overall this is a good definition of Behaviorism and Social Psychology; just make sure to know that Social Psychology tries to understand human behavior in a broader social context; also, Maslow's hierarchy for motivation for behavior, starting from most important to least important in influencing behavior, is Physiological (food, water, sex, sleep), Safety (Protection from harm), Love and Belonging (Affiliation and acceptance by others), Esteem (Achievement, competency, gaining approval and recognition), and Self-actualization (Fulfillment of one's own unique potential).
32. Psychoanalytical vs. Trait Personality Theory
Trait personality theory focuses on measuring traits and is based on the idea of developing a standard set of qualities used to define personalities and describe individuals. The main five traits used in these classifications are: openness, conscientiousness, extraversion, agreeableness, and neuroticism. One of the unique aspects of trait theory is that it is applicable cross-culturally and employs characteristics general enough to define humans overall, regardless of culture and geographical differences.

Psychoanalytical personality theory focuses on the work of Freud and his classifications of the id (pleasure principle), ego (reality principle), superego (limits the scope of the id by applying the moral values and standards of society to the satisfaction of wishes), and defenses against anxiety. This theory focuses more on the unconscious effects of the id and the impact of social acceptability in developing personality. It also addresses psychosexual stages of development, which can result in fixation and thus also affect personality. The approaches differ in that trait theory claims personality grows out of traits rooted in an individual’s temperament, while psychoanalytical theory bases personality differences on fixation and on the mind’s unconscious struggle between achieving pure pleasure and staying within the bounds of social norms. These theories also rely on different testing methods. Trait theory relies on objective testing to determine which traits are dominant in an individual, while psychoanalytical theory uses mainly projective tests to uncover unconscious thoughts and feelings.
33. Inclusion vs. Exclusion Criteria in DSM-IV
The DSM-IV entry for a disorder lists both inclusion criteria, which specify which symptoms must be present, and for how long, in order to qualify for a diagnosis of a given mental disorder, and exclusion criteria, which list what must not be present if one is to make that diagnosis. Taken together, these two sets of criteria form a relatively accurate encapsulation of a disorder. For example, the inclusion criteria for schizophrenia include hallucinations that occur for one month or more and that affect a person’s thoughts or actions. The exclusion criteria specify that these hallucinations cannot be the result of substance abuse or of a Pervasive Developmental Disorder. Used as guidelines, these criteria help doctors diagnose disorders accurately.
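The interplay of inclusion and exclusion criteria is essentially a checklist. As a toy sketch (the 30-day threshold and the flags are simplified illustrations, not the actual DSM-IV wording), the logic looks like this:

```python
def qualifies_for_diagnosis(symptom_days: int,
                            has_hallucinations: bool,
                            substance_induced: bool,
                            pervasive_dev_disorder: bool) -> bool:
    """Toy model of DSM-style criteria for the schizophrenia example."""
    # Inclusion criteria: the symptom must be present, and for long
    # enough (here, hallucinations for one month or more).
    meets_inclusion = has_hallucinations and symptom_days >= 30
    # Exclusion criteria: the symptom must NOT be better explained by
    # substance abuse or a Pervasive Developmental Disorder.
    meets_exclusion = not (substance_induced or pervasive_dev_disorder)
    return meets_inclusion and meets_exclusion

# Hallucinations for six weeks, no alternative explanation -> diagnosis holds.
print(qualifies_for_diagnosis(42, True, False, False))   # True
# Same symptoms, but substance-induced -> excluded.
print(qualifies_for_diagnosis(42, True, True, False))    # False
```

The point of the sketch is that a diagnosis requires both halves: satisfying every inclusion criterion and failing every exclusion criterion.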

The only detail I would add is that a condition meets the definition of a mental disorder only if it causes distress or disability, or significantly increases the risk of death, pain, or loss of freedom. As for exclusion criteria, if a behavior is culturally accepted, it is not a mental disorder even if it is deviant.
34. Frontal Lobe vs. Limbic Brain Region Influence
The frontal lobe is a center for higher-order cognitive processes, also known as executive functions. It allows you to plan, figure out cause and effect, predict the repercussions of actions, and tell right from wrong. The system develops until late in the 20s and influences a person's behavior through top-down modulation (e.g., attention) as well as through its influence on personality. In comparisons with the animal kingdom, there is great debate over the extent of executive function animals are capable of and how much frontal and prefrontal cortex they have.
On the other hand, the limbic brain regions are evolutionarily very old and can be found throughout the animal kingdom; in evolutionary terms the limbic system is the old mammalian (paleomammalian) part of the brain. It contains many functional areas that could be considered primal, including emotion (e.g., the amygdala), the sense of smell (the only sense that doesn't pass through the thalamus), location and spatial memory (the hippocampus), regulation of hormones (the hypothalamus), and reward (dopamine pathways). These areas interact, as can be seen in how memories are enhanced or remembered longer when some emotion or smell is tied to the event. Many drugs of addiction work through the limbic system's reward and pleasure centers to enhance a person's feeling of euphoria.
Both regions interact in determining a person's state of arousal or attention to the world. During the fight-or-flight response, both areas work to make a person or animal aware of the surroundings and to compare what is observed against items in memory that should or should not be feared.
35. Single pathology view vs. Multicausal model of mental illness
With mental illness, the single pathology view attributes the cause to ONE of the following:
Biomedical Model: the mental disorder is the result of biological malfunctions such as gene defects, early brain damage, or neurotransmitter imbalances. Fixing the damage or correcting the chemical imbalance is the tx, usually involving the correct drug from a medical practitioner.
Psychodynamic Model: mental disorders are manifestations of psychological conflicts that generally originate in childhood experiences and impair full functioning in adulthood by distorting how one views himself and relates to others. The conflicts can be intra- or interpersonal. Tx consists of attaining insight into the underlying conflicts and requires someone like a psychiatrist or psychologist specifically trained in psychodynamic therapy.
Learning Model: mental disorders result from maladaptive learning or coping strategies and are best corrected with remedial learning. Tx consists of identifying the situations that elicit or reinforce the disorder's symptoms, and then learning new responses to those situations. It is best conducted by psychologists skilled in applying the laws of classical and instrumental conditioning.
As a way to explain a mental disorder, each of these models is too limited on its own, because a disorder can have multiple causes. The multicausal model acknowledges that a disorder is the result of an accumulation of factors rather than one underlying problem, so it involves the interaction of all three models and thus a combination of tx. The multicausal view is more effective in treating a disorder, since it accommodates several factors.
You might explicitly define "single pathology view" as "the viewpoint that there is only one single cause of mental illness", then list the 3 types of single pathology viewpoints. In this definition, you should also mention agonists and antagonists as the two types of drugs that correct mental illness according to this viewpoint.
In the definitions for your 3 single pathology views, you refer to "tx" when you are explaining treatment-- sorry but does "tx" mean treatment? I just wasn't sure, so you may want to clarify.
In your definition for the psychodynamic model, you might also mention that this viewpoint relates to Erikson's 8 Stages of man and Freud's stages because these create cognitive dissonance, which is what leads to the mental illness.
36. Reliability vs. Validity
Reliability is the degree of consistency with which a test measures a trait or attribute. Assuming that the trait or attribute remains constant, a perfectly reliable test of that measure will produce the same score each time it is given.

Validity is the extent to which a test measures what it is supposed to measure.

Example: Consider a bathroom scale. Imagine you step onto the scale and it shows your weight to be 120 pounds. You step off the scale, surprised by the good news. But then you step on again and it reports a different weight, and if you do it again and it reports yet another weight different from the previous two, then clearly the scale is not reliable. This example suggests that one way to assess reliability is to administer the test more than once: the correlation between the scores is a good index of the test’s reliability.

Additional information: More crucial than reliability is a test’s validity. Imagine a psychology professor who assigns final grades in a course based largely on the quality of students’ handwriting. This assessment procedure might be reliable (assuming the professor is consistent in his assessment of handwriting), but it is surely not valid, since handwriting has nothing to do with mastery of a course’s content. This comes from chapter 14, the chapter on intelligence.
37. Mental illness Comorbidity vs. Single Pathology
When diseases are comorbid, it means that they are occurring at the same time. When a patient is diagnosed with a disease that is separate from the primary illness, the patient is said to have comorbid illnesses. Comorbidity is the presence or effect of one or more additional disorders alongside the primary disorder.

A single pathology is when one disease creates symptoms that may appear to be another disease. Ultimately, though, there is one pathology affecting the patient, and this one pathology is responsible for producing the other apparent illnesses or disorders.

When these mental illnesses are comorbid it is very important to determine whether one caused the other (which would mean it is from a single pathology) or whether the diseases are two separate entities that are affecting the patient. If the illnesses stem from a single pathology, then the treatment should focus on treating this single pathology because fixing this problem would fix the other problems. If the illnesses are not related then the treatment should be designed to alleviate the symptoms of both diseases.

An example of two comorbid diseases is ADHD and dyslexia in young children. The child may have both diseases and need treatment for both. Or perhaps the child was diagnosed with ADHD because his dyslexia caused him to become fidgety in class and not pay attention: unable to read along with the class, he lost interest. It could also be the other way around; the child’s ADHD prevented him from paying attention while reading and learning the fundamentals, creating symptoms that appeared to be dyslexia but were really just symptoms of the child’s ADHD. This situation illustrates the importance of determining whether comorbid diseases have a cause-and-effect relationship or whether their presence is unrelated. By determining the relationship between the comorbid diseases, the treatment for the patient is also determined.

It is also worth noting that comorbidity is very common with medical conditions; for example, cancer patients have a high tendency to develop depression.

Comorbidity with Major Depression is very common on DSM Axis I. Comorbidity rates are also extremely high, as high as 60%, among the Axis II Personality Disorders.
38. Subjective vs. Objective Testing
In general, subjective testing is based on an individual’s opinions or perceptions rather than on facts or evidence. Objective testing, on the other hand, is supposed to be free of any personal bias or personal opinion.

In Psychology, there are two main types of tests that represent objective and subjective testing. Structured Personality tests are objective tests that ask specific questions requiring specific answers. These objective personality tests typically assess personality traits through simple Yes or No questions that contain little bias or ambiguity. The subjective tests in psychology are referred to as Projective Personality tests. Projective tests examine your conscious and unconscious feelings, needs, and motives through tasks onto which the examinee projects those feelings.

Projective, or subjective, tests are very useful because they are difficult to fake or bias, since there is no socially correct answer. These tests normally present the examinee with a relatively unstructured task, such as making up a story to fit a picture. This allows projective tests to bypass the direct questioning found in objective tests and gain access to deeper layers of personality in the unconscious. The disadvantage of projective tests, however, is that they are difficult to score, since scoring is largely based on interpretation. As a result, they have relatively low reliability and validity, and they need to be combined with other tests to provide accurate assessments. Some of the most common subjective tests used today are the Rorschach inkblot technique and the Thematic Apperception Test (TAT).

Objective, or structured, personality tests are generally very reliable because they are easily administered and easily scored. Their disadvantages, however, lie in the test’s structure, which limits the amount of unconscious evaluation. Furthermore, the questions on objective tests are straightforward and biased in that examinees will lean toward the socially desirable answer. The validity of objective tests also varies based on these biases and on the context of the particular test. The two most common objective tests used today are the Minnesota Multiphasic Personality Inventory (MMPI) and the California Psychological Inventory (CPI).

One small fact you can add is that projective tests are based on Freudian psychology.
39. Axis I vs. Axis II Disorders
Axis 1 and Axis 2 are included in the DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, fourth edition) classification system. The DSM-IV contains a description/profile of each mental disorder, including its symptoms, within each of its five axes.

Axis 1 contains Clinical Disorders and other conditions that may be a focus of clinical attention. Clinical Disorders are broken down further into 14 categories to help classify and identify the disorder: adjustment disorders, anxiety disorders, cognitive disorders, dissociative disorders, eating disorders, factitious disorders, impulse control disorders, mental disorders due to a general medical condition, mood disorders, schizophrenia and other psychotic disorders, sexual and gender identity disorders, sleep disorders, somatoform disorders, and substance-related disorders. Specific examples of Axis 1 disorders include depression, OCD, and bipolar disorder.

Axis 2 contains personality disorders and mental retardation. This means that a person's personality traits show behaviors that cause discomfort and distress, impair daily activities, show up early, and are present for a long time. Often, the onset of the disorder can be traced all the way back to adolescence or early adulthood. Axis 2 disorders involve an enduring pattern of inner experience and behavior that deviates markedly from the expectations of the individual’s culture, shown through two or more of the following areas: cognition, affectivity, interpersonal functioning, and impulse control. Specific disorders included in Axis 2 are OCPD, paranoid personality disorder, and mental retardation.
40. Psychotropic Medication vs. Psychotherapy
Psychotropic medication – Medications that seem to control, or at least moderate, the manifestations of mental disorder.

Psychotherapy – As used here, a collective term for all forms of treatment that use psychological rather than somatic (physical) methods.

There is no single right or wrong way to treat a mental illness, because each individual patient will likely have their own unique manifestation of the problem. Elements such as the patient’s personality, the specific mental illness, the patient’s maturity, and the insurance and care available to the patient affect what route one might take in the treatment of the illness.

When used in tandem, psychotropic medication and psychotherapy often complement each other. Proper medication will stabilize the patient’s internal environment, and therapy will aid the patient in identifying his problems and learning positive coping mechanisms. While many therapists opt for a combination of both medication and psychotherapy for their patients, some prefer one over the other. Therapists that follow a Biomedical Model would prescribe psychotropic medication for their patients, because those given to this model think mental disorders are due to some kind of biological issue (e.g. a serotonin imbalance). Therapists that follow only the Psychodynamic (mental disorder caused by a major life event or transition [a situation of cognitive dissonance]) or Learning (mental disorder caused by poor coping strategies) Models use varying methods of psychotherapy to treat their patients.
41. Drug Efficacy vs. Drug Potency
Drug Efficacy is, basically, how well the drug works; if the drug clears up many or all of a patient’s symptoms, then the drug has high efficacy. However, if most of the symptoms persist, then the efficacy of the drug is most likely low. In other words, the efficacy of a drug is how well it is able to provide relief of symptoms.

Drug Potency is the amount of drug needed for a pre-determined level of symptom relief. For example, if only a small dosage of the drug is needed to achieve the given level of symptom relief, then the drug has high potency. If the opposite is true, that is, a large dosage is needed to achieve the given level of symptom relief, then the drug has low potency. If all else is equal, then increasing the drug’s receptor affinity should increase the drug’s potency. However, this cannot be assumed, due to extraneous factors such as drug metabolism, crossing the blood-brain barrier, half-life, and other such aspects.

Efficacy and potency are not necessarily correlated. The efficacy of the drug does not always affect the potency.
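The efficacy/potency distinction can be made concrete with the standard Hill (dose-response) equation, effect = Emax · dose / (EC50 + dose), where Emax reflects efficacy (the maximum relief achievable) and EC50 reflects potency (the dose giving half-maximal effect). The drug parameters below are purely illustrative, not real drugs:

```python
# Dose-response sketch with made-up numbers. Drug A is more potent
# (low EC50) but less efficacious (low Emax); Drug B is the reverse.
def effect(dose, emax, ec50):
    """Hill equation: fraction of maximal symptom relief at a given dose."""
    return emax * dose / (ec50 + dose)

drug_a = dict(emax=60, ec50=2)    # hypothetical: potent, modest ceiling
drug_b = dict(emax=95, ec50=20)   # hypothetical: less potent, higher ceiling

for dose in (1, 10, 100):
    print(dose, round(effect(dose, **drug_a), 1), round(effect(dose, **drug_b), 1))
```

At low doses the potent Drug A wins, yet at high doses the efficacious Drug B provides more total relief, which is exactly why efficacy and potency need not track each other.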
42. Placebo/Psychosomatic Effect vs. Treatment Effect
Placebo and treatment effects, relevant to this course, are associated with pharmacotherapy and psychotherapy. In pharmacotherapy a placebo effect is induced when a patient is given a substance believed to be beneficial in resolving their particular condition. That is, the patient believes that the substance, whether in pill, liquid, suppository, or other form, will improve or resolve their particular affliction. In order to obtain a true placebo effect, neither the patient nor the treating staff can be aware that the substance is in fact a placebo rather than an actual therapeutic compound. Only the individual or individuals managing the study, with no involvement in the actual treatment, can be aware of which patients are receiving placebos rather than the drug of interest. As such, the control (placebo) group must take the same quantity of pills, have the same understanding of the treatment, and receive the placebo at the same time as the experimental group receiving the drug of interest. The treating staff must also be the same for both groups in order to control for behavioral differences among staff that could lead to differing perceptions among participants. Quantitatively, the placebo effect has been associated with 70% of placebo recipients reporting beneficial effects from a substance with no pharmacological value. These results, however, are based on small studies rather than ‘meta-analysis’, which examines large numbers of such studies. Meta-analysis studies of the placebo effect show a much smaller positive effect reported by individuals receiving placebos than those reported in small studies.
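Why a meta-analysis can shrink the apparent placebo effect can be sketched with a simple inverse-variance pooling, the standard fixed-effect way of combining study results: each study's effect estimate is weighted by 1/variance, so large, precise studies dominate. All numbers here are hypothetical, chosen only to show the mechanism:

```python
# Minimal fixed-effect meta-analysis sketch (hypothetical study data).
# One small, noisy study reports a big placebo effect; two large, precise
# studies report small effects. The pooled estimate sides with precision.
def pooled_effect(studies):
    """studies: list of (effect_estimate, variance) pairs."""
    weights = [1.0 / var for _, var in studies]
    total = sum(w * eff for (eff, _), w in zip(studies, weights))
    return total / sum(weights)

studies = [(0.70, 0.04),    # small study, large variance, big effect
           (0.15, 0.002),   # large study, small variance, small effect
           (0.12, 0.001)]   # large study, small variance, small effect
print(round(pooled_effect(studies), 2))
```

The headline 70%-style finding from the small study contributes little weight, so the pooled estimate lands near the small effects of the big studies, mirroring the pattern described above.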

In pharmacotherapy the treatment effect is comprised of two contributing factors. Firstly, a patient receiving a medication may believe that the medication will lead to positive health effects and thus will be led to report improvement to researchers involved in a study. Secondly, there is a ‘genuine’ medicinal effect, if it exists, that is attributable solely to the pharmacological benefits the medication provides to its user. In general, the medicinal effect is greater than the effect attributable to an individual’s belief that a medication will improve his condition. The latter factor of the treatment effect will not exist if a medication lacks therapeutic value.

In psychotherapy the placebo effect is associated with a patient’s expectation that in undertaking treatment positive results will occur. This is true throughout varying forms of psychotherapy in that a patient should be expected to believe that a recommended therapy will produce beneficial results.

In terms of treatment with psychotherapy, the treatment effect results from coping strategies designed to improve a patient’s condition. As in the treatment effect associated with pharmacotherapy, there are two contributing factors: a patient may believe that psychotherapy will be effective, and a given psychotherapy may be of genuine therapeutic value. If neither factor exists there can be no treatment effect. Different forms of psychotherapy have been shown, through meta-analysis, to be equally effective. A common argument justifying these equally successful results is that a patient will be assigned by a professional to the form of psychotherapy that best addresses their condition. This is termed ‘prescriptionism’ and is used to counter arguments that psychotherapy’s treatment effects are the result of an individual’s expectation of improvement rather than a therapeutic effect of the psychotherapy itself.
43. Dodo bird verdict vs. Applying the Appropriate Therapy
The Dodo bird verdict is the concept in psychotherapy that all of the major forms of psychotherapy are equally effective. It summarizes the comparison of the effectiveness of different forms of psychotherapy. It was given the name “Dodo Bird Verdict” after the Dodo in Alice in Wonderland, who determined, “Everybody has won, and all must have prizes.”

How can all psychotherapies share basically the same effectiveness?
1. The placebo effect says that patients get better because of their expectations to get better. Oftentimes patients swallow sugar pills while mistaking them for effective medicine. In the placebo effect, it is the patient’s belief that creates the healing.
2. Along with placebos, common factors contribute equally to the patient’s recovery. Some common factors include therapeutic efforts to calm emotional conditions, allow for self learning, and provide strong one-on-one relationships. These common factors are believed to produce similar results as placebos.

Many psychotherapists believe that applying the appropriate therapy is the most important issue in the effectiveness of treatment. In other words, although individual therapies might be equally effective on a general level, one particular therapy is oftentimes better in certain instances. For example, phobias and other related anxiety conditions are believed to be best treated by any of the behavior therapies whose goals are to eliminate fear. Panic disorder is another example of a condition that is best treated by one particular therapy. It is believed to respond well to cognitive therapy or reflective listening, which is frequently followed by antidepressant medication to block panic attacks. The belief that appropriate therapies exist for individual cases is often called prescriptionism. Prescriptionism implies that specific therapies should be recommended for patients with particular mental disorders, just as specific medications are prescribed for individual physical illnesses.