38 Cards in this Set

  • Front
  • Back
  • 3rd side (hint)

Low Statistical Power

An insufficiently powered experiment may incorrectly conclude that the relationship between treatment and outcome is not significant.
"From a 10 person survey I conclude that males are better than women with a 75% confidence interval"
Violated Assumptions of Statistical Tests
Violations of statistical test assumptions can lead to either overestimating or underestimating the size and significance of an effect.
"For instance, many statistical analyses assume that the data are distributed normally -- that the population from which they are drawn would be distributed according to a "normal" or "bell-shaped" curve. If that assumption is not true for your data and you use that statistical test, you are likely to get an incorrect estimate of the true relationship" http://www.socialresearchmethods.net/kb/concthre.php
Fishing and Error Rate Problem
Repeated tests for significant relationships, if uncorrected for the number of tests, can artifactually inflate statistical significance.
Running different models with different assumptions until you get the conclusion you want, or repeating the experiment with different samples until you find one that shows the relationship you are looking for.
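A minimal simulation sketch (illustrative numbers, not from the card) of how 20 uncorrected tests on pure noise "find" something most of the time:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, n_tests = 0.05, 20

false_alarms = 0
for _ in range(2_000):
    # every test compares two samples from the SAME population: no real effect
    pvals = [stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
             for _ in range(n_tests)]
    if min(pvals) < alpha:               # "fishing": keep the best-looking test
        false_alarms += 1

print(false_alarms / 2_000)              # roughly 0.64, not 0.05
print(1 - (1 - alpha) ** n_tests)        # theory: 1 - 0.95**20 ≈ 0.64
# A simple fix is the Bonferroni correction: require p < alpha / n_tests.
```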

Unreliability of Measures

Measurement error weakens the relationship between two variables and strengthens or weakens the relationships among three or more variables.
Measuring happiness by the number of smiles you see is too weak: the operationalization of the construct is too crude. A binary coding ("smile = happy, no smile = not happy") has no scale for degrees of happiness, so everything from gushing tears to a frown to a scowl to a genuinely happy person quietly waiting for the bus is recorded as "unhappy." The measure thus inflates the apparent link between "no smile" and "not happy," even though that relation is false.
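A standard textbook result (Spearman's attenuation formula, not stated on the card) makes the weakening precise: if $r_{xx}$ and $r_{yy}$ are the reliabilities of the two measures, then

$$ r_{\text{observed}} = r_{\text{true}} \sqrt{r_{xx}\, r_{yy}} $$

so with reliabilities of 0.5 each, a true correlation of 0.6 shows up as only 0.6 × 0.5 = 0.3.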
Restriction of Range
Reduced range on a variable usually weakens the relationship between it and another variable.
I enrolled only smart students in a program meant to raise IQ and saw only limited gains; I might have seen stronger results had I also included average and below-average students. This is a ceiling effect, and it can also arise from a restriction in the measurement scale for the group you are studying.
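A minimal sketch (the IQ numbers and the 115 cutoff are illustrative assumptions) of how truncating the range shrinks a correlation:

```python
import numpy as np

rng = np.random.default_rng(3)
iq = rng.normal(100, 15, 5_000)                           # full IQ distribution
gain = 0.6 * (iq - 100) / 15 + rng.normal(0, 0.8, 5_000)  # outcome tied to IQ

full_r = np.corrcoef(iq, gain)[0, 1]
smart = iq > 115                                     # keep only smart students
restricted_r = np.corrcoef(iq[smart], gain[smart])[0, 1]

print(f"full range:       r = {full_r:.2f}")         # about 0.6
print(f"restricted range: r = {restricted_r:.2f}")   # much weaker
```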
Unreliability of Treatment Implementation
If a treatment that is intended to be implemented in a standardized manner is implemented only partially for some respondents, effects may be underestimated compared with full implementation.
As part of my experiment, students were supposed to receive cookies, but I ran out, so not all students received them. Another example: two classes receive the treatment and should be taught the same material, but one teacher keeps going off on tangents, so the students are not getting the same treatment.

Extraneous Variance in the Experimental Setting

Some features of an experimental setting may inflate error, making detection of an effect more difficult.
A study of lighting changes in a factory and their impact on workers found that what made the impact was not the lighting but the study interviews themselves (the Hawthorne studies). Another example: everyone who takes the GRE should have the same experience so that no one has an advantage, but there was construction and yelling outside my window, so the setting created a different experience from other test-takers'. Nothing could be done about it, but it affected my score.
Heterogeneity of Units (respondents)
Increased variability on the outcome variable within conditions increases error variance, making detection of a relationship more difficult.
"A study of electronic monitoring on prisoners showed no effect. However, when you looked just at high-risk prisoners there was a large effect seen.
Another example: you administer a treatment at two schools- a high school and an elementary school and compare the results. These two populations are so different that it's hard to see the results clearly. "
The units you are comparing are different
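A minimal sketch (group sizes and the 0.8-SD subgroup effect are illustrative assumptions) of the prisoner example's logic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 20
low_ctrl,  low_trt  = rng.normal(0, 1, n), rng.normal(0, 1, n)    # no effect
high_ctrl, high_trt = rng.normal(0, 1, n), rng.normal(0.8, 1, n)  # real effect

pooled = stats.ttest_ind(np.concatenate([low_ctrl, high_ctrl]),
                         np.concatenate([low_trt, high_trt]))
subgroup = stats.ttest_ind(high_ctrl, high_trt)

print(f"pooled p    = {pooled.pvalue:.3f}")   # effect diluted by low-risk units
print(f"high-risk p = {subgroup.pvalue:.3f}") # effect much clearer on its own
```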

Inaccurate Effect Size Estimation

Some statistics systematically overestimate or underestimate the size of an effect.
Standard deviation is one such statistic: the population formula divides by n, which assumes you are looking at a whole population rather than a sample, so applied to a sample it systematically underestimates the true spread. That is why we use the sample standard deviation, which divides by n - 1 to correct the bias. It's in the math.
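A minimal sketch verifying the bias (the sample size of 5 is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(5)
biased, corrected = [], []
for _ in range(20_000):
    x = rng.normal(0, 1, 5)              # tiny sample; true variance is 1.0
    biased.append(np.var(x))             # divides by n
    corrected.append(np.var(x, ddof=1))  # divides by n - 1

print(np.mean(biased))      # about 0.8: systematically too small
print(np.mean(corrected))   # about 1.0: unbiased for the variance
```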
Ambiguous Temporal Precedence
Lack of clarity about which variable occurred first may yield confusion about which variable is the cause and which is the effect.
If cancer were observed to occur before smoking, you would have failed to meet the requirement of proper time order (smoking must occur before the onset of cancer if you plan to argue that smoking causes cancer). http://www.southalabama.edu/coe/bset/johnson/studyq/sq8.htm
Selection
Systematic differences over conditions in respondent characteristics that could also cause the observed effect.
e.g., when a group of students whose parents volunteered them gets one intervention, and a group of students whose parents did not volunteer them gets another
History
Events occurring concurrently with treatment could cause the observed effect.
You're implementing a program that treats PTSD in participants. However, the events of 9/11 occurred during the study and participants' PTSD worsened.
Maturation
Naturally occurring changes over time could be confused with a treatment effect.
You are working with students who are naturally starting to understand things about the world. You can't say their learning is a result of your program when it's a natural part of their maturation: they are growing up!
Regression
When units are selected for their extreme scores, they will often have less extreme scores on other variables, an occurrence that can be confused with a treatment effect.
You choose the students with the lowest GPAs for the program. Of course they will improve! They are at the bottom, so they can only go up from there.
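A minimal sketch (illustrative noise levels) showing the rebound with no treatment at all:

```python
import numpy as np

rng = np.random.default_rng(6)
ability = rng.normal(0, 1, 10_000)            # stable true ability
test1 = ability + rng.normal(0, 1, 10_000)    # noisy first measurement
test2 = ability + rng.normal(0, 1, 10_000)    # noisy second measurement

worst = test1 < np.percentile(test1, 10)      # select the lowest scorers
print(test1[worst].mean())   # about -2.5: chosen for their extreme scores
print(test2[worst].mean())   # about -1.2: they "improve" with no treatment
```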
Attrition
Loss of respondents to treatment or to measurement can produce artifactual effects if that loss is systematically correlated with conditions.
You are working with a school, but then another school opens and takes half your subjects away. Or you are working with birds and some of them die: they are no longer available for treatment or measurement in the study.
Testing
Exposure to a test can affect scores on subsequent exposures to that test, an occurrence that can be confused with a treatment effect.
Participants may remember the correct answers or may become conditioned to the fact that they are being tested. Repeatedly taking the same or similar intelligence tests usually leads to score gains, so instead of concluding that the underlying skills have changed for good, this threat to internal validity supplies a good rival hypothesis. (Wikipedia)
Instrumentation
The nature of a measure may change over time or conditions in a way that could be confused with a treatment effect.

You measure reaction time by having participants press a button. Over time, the button becomes easier to push, which artificially shortens the measured reaction times in a way that looks like improvement.

Additive and Interactive Effects of Threats to Internal Validity

The impact of a threat can be added to that of another threat or may depend on the level of another threat.

Inadequate Explication of Constructs
Failure to adequately explicate (analyze/develop in detail) a construct may lead to incorrect inferences about the relationship between operation and construct.
One boy gives another boy a black eye by accident, a second does it to take his candy, and a third threatens to give another boy a black eye. If "aggression" is defined as requiring both intent and a physical action, then only the second example counts as "aggression."
Construct Confounding

Operations usually involve more than one construct, and failure to describe all constructs may result in incomplete construct inferences.

We are studying people who have been unemployed for six months. However, that population also includes many African-Americans and victims of racial prejudice. These groups were not part of the intended construct "unemployed," but were nonetheless confounded with it in the study operations.
Mono-Operation Bias

Any one operationalization of a construct both underrepresents the construct of interest and measures irrelevant constructs, complicating inference.

If I operationalize "white people" only as people with white skin, I may incorrectly conclude that albinos are white people. To prevent this I should also include another operationalization, such as parentage.
Mono-Method Bias

When all operationalizations use the same method (e.g., self-report), the method is part of the construct actually studied.

You are observing the effect of music played in the workplace on productivity, but the music is operationalized in only one way: over the loudspeaker. There are other methods: employees could listen through headphones, etc.
Confounding Constructs with Levels of Constructs
Inferences about the constructs that best represent study operations may fail to describe the limited levels of the construct that were actually studied (the level gets confused with the idea).
If a treatment is compared with a control and only a low level of the treatment is used, it may show no effect even though a stronger dose would.
Treatment Sensitive Factorial Structure

The structure of a measure may change as a result of treatment, change that may be hidden if the same scoring is always used.

An educational treatment leads treated respondents to see (and answer) the measure differently from those not so treated.
Reactive Self-Reporting Changes

Self-reports can be affected by participant motivation to be in a treatment condition, motivation that can change after assignment is made.

"Applicants wanting treatment may make themselves look more needy or meritous

Reactivity to the Experimental Situation

Participant responses reflect not just treatments and measures but also participants' perceptions of the experimental situation, and those perceptions are part of the treatment construct actually tested.

"The response of subjects to questions may be shaped by their own expectations about how they should be answering"

Experimenter Expectancies
The experimenter can influence participant responses by conveying expectations about desirable responses, and those expectations are part of the treatment construct as actually tested.
A teacher's expectations about student performance become a self-fulfilling prophecy.
Novelty and Disruption Effects
Participants may respond unusually well to a novel innovation or unusually poorly to one that disrupts their routine, a response that must then be included as part of the treatment construct description.
Like music in the workplace: some think it's cool and others are annoyed by it.
Compensatory Equalization
When treatment provides desirable goods or services, administrators, staff, or constituents may provide compensatory goods or services to those not receiving treatment, and this action must then be included as part of the treatment construct description.
It seems too unfair to give one group the special treatment, so others start giving the control group special benefits similar to what the treatment group receives. This ruins the experiment.
Compensatory Rivalry
Participants not receiving treatment may be motivated to show they can do as well as those receiving treatment, and this compensatory rivalry must then be included as part of the treatment construct description.
Half of the members of a football team are randomly chosen to take an experimental performance-enhancing drug; the half not chosen are motivated to work harder during practice to prove they can do just as well as those with the treatment.
Resentful Demoralization

Participants not receiving a desirable treatment may be so resentful or demoralized that they respond more negatively than they otherwise would, and this resentful demoralization must then be included as part of the treatment construct description.

All the blue-eyed students in the class are forced to wear collars and are labeled the losers of the class. They are put down all day by the teacher and the other students (the treatment). When they are given a quiz, they score significantly lower because they are depressed and distracted by the mistreatment.
Treatment Diffusion
Participants may receive services from a condition to which they were not assigned, making construct descriptions of both conditions more difficult.
If researchers wanted to study the effects of an abortion law in MA, but MA mothers went to New York for abortions, it would obscure the study. Avoid similar influences and limit interaction between the control group and the group being studied.
Interaction of the Causal Relationship with Units

An effect found with certain kinds of units might not hold if other kinds of units had been studied.

You studied Mexicans, but the effects you saw with Mexicans are not seen with Africans.
Interaction of the Causal Relationship Over Treatment Variations
An effect found with one treatment variation might not hold with other variations of that treatment, or when that treatment is combined with other treatments, or when only part of that treatment is used.
Food stamp programs alleviate poverty only when combined with professional training; a food stamp program by itself is not effective.
Interaction of the Causal Relationship with Outcomes
An effect found on one kind of outcome observation may not hold if other outcome observations were used.

We studied increases in wealth, but the results differed depending on whether we looked at increases in income or increases in total assets.

Interactions of the Causal Relationship with Settings
An effect found in one kind of setting may not hold if other kinds of settings were to be used.

Our special lab rats outperformed normal rats in the maze until the lights were dimmed, and then all the rats performed the same.

Context-Dependent Mediation

An explanatory mediator of a causal relationship in one context may not mediate in another context.

My guess (which I think is right): a child smiles because you give him candy, and the mediator is that he was hungry; later he smiles when you give him candy because his parents have grounded him from sweets. Same cause and effect, but a different mediating reason. As Dr. Witesman put it, you have to control for the right variables; here the mediating variable (the real reason the child smiles) varies with the context.