237 Cards in this Set
What is the scientific method?
|
1) The systematic and deliberate gathering and evaluation of empirical data
2) generating and testing hypotheses based upon general psychological knowledge and theory
3) in order to answer questions that are critical and answerable |
|
What are the steps of the problem-solving cycle?
|
1) Research Q
2) Methodology
3) Data Collection
4) Data Analysis
5) Conclusion |
|
Theory
|
Relates empirical findings to each other and other phenomena in a cohesive manner
|
|
Parsimony
|
directs us to select the simplest version or account of the data among the alternatives that are available.
|
|
Moderator
|
variable that influences relationship of two variables of interest. When included it impacts or influences (strength or weakness) relationship between two variables
|
|
Mediator
|
process, mechanism or means through which a variable produces a particular outcome
|
|
Conceptual vs. Exact Replication
|
exact: repeating every detail of study
conceptual: testing the same or similar hypotheses, but using different measures or conditions |
|
Operational definitions
|
defining a concept on the basis of the specific operations used in the experiment
|
|
Interval scale
|
variables measured with numeric values with equal distance or space between each number; measures amount and distance. e.g. temperature
|
|
Ratio scale
|
has all the properties of an interval variable, and also has a clear definition of 0.0. When the variable equals 0.0, there is none of that variable. Variables like height, weight...
|
|
inferential statistics
|
stats taken from sample that are used to make inferences about population from which sample was drawn
|
|
environmental/situational variables
|
Altering what is done to, with, or by the subjects. e.g. Treatment v. no treatment (to); self-observation/self-monitoring tasks (by)
|
|
instructional variables
|
(specific type of environmental or situational manipulation) variations in what the participants are told or are led to believe through verbal or written statements about the experiment and their participation; aimed at altering the participant's perception, expectation, or evaluation of a situation or condition
|
|
subject/individualistic variables
|
Attributes or characteristics of the individuals; may also refer to characteristics to which subjects may be exposed (e.g. environmental contexts, living conditions, trauma)
|
|
organismic variables
|
Attributes or characteristics of the individuals; may also refer to characteristics to which subjects may be exposed (e.g. environmental contexts, living conditions, trauma)
|
|
efficacy
|
The impact of treatment in the context of a well-controlled study (non-clinical settings). increases internal validity
|
|
effectiveness
|
The impact of treatment in the context of clinical work, not the laboratory. Increases external validity
|
|
4 reasons we need theories
|
1. Bring order to an area where findings are diffuse or multiple.
2. Explain the basis of change and unite diverse outcomes.
3. Directs our attention to which moderators to study.
4. Application and extension of knowledge to the world beyond the lab by understanding how something operates (its critical mechanism) |
|
What is needed to show causality?
|
1. The cause must precede the effect
2. The cause and effect must co-vary: When the cause is present the effect is present. When the cause is absent, the effect is absent
3. There must be no other plausible explanations for the effect other than the presumed cause |
|
internal validity
|
How confident can we be that observed changes were due to the intervention, not extraneous variables. Defending against sources of bias.
|
|
external validity
|
The validity of inferences about whether a causal relationship observed in a study would be generalizable to the real world.
|
|
statistical conclusion validity
|
the facets of quantitative evaluation that influence the conclusions reached about cause and effect. Pertains to adequate:
1. Sampling procedures
2. Statistical tests
3. Reliable measurements |
|
construct validity
|
Experiments: Is the relationship due to the construct; explanation or interpretation or active ingredients? Measures: Does the measure assess the construct?
|
|
type I error
|
probability of rejecting a null hypothesis when it is true (too optimistic)
|
|
type II error
|
probability of accepting a null hypothesis when it is false (too pessimistic)
|
|
alpha
|
probability of a type I error: rejecting a null hypothesis when it is true
|
|
beta
|
probability of a type II error: accepting a null hypothesis when it is false
|
|
power
|
probability of rejecting the null when it is false. likelihood of finding the difference between conditions when the conditions are truly different
|
|
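The relationship between alpha, beta, and power can be made concrete with a small simulation. This is a minimal sketch, assuming a two-tailed z-test with known sigma = 1, alpha = .05, and hypothetical values of d = 0.5 with n = 50 per group:

```python
import math
import random

# Monte Carlo sketch of power: the probability of rejecting the null
# when it is false. Two-tailed z-test, sigma = 1 known, alpha = .05.
random.seed(42)

def one_trial(n=50, d=0.5):
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(d, 1.0) for _ in range(n)]
    diff = sum(treated) / n - sum(control) / n
    se = math.sqrt(2.0 / n)          # standard error of the difference
    return abs(diff / se) > 1.96     # reject the null?

power = sum(one_trial() for _ in range(2000)) / 2000
print(round(power, 2))               # theoretical power here is ~.70
```

The estimated power hovers around .70 for these values: when the conditions are truly different by d = 0.5, a study this size finds the difference only about 70% of the time.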
p value
|
probability that a value as extreme or more extreme than the one observed could occur by chance alone (usually 0.05). Smaller p value = less likely to make type I error
|
|
effect size
|
Magnitude of effect or difference between two or more conditions. Computing difference between group means and dividing by pooled standard deviation. 0.2-0.3 is small. 0.8+ is large
|
|
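The effect size computation described above can be sketched in a few lines. The group scores here are hypothetical, made up only to illustrate the arithmetic (difference between group means divided by the pooled standard deviation):

```python
import math

# Cohen's d sketch: mean difference over pooled SD.
treatment = [12, 14, 15, 11, 13, 16, 14, 15]   # hypothetical scores
control = [10, 11, 12, 9, 10, 13, 11, 12]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    # unbiased sample variance
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

n1, n2 = len(treatment), len(control)
pooled_sd = math.sqrt(((n1 - 1) * var(treatment) + (n2 - 1) * var(control))
                      / (n1 + n2 - 2))
d = (mean(treatment) - mean(control)) / pooled_sd
print(round(d, 2))   # -> 1.83, large by the 0.8+ benchmark
```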
Name the threats to internal validity
|
History
Maturation
Instrumentation
Testing
Statistical regression
Selection bias
Combination of selection x other threats
Diffusion & imitation of treatment
Special treatment & reactions of controls
Attrition |
|
History
|
threat to internal validity: Events, in addition to the independent variable, to which subjects are exposed that could influence their performance on the dependent variable
|
|
Maturation
|
threat to internal validity: changes as a result of ongoing, naturally occurring processes rather than independent variable. internal, physical or psychological, changes in subjects e.g. normal growth, general knowledge gains
|
|
Testing
|
threat to internal validity: taking test repeatedly increases scores: having control group where they take test with no intervention can help rule out
|
|
Instrumentation
|
threat to internal validity: changes in measurement devices, instructions, or methods of administration may affect the outcome of a study. Change in instruments or researchers. Includes observer drift.
|
|
Statistical regression
|
threat to internal validity: will travel back to mean with or without interventions. more plausible threat when subjects are selected because of their extreme scores.
|
|
Selection biases
|
threat to internal validity: systematic differences between groups before manipulations on basis of selection, when participant characteristics may interact with the IV. matching, random selection and random assignment help
|
|
Attrition
|
threat to internal validity: drop out across conditions at one or more time points that may be responsible for outcomes rather than the intervention itself.
|
|
Combination of selection x _____
|
threat to internal validity: when internal threats vary from group to group, when threat interacts differently with groups
|
|
Diffusion & Imitation of treatment
|
threat to internal validity: control group learn of the experimental arrangements or are accidentally exposed to the intervention
|
|
Special treatment or reaction of controls
|
threat to internal validity: control group's awareness of differences leads to compensatory rivalry or resentful demoralization
|
|
List the threats to external validity
|
sample characteristics
stimulus characteristics and settings
reactivity to experimental arrangements
reactivity to assessments
multiple treatment interference
novelty
test sensitization
timing |
|
Sample characteristics
|
a threat to external validity: Would the findings of one study apply to other people based on the study sample? Represents different ethnicities, genders, etc
|
|
Stimulus Characteristics and Settings
|
a threat to external validity: Aspects of the study may interact with the intervention and account for the effects
Setting, experimenters, interviewers, etc. Can we generalize from lab to clinical settings? |
|
Reactivity to experimental arrangements
|
a threat to external validity: Were the results affected because the subjects knew they were participating in a study? e.g. social desirability bias
|
|
Multiple Treatment Interference
|
a threat to external validity: refers to drawing conclusions about a given treatment when it is evaluated in the context of other treatments. to minimize: use exclusive/inclusive criteria
|
|
Novelty effect
|
a threat to external validity: Maybe the only reason the intervention worked was that it was different or innovative
|
|
Reactivity to assessment
|
a threat to external validity: includes obtrusive measures and reactive measures.
|
|
Test sensitization
|
a threat to external validity: repeated testing may affect test results, pretest may affect posttest
|
|
Timing of measurement
|
a threat to external validity: Would the same results have been seen had measurements been taken at other times? e.g. time of day, follow-up points
|
|
List the threats to construct validity
|
Attention and contact with clients
Single operation and narrow stimulus sampling
Experimenter's expectations
Cues of experimental situation |
|
Attention and contact with clients
|
threat to construct validity: Improvement (or effect) is merely due to attention given to subjects and not the intervention itself, placebo effect
|
|
Single operation and narrow stimulus sampling
|
threat to construct and external validity: e.g. using only one slide and measuring reactions, therapist competency rather than intervention produces effects. experimental manipulation or interventions include features that experimenter considers irrelevant that actually are relevant
|
|
Experimenter's expectations
|
threat to construct validity: Expectancies, beliefs, and desire about results on part of experimenter influence how subjects perform
|
|
Cues of experimental situation
|
threat to construct validity: Influential information is conveyed to subjects prior to experimental manipulation
example: Rumors about the experiment or information received during recruitment |
|
List threats to statistical conclusion validity
|
Low statistical power
Variability in the procedures
Subject heterogeneity
Unreliability of measures
Multiple comparisons and error rates |
|
Low statistical power
|
a threat to statistical conclusion validity: likelihood of concluding there is no difference when there is (type II error). often because the sample size is too small.
|
|
Variability in the procedures
|
a threat to statistical conclusion validity: Lack of consistency in the execution of the experimental procedures
|
|
Subject heterogeneity
|
a threat to statistical conclusion validity: The more diverse individuals in the sample, the greater the variability in the subjects' reactions to the measures and the intervention, making it less likely that one will be able to detect differences between conditions. to minimize: choose a more homogeneous sample or evaluating the effects of the relevant characteristics
|
|
Unreliability of measures
|
a threat to statistical conclusion validity: As measurement error increases, the reliability of the measurement tool decreases
Thus a greater portion of the subject's score will be due to random variation |
|
Multiple comparisons and error rates
|
a threat to statistical conclusion validity: deals with the number of statistical tests that will be completed. the more tests the more likely a chance difference will be found
|
|
Evaluation apprehension
|
Participants react to the testing situation, either improving or performing poorly based on anxiety about taking a test or participating in a study.
|
|
Loose protocol effect & recommendations
|
Source of bias: Failure of the investigator to specify critical details of the procedures that guide the experimenter's behavior
Recommendations: be explicit, automate procedures, anticipate Qs and give standard answers, train experimenters together, use confederate subjects, give post-interviews, foster high standards. |
|
Experimenter expectancy effects & recommendations
|
Source of bias: experimenter's expectations affect subject's performance.
Recommendations: double blind & naive observers |
|
Demand characteristics & recommendations
|
Source of bias: cues of experimental situation/experimenters
Recommendations: pre-inquiry (have subjects imagine procedures), post-inquiry (did they know the hypothesis), blind simulators (pretend to be subjects and try to fool assessors) |
|
Subject roles & recommendations
|
Source of bias: good, negativistic, faithful, and apprehensive subject
Recommendations: reassure participants (data won't be used for anything else, anonymity), give rewards before beginning study, ensure participants are naive to purpose |
|
Data recording/analysis
|
Source of bias: errors in recording/computing data, analyzing select portions of data, fabricating or "fudging" data.
|
|
File-drawer problem & recommendations
|
Source of bias: when only significant results get published, null results on the same subject are ignored
Recommendations: (revolution in psych research that supports publication of negative results or moves beyond hypothesis testing?) |
|
Subject selection bias & recommendations
|
Source of bias: your sample doesn't represent your population of interest (bias in selection, recruitment, screening...)
Recommendations: |
|
Experimenter characteristics & recommendations
|
Source of bias: experimenter's personal characteristics impact performance
Recommendations: specify experimenter characteristics in report (gender age etc), analyze data for experimenter characteristic effects |
|
Convenience sampling & recommendations
|
Source of bias: subjects were easily available, rather than chosen randomly
Recommendations: focus on recruiting those who wouldn't normally volunteer, screening requirements, consider impacts of volunteering, increase range of ppl you pull from |
|
Attrition & recommendations
|
Source of bias: losing subjects, especially systematically. can alter random composition of the groups (internal); limit generality to a special group (e.g., persistent subjects) or selection x intervention (external); and reduce sample size and power (statistical conclusion).
Recommendations: orientation, mailings/reminders during, appropriate incentives |
|
What are the four subject roles?
|
Good: tries to confirm hypothesis. Negativistic: tries to confirm null hypothesis. Faithful: tries to be unbiased. Apprehensive: worried about performance
|
|
random selection
|
each member of the population has an equal probability of being selected.
|
|
sampling frame bias
|
sampling frame is the list of sampling entities (people, households, organizations, etc) from which the sample is drawn. biased sample: the distribution of characteristics differs systematically from that of the study population.
|
|
matching
|
Used to obtain equivalent groups when a characteristic is known to be correlated with the DV
Participants are matched based on certain characteristics, then randomly assigned |
|
Three key elements of a true experiment
|
1. IV is manipulated in different ways across groups
2. Random assignment
3. Experimental control exerted to keep non-IV variables constant |
|
quasi-experimental design
|
control groups still used, but intact groups do not permit random assignment
|
|
factorial design
|
simultaneous investigation of 2+ IVs or levels
|
|
multiple treatment design
|
the same participants perform in all of the conditions
|
|
counterbalanced design
|
attempt to balance the order of treatment across subjects
|
|
crossover design
|
half the subjects get treatment A first, then treatment B, while the other half get treatment B first, then treatment A
|
|
order effect
|
The point in time (earlier or later in the sequence) in which treatment occurred may be responsible for the results
|
|
sequence effect
|
The arrangement of treatments contributes to their effects
sometimes referred to as multiple-treatment interference & carryover effects. Eg A works well only after B |
|
treatment differentiation
|
treatments in a study of two or more treatments were distinct ALONG PREDICTED DIMENSIONS
|
|
treatment integrity
|
ensuring that treatments are administered as intended
|
|
What are the advantages to using pretests?
|
Allows for matching (based on pretest)
May evaluate the matched variable
Allows for more powerful statistical tests (less error)
Permits assessment of clinically significant change
Allows evaluation of attrition |
|
What are the strengths of pretest-posttest control group designs?
|
controls for the threats to internal validity (i.e., history, selection bias, etc.). also see advantages to pretests:
Allows for matching (based on pretest)
May evaluate the matched variable
Allows for more powerful statistical tests (less error)
Permits assessment of clinically significant change
Allows evaluation of attrition |
|
What are the weaknesses of pretest-posttest control group designs?
|
pretest sensitization for external validity ("testing" for internal validity)
|
|
What is the key goal or purpose of a Solomon four-group design?
|
to assess the effect of pretesting on the effects obtained with an intervention. one control group and one experimental group receive no pretest.
|
|
When would a Latin square be used instead of a simple crossover design?
|
randomly assigning participants to a pre-determined set of treatment orders helps ensure that a subset of treatment order possibilities are each presented to a roughly equivalent number of participants. e.g. randomly assign equal numbers to A-B-C, A-C-B, C-B-A, C-A-B, B-A-C and B-C-A
|
|
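A cyclic Latin square like the one described can be generated mechanically. This sketch assumes three hypothetical treatments A, B, and C; each treatment appears once per order (row) and once per sequence position (column), so a subset of the k! possible orders still balances position effects:

```python
# Cyclic Latin square sketch of treatment orders.
treatments = ["A", "B", "C"]
k = len(treatments)
orders = [[treatments[(row + col) % k] for col in range(k)]
          for row in range(k)]
for order in orders:
    print("-".join(order))
# A-B-C
# B-C-A
# C-A-B
```

Participants would then be randomly assigned in equal numbers to the rows.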
What should be considered when choosing a comparison or control group?
|
1. Interests of the investigator
Choose a fair group, not one that will create bias in favor of hypothesis
2. Previous research findings
e.g. evidence may exist that indicates that a no-treatment control is not necessary
3. Practical and ethical constraints
Getting/keeping subjects, withholding treatment, deception, etc. |
|
What are the potential problems associated with a no-treatment control group?
|
Ethical issues: withholding tx
Practical problems: explanation of rationale, attrition |
|
What are the strengths of a waiting-list control group?
|
Not as difficult to get subjects
Effect of tx is replicated
Between-group and within-group comparisons |
|
What are the weaknesses of a waiting-list control group?
|
May not assess long term impact of tx (control group is no longer a control group during later follow-ups)
Depending on the situation, may still be ethically questionable |
|
What is the key purpose and use of a nonspecific-treatment group?
|
- addresses threats to internal validity as well as threats to construct validity of experiment.
- placebo effects
- finding out what facet of the intervention led to change
- groups allow us to rule out plausible threats but do not point to the specific reason for change |
|
What are the four advantages to using a routine or standard treatment comparison group?
|
1. Meets ethical standards - all subjects receive active treatment
2. Attrition less likely
3. Controls for nonspecific factors
4. Clinicians more satisfied as participants and consumers bc the question is one that is more clinically relevant and the study more closely resembles clinical work by including a standard treatment (is the new treatment really better?). |
|
Be able to describe the key differences between matching and a yoked control group.
|
Matching happens before the experiment, yoking happens afterwards (other differences...?)
|
|
systematic sampling
|
the researcher starts at a random point and selects every nth subject in the sampling frame. there is a danger of order bias if the sampling frame lists subjects in a pattern, but if the list is randomly ordered, it is equivalent to random sampling
|
|
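The random-start, every-nth-subject procedure can be sketched in a few lines. The 20-person frame and interval of 5 are hypothetical:

```python
import random

# Systematic sampling sketch: random start, then every nth subject.
random.seed(1)
frame = [f"subject_{i:02d}" for i in range(20)]
interval = 5
start = random.randrange(interval)   # random start point in [0, n)
sample = frame[start::interval]
print(sample)                        # 4 evenly spaced subjects
```

Because the frame here is a simple ordered list, any random start yields 4 subjects spaced exactly 5 apart.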
availability/convenience/haphazard sampling
|
sample whoever's easily accessible
|
|
quota sampling
|
convenience sampling, but with the constraint that proportionality by strata be preserved. e.g. requires a certain number of white male Protestants, a certain number of Hispanic female Catholics, etc, to improve the representativeness
|
|
snowball sampling
|
obtain referred subjects from the first few subjects, then additional referred subjects from the second set, and so on
|
|
Purposive Sampling
|
employs subjective judgment to identify individuals from the population in which we are interested; technique often used in qualitative research methods (e.g., focus groups, case studies)
|
|
Expert Sampling
|
researcher interviews a panel of individuals known to be experts in a field
|
|
mismatching
|
A procedure in which an effort is made to equalize groups that may be drawn from different samples (like two different clinics). Careful: the sample might be equal on a pretest measure of interest but regress toward different means upon retesting. Changes due to statistical regression might be misinterpreted as an effect due to the experimental manipulation.
|
|
no-treatment control group
|
assessed but receives no intervention; by including this group in the design, the effects of history and maturation are directly controlled
|
|
No-contact control group
|
subjects do not know or realize they are participating; attempts to diminish reactivity of participation; violation of informed consent; special permission and counsel required
|
|
Nonspecific-treatment
|
"attention-placebo" control group- addresses threats to internal validity as well as threats to construct validity of experiments; allow us to rule out plausible threats but do not point to the specific reason for change
|
|
Routine or standard treatment
|
comparing a new tx with the standard one when it may not be ethically defensible/feasible to give no tx
|
|
Yoked control group
|
equalizes the groups on a particular variable that might systematically vary (e.g. number of sessions attended)
|
|
Nonrandomly assigned or nonequivalent control group
|
subjects who were not part of the original subject pool and not randomly assigned to tx; e.g. with intact groups/quasi-experiments. helps rule out validity threats (history etc) but weak for comparison
|
|
What are the 2 key requirements for determining clinical significant change?
|
1. Pre- to post-difference score exceeds the RCI criterion and
2. Post-treatment score falls within the range of normative values |
|
What is the formula in words for the numerator of the Reliable Change formula?
|
RCI Numerator: (pre-treatment score)- (post-treatment score)=the difference in the presentation of the construct being measured after the intervention has been implemented. *A large difference in pre and post scores will yield a larger RCI.
|
|
What is the formula in words for the denominator of the Reliable Change formula?
|
RCI Denominator: Standard Error of Difference=Calculated by the standard deviation and reliability coefficients of normative data; estimation of the range of chance variation in scores.*A smaller denominator (Lower probability of chance variation in scores) will yield a larger RCI
|
|
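Putting the numerator and denominator together, the RCI computation can be sketched with hypothetical numbers. Following the deck, the numerator is (pre − post); the denominator is the standard error of difference, commonly computed as S_diff = sqrt(2) × SE with SE = SD × sqrt(1 − reliability) from normative data:

```python
import math

# Reliable Change Index sketch (all values hypothetical).
pre, post = 32.0, 18.0    # symptom scores (lower = better)
sd_norm = 7.5             # SD of normative data
reliability = 0.88        # reliability coefficient of the measure

se = sd_norm * math.sqrt(1 - reliability)   # standard error of measurement
s_diff = math.sqrt(2) * se                  # standard error of difference
rci = (pre - post) / s_diff
print(round(rci, 2))      # -> 3.81: exceeds 1.96, so reliable change
```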
Name 3 questions related to cut off methods and scores
|
1. Does the level of functioning at post-test fall outside the range of the dysfunctional population, where range is defined as extending to 2 sd's above (towards functionality) the mean for that population?
2. Does the level of functioning at post-test fall within the range of the functional (or normal) population, where range is defined as beginning at 2 sd's below the mean for the normal population?
3. Is the post-test score statistically more likely to be drawn from the functional than the dysfunctional distribution? |
|
When would you use cutoff method A?
|
When normative samples are not available.
|
|
If the non-clinical and clinical populations overlap, which cutoff method is preferred?
|
Method C (assessing whether the post-test score is statistically more likely to be drawn from the functional than the dysfunctional distribution) is preferred when the functional and dysfunctional populations overlap.
|
|
Which cut-off method is the most stringent?
|
Method A (with overlap)
|
|
Which cut-off method is the most lenient?
|
Method B
|
|
Which cut off method is typically in-between the most stringent and lenient cut off methods?
|
Method C
|
|
Which cut off method is preferred if the populations DO NOT overlap?
|
Method B: (Assessing the post-test scores in relation to the functional/normal population range) is preferred when there is no overlap.
|
|
What are the 4 critical issues that may present a problem when determining clinical significance?
|
1. There may not be 2 distinct distributions for the functional and dysfunctional
2. Defining a normative sample (and accounting for diversity factors)
3. Symptom reduction may not reflect decreased impairment or increased functionality
4. Reliability of measures of clinical significance |
|
What are the 2 key requirements for determining clinical significant change?
|
1: RCI exceeds +/- 1.96
2: Crosses the chosen cutoff point |
|
Put into words the numerator of the RCI formula.
|
The difference score between pre- and posttest scores for an individual
|
|
Put into words the denominator of the RCI formula.
|
The expected spread of the distribution of scores if no change had occurred while taking into account measurement error.
|
|
Describe how to compute a cutoff score using Method a. (When standard deviations are equal)
|
Two standard deviations from the mean of the DYSFUNCTIONAL population (in the direction of Functionality)
|
|
Describe how to compute a cutoff score using Method b. (When standard deviations are equal)
|
Two standard deviations from the mean of the FUNCTIONAL population (in the direction of Dysfunctionality)
|
|
Describe how to compute a cutoff score using Method c. (When standard deviations are equal)
|
Halfway between the means of the functional and dysfunctional populations.
|
|
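The three cutoff computations (equal SDs) can be sketched with hypothetical normative data. Here, lower scores are assumed to mean more functional:

```python
# Cutoff methods a, b, c sketch (equal SDs; hypothetical values).
dys_mean, dys_sd = 30.0, 5.0       # dysfunctional population
fun_mean, fun_sd = 12.0, 5.0       # functional population

cutoff_a = dys_mean - 2 * dys_sd   # method a: 2 SDs from dysfunctional
                                   # mean, toward functionality -> 20.0
cutoff_b = fun_mean + 2 * fun_sd   # method b: 2 SDs from functional
                                   # mean, toward dysfunctionality -> 22.0
cutoff_c = (dys_mean + fun_mean) / 2   # method c: halfway -> 21.0
print(cutoff_a, cutoff_b, cutoff_c)
```

With these numbers, a post-test score of 20.5 would cross cutoff a but not b or c, showing how the choice of method changes who counts as clinically changed.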
When would you use cutoff method a?
|
When the data for the normative population is not known.
|
|
Nonclinical and Clinical Populations overlap: Which cutoff method is preferred?
|
Method c. Because the other two can be so variable.
|
|
Nonclinical and Clinical Populations overlap: Which cutoff method is the most stringent?
|
Method a.
|
|
Nonclinical and Clinical Populations overlap: Which cutoff method is the most lenient?
|
Method b.
|
|
Nonclinical and Clinical Populations DO NOT overlap: Which method is preferred?
|
Method b is preferred, except for populations where entering the nonclinical range may not be a realistic goal. Method c would be good to split the difference between over- and under-estimating change.
|
|
Nonclinical and Clinical Populations DO NOT overlap: Which method is most stringent?
|
Method b
|
|
Nonclinical and Clinical Populations DO NOT overlap: Which method is most lenient?
|
Method a
|
|
What is necessary to show a client has Improved?
|
An RCI greater than 1.96, but a post-test score that does not cross the chosen cutoff score.
|
|
What is necessary to show a client has Recovered?
|
An RCI greater than 1.96, and a post-test score that crosses the chosen cutoff score.
|
|
What is necessary to show a client has Deteriorated?
|
An RCI score that exceeds 1.96 in the direction of dysfunctionality.
|
|
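The three decision rules above combine an RCI and a cutoff score. This sketch uses a hypothetical cutoff of 21.0, assumes higher scores mean more dysfunction, and follows the deck's numerator (pre − post), so a positive RCI means improvement:

```python
# Sketch of Improved / Recovered / Deteriorated decision rules.
CUTOFF = 21.0   # hypothetical chosen cutoff score

def classify(rci, post, cutoff=CUTOFF):
    if rci > 1.96 and post < cutoff:
        return "recovered"            # reliable change AND crosses cutoff
    if rci > 1.96:
        return "improved"             # reliable change, cutoff not crossed
    if rci < -1.96:
        return "deteriorated"         # reliable change toward dysfunction
    return "no reliable change"

print(classify(3.81, 18.0))   # recovered
print(classify(2.50, 24.0))   # improved
print(classify(-2.10, 35.0))  # deteriorated
```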
What are the critical issues that may present a problem when determining clinical significance?
|
1: Lack of adequate normative information about outcome measures
2: Lack of psychometric information about measures
3: Outcome measurement reliability
4: Outcome measurement validity
5: Statistical regression effects |
|
Checking on Manipulation
|
Independently assessing the independent variable and its effects on the subjects; assesses whether the conditions of interest to the investigator were altered or provided to the subjects. E.g. questionnaire ("What info did you learn from the experimenter?") Kazdin 215
|
|
Intent-to-Treat Analysis
|
Analyzing the results, including data from ALL subjects originally assigned to groups and conditions (using last data that subjects provided before dropping out). Benefit: preserves random composition of groups to avoid selection bias.
|
|
Treatment Integrity
|
The fidelity with which a particular treatment is rendered in an investigation
|
|
3 Types of Manipulation
|
1: Variations of Information
2: Variations in Subject Behavior and Experience
3: Variation of Intervention Conditions |
|
Variations of Information
|
Different information given to subjects across experimental conditions
Ex: Varying info in instructions for an experiment; Fear, Positive, None |
|
Variations in Subject Behavior and Experience
|
Conditions vary based on what the subjects do, what tasks they engage in, or what they experience
Ex. Groups may vary in whether or not they are assigned HW outside of sessions |
|
Variation of Intervention Conditions
|
Similar to variation in subject experience, but covers a broader range of procedures; often requires assessing treatment integrity/fidelity
Ex. Simplest case: one group receives intervention, one does not |
|
What two situations come from Utility of Checking the Manipulation?
|
1: No differences between groups
2:Keeping conditions distinct |
|
2 Data patterns that cause interpretative problems
|
1: Effect on Manipulation Check but No Effect on Dependent measure
2: No Effect on Manipulation Check but an Effect on Dependent Measure |
|
Case-Control Designs
|
Study characteristic of interest (IV) by forming groups of individuals who vary on that characteristic
|
|
Cohort Designs
|
The study of intact groups over time
|
|
Prospective Study
|
Study designed to evaluate events or experiences that will happen in the future
|
|
Retrospective Study
|
Study designed to evaluate events or experiences that happened in the past
|
|
Central Characteristics of Observational Research Designs
|
-NO Random Assignment
-Examination of variables that cannot be manipulated experimentally |
|
Cross-Sectional Case-Control Design
|
Cases and controls selected and assessed in relation to CURRENT characteristics
Hypothesis about how groups will differ, results are correlational |
|
Retrospective Case-Control Design
|
Goal is to draw inferences about some antecedent condition associated with outcome (DV)
Subjects elaborate on past; attempt to identify a time line between cause and effect |
|
Case-Control Design Strengths
|
Well suited to studying infrequent conditions
Feasible & efficient in costs and resources
Lower attrition
Ability to study moderators
May get equivalent groups, rule out plausible threats
Generate hypotheses about causal relationships |
|
Case-Control Design Weaknesses
|
If time line not validated then cannot truly show cause preceding effect in time
Causal relations cannot be directly demonstrated
Possible sources of sampling bias |
|
2 Key differences between Case-control and Cohort Designs
|
1: Cohort designs follow samples over time to identify factors leading to an outcome of interest
2: Cohort group is assessed before outcome has occurred; Case-Control selects groups based on outcome that has already occurred |
|
Key Feature of Accelerated Multicohort Longitudinal Design
|
Inclusion of cohorts who vary in age when they enter the study; requires less time than if one group was studied over specific time period
|
|
Cohort Design Strengths
|
Timeline b/n antecedents and outcome of interest may be established
May explore full range of possibilities
Good for testing theories about risk, protective and causal factors, moderators, and mediators |
|
Cohort Design Weaknesses
|
May take considerable time to complete
May be very costly
Susceptible to higher levels of attrition
Cohort effects possible, making results specific to groups studied
Outcome of interest may have a low base rate in populations |
|
Grounded Theory
|
The development of theory from careful and intensive observation and analysis of the phenomenon of interest
|
|
Triangulation
|
Using multiple procedures, sources, or perspectives to converge to support the conclusions
|
|
Confirmability
|
The extent to which an independent reviewer could conduct a formal audit and re-evaluation of the procedures and generate the same findings
|
|
Trustworthiness
|
The extent to which the data have transferability, dependability, and confirmability
|
|
Transferability
|
Whether the data are limited to a particular context and is evaluated by looking at any special characteristics (unrepresentativeness) of the sample
|
|
5 Types of Validity that convey nature of qualitative research methods
|
Descriptive
Interpretive
Theoretical
Internal
External |
|
Descriptive Validity
|
Accuracy of the info reported by the investigator
|
|
Interpretive Validity
|
Accuracy of the interpretation of the "meaning"
|
|
Theoretical Validity
|
Does the explanation of the phenomena fit the data?
|
|
Internal Validity
|
Are there other sources (variables) that could explain the results?
|
|
External Validity
|
Are the findings generalizable?
|
|
Formal Guidelines and Procedures of Qualitative Research
|
-Collecting info
-Guarding against bias and artifact
-Making interpretations
-Checking on interpretations and the investigator
-Ensuring internal consistency and confirmability of findings
-Seeking triangulation of methods and approaches
-Encouraging replication, both within a particular data set and with additional data |
|
Common Rule
|
Federal policy for the protection of human subjects. Requirements for:
-Assuring compliance by research institutions
-Researchers obtaining and documenting informed consent
-IRB membership, function, operations, review of research, and record keeping
-Additional protections for certain vulnerable research subjects (e.g., pregnant women, prisoners, children) |
|
Anonymity
|
Ensuring that the identity and performance of the subjects in an investigation are not revealed and cannot be identified
|
|
Confidentiality
|
Not disclosing information obtained from a subject in an experiment without the awareness and consent of the participant
|
|
Debriefing
|
Providing a description of the experiment and its purposes; can resolve potential harmful effects of deception
|
|
Conflict of Interest
|
Any situation in which an investigator may have an interest or obligation that can bias a research project
|
|
3 Basic Ethical Principles (Belmont Report)
|
Respect
Beneficence
Justice |
|
Respect
|
Truly informed consent
|
|
Beneficence
|
Justifiable risk exposure; potential risks of a study are minimal or justified by potential benefits
|
|
Justice
|
Societal Concern and Scientific Merit; selection of research participants is fair
|
|
What are the 2 key requirements for determining clinical significant change?
|
1: RCI exceeds +/- 1.96
2: Post-test score crosses the chosen cutoff point |
|
Put into words the numerator of the RCI formula.
|
The difference score between pre- and posttest scores for an individual
|
|
Put into words the denominator of the RCI formula.
|
The expected spread of the distribution of scores if no change had occurred while taking into account measurement error.
|
|
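Putting the two cards above together, the Jacobson-Truax RCI can be sketched in Python. This is a minimal illustration of the formula described on the cards; the variable names and example values are assumptions, not from the card set.

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """RCI = (post - pre) / s_diff.

    Numerator: the pre-post difference score for an individual.
    Denominator: the spread expected if no true change had occurred,
    taking measurement error (via the reliability estimate) into account.
    """
    se_measurement = sd_pre * math.sqrt(1 - reliability)  # standard error of measurement
    s_diff = math.sqrt(2 * se_measurement ** 2)           # standard error of the difference
    return (post - pre) / s_diff
```

For example, with a pretest of 32, a posttest of 20, a pretest SD of 7.5, and a reliability of .88 (all made-up numbers), the RCI is about -3.27, which exceeds the +/- 1.96 criterion.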
Describe how to compute a cutoff score using Method a. (When standard deviations are equal)
|
Two standard deviations from the mean of the DYSFUNCTIONAL population (in the direction of Functionality)
|
|
Describe how to compute a cutoff score using Method b. (When standard deviations are equal)
|
Two standard deviations from the mean of the FUNCTIONAL population (in the direction of Dysfunctionality)
|
|
Describe how to compute a cutoff score using Method c. (When standard deviations are equal)
|
Halfway between the means of the functional and dysfunctional populations.
|
|
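The three cutoff methods (equal-SD case) from the cards above can be sketched numerically. A minimal sketch: it assumes lower scores are more functional, and all means/SDs in the example are illustrative.

```python
def cutoff_a(mean_dys, sd_dys):
    # Method a: 2 SD from the DYSFUNCTIONAL mean, in the direction of functionality
    return mean_dys - 2 * sd_dys

def cutoff_b(mean_func, sd_func):
    # Method b: 2 SD from the FUNCTIONAL mean, in the direction of dysfunctionality
    return mean_func + 2 * sd_func

def cutoff_c(mean_func, mean_dys):
    # Method c (equal SDs): halfway between the functional and dysfunctional means
    return (mean_func + mean_dys) / 2
```

With an illustrative functional mean of 15 (SD 5) and dysfunctional mean of 40 (SD 5): method a gives 30, method b gives 25, and method c gives 27.5.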
When would you use cutoff method a?
|
When the data for the normative (functional) population are not known.
|
|
Nonclinical and Clinical Populations overlap: Which cutoff method is preferred?
|
Method c. Because the other two can be so variable.
|
|
Nonclinical and Clinical Populations overlap: Which cutoff method is the most stringent?
|
Method a.
|
|
Nonclinical and Clinical Populations overlap: Which cutoff method is the most lenient?
|
Method b.
|
|
Nonclinical and Clinical Populations DO NOT overlap: Which method is preferred?
|
Method b is preferred, except for populations where nonclinical normative information may not be available or appropriate. Method c is a good alternative to split the difference between over- and under-estimating change.
|
|
Nonclinical and Clinical Populations DO NOT overlap: Which method is most stringent?
|
Method b
|
|
Nonclinical and Clinical Populations DO NOT overlap: Which method is most lenient?
|
Method a
|
|
What is necessary to show a client has Improved?
|
An RCI greater than 1.96, but a post-test score that does not cross the chosen cutoff score.
|
|
What is necessary to show a client has Recovered?
|
An RCI greater than 1.96, and a post-test score that crosses the chosen cutoff score.
|
|
What is necessary to show a client has Deteriorated?
|
An RCI score that exceeds 1.96 in the direction of dysfunctionality.
|
|
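The improved/recovered/deteriorated categories on the preceding cards combine the two criteria (RCI and cutoff). A minimal sketch, assuming higher scores indicate greater dysfunction (so improvement is a score decrease); the function name and thresholds used in the example are illustrative.

```python
def classify_change(pre, post, s_diff, cutoff):
    """Jacobson-Truax outcome categories, assuming higher scores
    indicate greater dysfunction (improvement = score decrease)."""
    rci = (post - pre) / s_diff
    if rci <= -1.96 and post < cutoff:
        return "recovered"      # reliable change AND crossed the cutoff
    if rci <= -1.96:
        return "improved"       # reliable change, cutoff not crossed
    if rci >= 1.96:
        return "deteriorated"   # reliable change toward dysfunctionality
    return "unchanged"          # no reliable change
```

For example, a drop from 32 to 20 with s_diff = 3.67 and a cutoff of 25 would be classified as recovered, while the same drop ending at 28 would be improved.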
What are the critical issues that may present a problem when determining clinical significance?
|
1: Lack of adequate normative information about outcome measures
2: Lack of psychometric information about measures
3: Outcome measurement reliability
4: Outcome measurement validity
5: Statistical regression effects |
|
z score
|
closest approximation to a standard unit of measurement. useful for comparing different variables/different instrumentation. Z = (raw score-mean)/SD
|
|
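The z-score formula on the card above is simple enough to sketch directly; the example values are illustrative.

```python
def z_score(raw, mean, sd):
    # z = (raw score - mean) / SD: expresses a score in SD units,
    # which allows comparison across different variables/instruments
    return (raw - mean) / sd
```

For example, a raw score of 130 on a scale with mean 100 and SD 15 gives z = 2.0.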
coefficient of determination
|
Represents the percentage of variance in the criterion that can be explained by variance associated with the predictor
Indicates the test's ability to account for individual performance differences
Squared validity coefficient (the validity coefficient is the correlation coefficient between the test and the criterion variable) (lecture 7, slide 55) |
|
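As the card above notes, squaring the validity coefficient gives the coefficient of determination. A quick sketch (the example correlation is illustrative):

```python
def coefficient_of_determination(validity_coefficient):
    # r squared: proportion (often reported as a percentage) of variance
    # in the criterion explained by variance associated with the predictor
    return validity_coefficient ** 2
```

For example, a test-criterion correlation of .50 explains .25 (25%) of the criterion variance.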
What is the difference between concurrent validity and predictive validity strategies?
|
Concurrent validity = predictor and criterion data are collected at the same time. Predictive validity = criterion data are collected at a future time. (lecture 7, slide 56)
|
|
The Belmont Report established these three basic ethical principles:
|
Respect for persons
Beneficence
Justice |
|
Observational research designs are characterized by...
|
Lack of IV manipulation, study of intact groups
|
|
Consider an experiment in which subjects are tested on both an auditory reaction time task and a visual reaction time task. Half of the subjects were given the visual first and half the auditory.
|
What is a counterbalanced or a crossover design?
|
|
Single subject case designs are most similar to this type of experimental design...
|
What is a within-subjects, repeated measures, or multiple treatments design?
|
|
In multiple treatment designs the possibility exists that the point in time in which a treatment occurred, rather than the specific treatment, might be responsible for changes in the DV.
|
What are order effects?
|
|
What are two practical problems associated with a no-treatment control group?
|
Providing an explanation/rationale to subjects for withholding treatment, and increased attrition rates
|
|
How might the threat of selection bias be minimized?
|
Random assignment
Matching
Use of analysis of covariance (ANCOVA) |
|
The two types of criterion-related validity strategies and the difference between them:
|
Concurrent validity versus predictive validity
difference is the time of collection of criterion validity |
|
What are the two major case-control designs?
|
Cross-sectional and Retrospective
|
|
What are the 2 conditions of a true experiment?
|
In addition to random assignment to conditions, the IV is MANIPULATED in different ways across groups, and experimental control is exerted by trying to hold VARIABLES OTHER THAN THE IV constant
|
|
What are four limitations of the case study?
|
1. No hypothesis test, and many alternative explanations exist
2. Open to judgment and interpretation
3. Inferences based on clients' reports and their reconstruction of the past (retrospective in nature)
4. Cannot generalize findings confidently because only one person is involved |
|
What is the key goal or purpose of a Solomon four-group design?
|
To assess whether the pretest itself influences the effects obtained through the intervention. (The pretest-posttest control group design combined with the basic posttest-only control group design.)
|
|
What are two of the four methods (or criteria) for quantitative data evaluation in a single-case design?
|
1. Changes in means - shifts in the average across phases
2. Latency of the change - how quickly change occurs after a phase shift
3. Changes in level
4. Change in slope |
|
What are two weaknesses of the case-control design?
|
1.Inability to directly demonstrate causal relationship
2.Sources of sampling bias |
|
What is the denominator of the reliable change score described as?
|
Expected spread of distribution if no change had occurred while taking into account measurement error; within this is the reliability estimate of the measure.
|
|
When a validity coefficient is squared we get the coefficient of determination, which represents what?
|
The percentage of variance in the criterion variable that can be explained by variance associated with the predictor.
|
|
What is the key purpose of the yoked control group?
|
To ensure groups are equivalent in the frequency and timing of events they experience during the study (e.g., number and duration of contacts), ruling these out as confounds
|
|
What are the two key differences that help to distinguish a cohort design from a case-control design?
|
1. Cohort designs follow samples over time to identify factors leading to an outcome
2. The cohort group is assessed before the outcome occurs; the case-control group is selected after the outcome has already occurred |
|
What are the advantages of a pretest?
|
1. Permits assessment of clinically significant change
2. Allows for matching
3. Allows evaluation of the matched variable
4. More powerful statistical testing
5. Allows evaluation of attrition |
|
What are two situations that may be examined by assessing the utility of the manipulation check?
|
1. When no differences are found between groups (did the manipulation take effect?)
2. Verifying that the conditions were kept distinct and implemented as intended |
|
What are 4 advantages of using a routine or standard treatment comparison group?
|
1. Meets ethical standards
2. Attrition is less likely
3. Controls for nonspecific factors of therapy
4. Clinicians are more satisfied as study personnel and consumers |
|
What are the four key characteristics of single-case experimental designs?
|
1. Continuous assessment
2. Baseline assessment
3. Use of different phases
4. Stability of performance |
|
When discussing the validity of qualitative research what is meant by triangulation?
|
Triangulation is the extent to which data from separate sources converge to support the conclusions. (multiple sources, procedures or perspectives)
|
|
What is key feature of accelerated multi-cohort longitudinal design?
|
Inclusion of cohorts who VARY IN AGE when they enter the study; multiple groups of different-aged participants each cover a different segment of the time period, requiring less time than following one group across the entire span |
|
What are the three types of multiple baseline designs?
|
1. Across behaviors
2. Across individuals
3. Across situations |
|
What should be considered when choosing a comparison or control group? (1 of 3)
|
1. Interests of the investigator
2. Previous research findings
3. Practical and ethical constraints/considerations |
|
Efficacy studies place an emphasis on __________ while effectiveness studies place an emphasis on __________
|
Efficacy-experimental controls for INTERNAL VALIDITY
Effectiveness-generalizability of findings EXTERNAL VALIDITY |
|
What are the two types of effects that may be tested in a factorial design?
|
1.Main effects
2.Interaction effects |
|
This design consists of an outcome measured at baseline, then again during the intervention phase, and finally again during a phase that involves return to an original baseline...
|
What is an ABA design?
|
|
One strategy for assuring group equivalence regarding a specific demographic variable that is typically used when the variable is thought to correlate with the DV or interact with the IV in a significantly error producing manner...
|
What is Matching?
|
|
Please describe why the Solomon Four-group design is considered a factorial design
|
It has 2 or more factors with 2 or more levels each:
the design combines the pretest-posttest control group design with the basic posttest-only control group design, crossing pretest (provided vs. not provided) with the experimental intervention (Tx vs. no Tx) |
|
z score
|
The closest approximation to a standard unit of measurement
|
|
Coefficient of Determination
|
-Validity coefficient squared
-Percentage of variance in the criterion that can be explained by variance associated with the predictor
-Indicates the test's ability to account for individual performance differences |
|
Concurrent validity strategy
|
Predictor and criterion data are collected AT THE SAME TIME
|
|
Predictive validity strategy
|
Criterion data are collected at a FUTURE TIME
|