154 Cards in this Set
3 sources of evidence

1. Patient’s Unique Values and Circumstances
2. Best Research Evidence
3. Clinical Expertise

5 steps for EBP

1. Ask a focused clinical question
2. Search for the best research evidence
3. Appraise the quality of the research evidence
4. Integrate the research evidence with information about the patient and clinical expertise
5. Reflect on the process to improve it in the future

USC evidence pyramid (from best to worst)

Level 1a: Systematic reviews
Level 1b: Randomized clinical trials
Level 2b: Cohort studies
Level 3b: Case-control studies
Level 4: Case series
Level 4: Case studies
Level 5: Narrative reviews, expert opinion, textbooks

What makes a clinical question well built?

Should be directly relevant to the problem at hand.
Should be phrased to facilitate searching for a precise answer.

Background questions characteristics

Ask for information about a condition.
Two essential components:
- a question root (who, what, where, when, how, why) with a verb
- a condition or an aspect of a condition
Can cover a range of biologic, psychological, or sociological questions.

Foreground questions characteristics

Ask for specific information about managing patients with a condition.
3 essential components (PECO) 

3 parts of clinical question (PECO)

1. Participants (the patient(s) you want to treat)
2. Exposure (an intervention, if about therapy) and/or Comparison (there is always an alternative! another therapy, nothing, …)
3. Outcome (usually a disease or condition you want to prevent or manage)

Major sources of knowledge/information

Tradition
Authority
Trial and error
Logical reasoning
Scientific method (research)

Hierarchy of Study Types

1. Evidence-based clinical guidelines
2. Systematic reviews and meta-analyses of randomized controlled trials
3. Randomized controlled trials
4. Non-randomized intervention studies
5. Observational studies
6. Qualitative studies
7. Case series, case reports

Systematic reviews characteristics

Literature reviews focused on a single question that try to identify, appraise, select, and synthesize all high-quality research evidence relevant to that question.
They are like scientific investigations in themselves, using pre-planned methods and an assembly of original studies that meet their criteria as 'subjects'.
They synthesize the results of an assembly of primary investigations using strategies that limit bias and random error.

Meta-analyses characteristics

The statistical analysis of a large collection of results from individual studies for the purpose of integrating the findings.
Used to synthesize research findings and evaluate the effectiveness of treatments or the accuracy of diagnostic tools.

Randomized Controlled/Clinical Trials characteristics

Randomized controlled clinical trials are experimental studies of cause and effect relationships between treatments and outcomes.
Treatment = independent variable
Outcomes = dependent variable(s)

3 ways to randomize for a study

Random selection of a sample from the sampling frame of the population
Random assignment of the sample to groups
Random assignment of groups to treatment(s) and control conditions

cohort studies characteristics

Observational studies in which a defined group of people (the cohort) is followed over time (also known as longitudinal studies).
Outcomes of people in subsets of the cohort are compared, to examine people who were exposed or not exposed (or exposed at different levels) to a particular intervention or other factor of interest.

case control studies characteristics

Observational research comparing subjects who have a specific condition (the 'cases') with patients who do not have the condition but are otherwise similar (the 'controls').
No intervention is provided on the part of the researchers. People who already have the condition of interest are compared to a group of people without the condition. 

case series characteristics

Descriptive research.
A group or series of case reports involving patients who were given similar treatment.
Reports of case series usually contain detailed information about the individual patients, including demographics (for example, age, gender, ethnic origin) and information on diagnosis, treatment, response to treatment, and follow-up after treatment.
A medical research study that tracks patients with a known exposure given similar treatment, or examines their medical records for exposure and outcome (also known as a clinical series).

case studies/reports characteristics

Describe practice.
Often focus on a patient or a group of patients, but may also focus on facilities, education programs, or other definable units.
Topics often include patient/client management, ethical dilemmas, use of equipment or devices, etc.
Case reports can’t prove effectiveness, test hypotheses, or prove cause and effect, and the outcomes they report can’t be generalized to other patients or entities.

Narrative Reviews, Expert Opinion, Textbooks characteristics

Typically based on observation and experience; however, sometimes based on “that’s how it’s always been done.”
Different from the “Clinical Expertise” aspect of EBP, which relates to the expertise of the individual therapist striving to inform her/his practice through use of the evidence and the patient’s unique values and circumstances.

scientific method

1. Observe an event.
2. Develop a hypothesis that makes a prediction.
3. Test the hypothesis.
4. Observe the result.
5. Revise the hypothesis.
6. Repeat as needed.

Scales / Levels of Data Measurement

Nominal (qualitative)
Ordinal > Interval > Ratio (quantitative, progressively more precise mathematically)

nominal measures characteristics

A qualitative (or categorical) level of measurement; has no mathematical interpretation;
Variables whose values vary in kind or quality but not in amount. In terms of the variable “Occupation”, you can say that a lawyer is not equal to a therapist, but you cannot say that the “lawyer” is “more occupational” or “less occupational” than the therapist. 

Ordinal measures characteristics

At this level, you specify only the order of the cases, in “greater than” and “less than” distinctions.
Patient/client satisfaction is an ordinal measure.
A rehab-specific example is Manual Muscle Testing (MMT) grades: Trace > Poor > Fair+ > Good > Normal.

interval measures characteristics

At the interval level of measurement, numbers represent fixed measurement units but have no absolute zero point.
A frequent example is that of temperatures measured on the Fahrenheit scale: the temperature can definitely go below zero.

Ratio measures characteristics

Represents fixed measuring units with an absolute zero point. Zero, in this situation, means absolutely no amount of whatever the variable indicates.
On a ratio scale, 10 is two points higher than 8 and is also two times greater than 5.
Ratio numbers can be added and subtracted; because the numbers begin at an absolute zero point, they can also be multiplied and divided (so ratios can be formed between the numbers).
Example: goniometry ROM.

Reliability definition

Extent to which a measure produces the same result under different conditions (e.g., consistency)
Reliability is a property of a measurement instrument… not of an experiment/study. 

Types of reliability

Test-retest reliability: Will the measure produce the same results when given on two different occasions? Typically expressed as a correlation coefficient (r).
Inter-rater reliability: The extent to which two or more raters agree.
Intra-rater reliability: The degree of agreement among multiple repetitions of a diagnostic test performed by the same individual.
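As a sketch, test-retest reliability can be illustrated by computing Pearson's r by hand on two administrations of the same measure; all scores below are hypothetical, not from the source.

```python
# Sketch: test-retest reliability as Pearson's r, computed by hand on
# hypothetical scores from two administrations of the same measure.
from statistics import mean

day1 = [55, 60, 48, 72, 66, 58]  # first administration
day2 = [57, 59, 50, 70, 68, 57]  # second administration, same subjects

m1, m2 = mean(day1), mean(day2)
num = sum((a - m1) * (b - m2) for a, b in zip(day1, day2))
den = (sum((a - m1) ** 2 for a in day1)
       * sum((b - m2) ** 2 for b in day2)) ** 0.5
r = num / den
print(round(r, 2))  # close to 1.0, i.e., high test-retest reliability
```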

validity of measures definition

Extent to which the measure indicates what it is supposed to measure


face validity definition

Is the measure appropriate at face value? Does the measure ‘look like’ it is going to measure what it is supposed to measure?


Internal validity definition

Internal validity: Are the methods used in the study correct and are the results accurate?


External validity definition

External validity: Are the findings applicable beyond that particular study?


Questions to ask regarding external validity

Is the study purpose relevant to your clinical question?
Are the study’s inclusion and exclusion criteria clearly defined, and would the patient in your clinical question qualify for the study?
Are the intervention and comparison/control groups receiving an intervention related to your clinical question?
Are the outcome measures used in the study relevant to your clinical question, and are they conducted in a clinically realistic manner?
Is the study population sufficiently similar to the patient in your clinical question to justify the expectation that the patient would respond similarly to the population?

types of central tendency

Mode (can be used with any type of data)
Median (interval and ratio data; frequently ordinal data; never nominal data)
Mean (interval and ratio data; sometimes ordinal data; never nominal data)

spread/variability types

Range (minimum – maximum)
Interquartile range (use with medians)
Standard deviation (use with means)

Definition of central tendency

A way of summarizing the data using a single value that is in some way representative of the entire data set
It is not always possible to follow the same procedure in producing a central representative value: this changes with the shape of the distribution 

mode characteristics

Most frequent value.
Does not take into account exact scores.
Unaffected by extreme scores.
Not useful when there are several values that occur equally often in a set.

median characteristics

The value that falls exactly at the midpoint of a ranked distribution.
Does not take into account exact scores.
Unaffected by extreme scores.
In a small set it can be unrepresentative.

mean characteristics

Takes into account all values.
Easily distorted by extreme values.
The preferred measure of central tendency, except when: there are extreme scores or skewed distributions; the data are non-interval; the variables are discrete.
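A minimal stdlib sketch of the three measures on a small hypothetical data set; the numbers are invented to show how one extreme score distorts the mean but not the mode or median.

```python
# Stdlib sketch of mode, median, and mean on a hypothetical data set;
# the extreme value (90) distorts the mean but not the mode or median.
import statistics

scores = [2, 3, 3, 4, 5, 5, 5, 90]

print(statistics.mode(scores))    # 5: the most frequent value
print(statistics.median(scores))  # 4.5: midpoint, unaffected by the 90
print(statistics.mean(scores))    # 14.625: pulled far upward by the 90
```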

spread/variability definition

Describes, in an exact quantitative measure, how spread out/clustered together the scores are.
Variability is usually defined in terms of distance.

The range characteristics

Simplest and most obvious way of describing spread/variability.
Range = Highest − Lowest.
The range takes into account only the two extreme scores and ignores any values in between. To counter this, the distribution is divided into quarters (quartiles): Q1 = 25%, Q2 = 50%, Q3 = 75%.

deviation definition

A more sophisticated measure of variability is one that shows how scores cluster around the mean
Deviation is the distance of a score from the mean 

standard deviation characteristics

A number that measures how far each value in a data set lies from the mean.
If the standard deviation is large, the numbers are spread out from their mean; if it is small, the numbers are close to their mean.
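The large-SD/small-SD contrast can be sketched with two hypothetical data sets that share a mean but differ in spread (all values invented):

```python
# Sketch: two hypothetical data sets with the same mean but very
# different standard deviations, using Python's stdlib.
import statistics

tight = [48, 49, 50, 51, 52]  # values clustered near their mean
wide = [20, 35, 50, 65, 80]   # same mean, values spread far from it

print(statistics.mean(tight), statistics.mean(wide))  # both means are 50
print(statistics.stdev(tight))  # ≈ 1.58: small SD, numbers close to the mean
print(statistics.stdev(wide))   # ≈ 23.72: large SD, numbers spread out
```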

frequency distribution tables characteristics

Highest score is placed at the top.
All observed scores are listed.
Gives information about distribution, variability, and centrality.

normal distribution characteristics

Bell-shaped: a specific shape that can be defined by an equation.
Symmetrical around the midpoint, where the greatest frequency of scores occurs.
In a normal distribution, the mean, median, and mode are the same value.

The "beauty" of the normal distribution

No matter what the mean and standard deviation are for your data set, the area within one standard deviation is about 68% of your data; the area within two standard deviations is about 95%; and the area within three standard deviations is about 99.7%.
aesthetically pleasing 
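The 68/95/99.7 rule can be checked with a small stdlib simulation; the mean (100) and SD (15) are arbitrary choices to show that the proportions hold regardless of the parameters.

```python
# Sketch: verify the empirical (68-95-99.7) rule by simulation.
import random

random.seed(0)
# 100,000 draws from a hypothetical normal distribution (mean 100, SD 15)
sample = [random.gauss(100, 15) for _ in range(100_000)]

def within(k):
    """Proportion of the sample within k standard deviations of the mean."""
    return sum(abs(x - 100) <= k * 15 for x in sample) / len(sample)

print(within(1), within(2), within(3))  # roughly 0.68, 0.95, 0.997
```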

population definition

All the individuals of interest to the study


sample definition

The particular group of participants you are testing: Selected from the population


inferential statistics allow us to

estimate population characteristics from sample data


parameters definition

are mathematical characteristics of populations


statistics definition

are mathematical characteristics of samples
used to estimate parameters 

how can we ensure samples are representative?

Samples drawn according to the rule of EPSEM (Equal Probability of Selection Method): Every case in the population has the same chance of being selected for the sample.
EPSEM produces a simple random sample that is likely to be representative of the population 

central limit theorem

For any trait or variable, even those that are not normally distributed in the population, as sample size grows larger, the sampling distribution of sample means will become normal in shape
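A quick simulation sketch of the theorem, using a deliberately non-normal (uniform) population; the sample size of 50 and the 5,000 repetitions are arbitrary illustration choices.

```python
# Sketch: central limit theorem on a non-normal population.
import random
from statistics import mean, stdev

random.seed(1)

# A clearly non-normal population: uniform on [0, 1) (flat, not bell-shaped)
def sample_mean(n):
    return mean(random.random() for _ in range(n))

# The distribution of means of n = 50 draws is approximately normal,
# centered on the population mean (0.5), even though the population is not.
means = [sample_mean(50) for _ in range(5_000)]
print(mean(means))   # close to 0.5
print(stdev(means))  # close to sigma / sqrt(n) ≈ 0.2887 / 7.07 ≈ 0.041
```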


Independent variables characteristics

Intentionally manipulated
Controlled
Vary at a known rate
The cause; graphed on the X-axis

Dependent variables characteristics

Intentionally left alone
Measured
Vary at an unknown rate
The effect; graphed on the Y-axis

Null hypothesis characteristics

Researchers make the initial assumption that manipulation of the independent variable will have NO EFFECT on the dependent variable (will be null).
Under the null hypothesis, any observed difference between the experimental and control groups is assumed to be due to chance (random error) unless proven otherwise!
Researchers statistically test the null hypothesis, usually intending to reject it.
You can only reject or fail to reject the null hypothesis.

goal of inferential statistics

To test whether the results achieve “statistical significance.”
A statistically significant result is one that is very unlikely to be due to chance variations or sampling error.

Type 1 error happens when

Can only happen if you reject the null hypothesis and conclude there is a difference between the groups when, in fact, there is no difference. Hard to undo once published.


Type 2 error happens when

Conclude that the difference between groups is so small that it doesn’t matter much or is very hard to detect; or
Conclude that the difference is big enough to care about, but your sample size was just too small to tell you much. This is called an “uninformative null finding.”

Type 1 error

Reject the null when there is really no difference (reject the null when the null is true).
In other words, we said there was a real difference between the groups when it was just chance.

Type 2 error

Failure to reject the null hypothesis when we should (we conclude there is no difference between groups, when in fact, there is a difference).


p-value characteristics

The probability of a difference occurring purely by chance.
A p-value of 0.05 could be interpreted as: “Given the data we have, there is a 5% chance that there really is no difference,” or “Given that there really is no difference, the chance that we would get data as extreme as ours is only 5%.”

alpha level characteristics

The researcher sets the significance level, also called “alpha,” at the outset of the study.
It determines how difficult it will be for the researchers to claim that their results are statistically significant.
Expressed as a probability, most commonly p < .05.
Translation: there is a probability of less than 5 in 100 that the difference between groups is due to sampling error.

p-value vs. α value

The p-value is the calculated probability of having committed a Type I error.
The α value is a set value: “α is the amount of risk I’m willing to accept, and I just need to see what the actual risk (the p-value) is in comparison.”

if the p-value is less than the α value, then you

reject the null hypothesis... you conclude that there really is a difference between your groups.
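The decision rule can be sketched in a few lines of stdlib Python; the z statistic of 2.1 is a hypothetical result, and a z-test is used here only because the normal CDF is available in the standard library.

```python
# Sketch of the p-vs-alpha decision rule, using a hypothetical z statistic.
from statistics import NormalDist

alpha = 0.05  # significance level chosen before the study

z = 2.1  # hypothetical observed test statistic from comparing two groups
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed p-value, ≈ 0.036

if p_value < alpha:
    print("Reject the null: conclude the groups really differ")
else:
    print("Fail to reject the null")
```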


statistical power definition

The probability of rejecting the null hypothesis when the alternative hypothesis is true.
As power increases, the chance of a Type II error decreases.

commonly accepted power level

.80 or higher


Power, or the Probability of Rejecting the Null Hypothesis Depends on

1. Sample size: the larger the sample, the higher the power.
2. The difference in means you are looking for (effect size): it is easier to find big differences and harder to find small ones.
3. The variation of your measurements: if individuals vary a great deal within a group, it will take a larger sample size to see the differences between groups.
4. The alpha level you require for the p-value: if you make the level 0.01 instead of 0.05, it will be harder to reject the null and power will go down.
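The sample-size point can be sketched with a Monte Carlo estimate of power; the effect size (0.5 SD), group sizes, and trial count are all arbitrary illustration choices, and a z-test with known SD is used for simplicity.

```python
# Sketch: Monte Carlo estimate of power for a two-group z-test
# (known SD = 1, two-tailed, alpha = .05). All parameters hypothetical.
import random
from statistics import NormalDist, mean

random.seed(2)

def simulate_power(n, effect=0.5, alpha=0.05, trials=2000):
    """Fraction of simulated tests that reject the null when the
    true mean difference between groups is `effect` SDs."""
    crit = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = .05
    se = (2 / n) ** 0.5                         # SE of the mean difference
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(effect, 1) for _ in range(n)]
        z = (mean(b) - mean(a)) / se
        hits += abs(z) > crit
    return hits / trials

print(simulate_power(30))   # moderate power with n = 30 per group
print(simulate_power(100))  # same effect, larger n -> noticeably higher power
```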

Pitfalls of large sample size

Clinically unimportant effects may be statistically significant if a study is large (and therefore, has a small standard error and extreme precision)
Pay attention to effect size and confidence intervals (spread of scores). 

confidence interval definition

A statement that the population parameter will fall within the interval with some specified probability (confidence level) for any sample.
Gives an estimated range of values that is likely to include the unknown population parameter, the range being calculated from a given set of sample data.
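A minimal sketch of computing such an interval from sample data; the scores are hypothetical, and a z critical value is used for simplicity even though a t critical value would be more appropriate for a sample this small.

```python
# Sketch: a 95% confidence interval for a mean, from hypothetical data.
# Uses a z critical value for simplicity; with n this small, a t value
# would be more appropriate in practice.
from statistics import NormalDist, mean, stdev

sample = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2, 11.9]
n = len(sample)
m, s = mean(sample), stdev(sample)

z = NormalDist().inv_cdf(0.975)        # ≈ 1.96 for a 95% confidence level
half_width = z * s / n ** 0.5          # margin of error
ci = (m - half_width, m + half_width)  # range likely to contain the parameter
print(ci)
```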

width of confidence interval gives us

gives us some idea about how uncertain we are about the unknown population parameter
wide interval may indicate that more data should be collected before anything very definite can be said about the parameter 

confidence level definition

The confidence level is the probability value associated with a confidence interval.
Often expressed as a percentage. For example, if α = .05 = 5%, then the confidence level is equal to (1 − 0.05) = 0.95, i.e., a 95% confidence level.

independent samples characteristics

Samples that have no effect on each other.
Two samples: unpaired t-test.
More than two samples: analysis of variance (ANOVA).

Dependent samples characteristics

Matched pairs, or one group tested more than once.
Two samples: paired t-test.
More than two samples: repeated-measures analysis of variance.

directional hypotheses characteristics

Specifies which of the group means the researcher expects to be greater than the other(s).
Justified only when evidence exists to support the expectation.
Tests for a difference that goes in one direction (using one-tailed tests/analyses).

nondirectional hypotheses characteristics

Specifies only that the group means will differ, not which one is expected to be greater than the other.
Appropriate when existing evidence does not support the superiority of one method over the other(s).

correlation definition

Examines relationships between variables, as opposed to comparisons (how alike measures of the variables are).
Correlation coefficients (−1 to 0 to +1) quantify the strength and direction of association between two variables.

Regression characteristics

Used for prediction.
Simple linear regression; multivariate regression.

tests for two independent samples

Interval/ratio data: unpaired t-test
Nominal data: chi-square
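For the nominal case, the chi-square statistic can be sketched by hand on a 2×2 table; all counts below are hypothetical.

```python
# Sketch: chi-square statistic for nominal data in a 2x2 table
# (rows = groups, columns = outcome). All counts are hypothetical.
table = [[30, 20],   # group A: improved / not improved
         [18, 32]]   # group B: improved / not improved

row_totals = [sum(r) for r in table]
col_totals = [sum(c) for c in zip(*table)]
grand_total = sum(row_totals)

# Sum of (observed - expected)^2 / expected over all four cells
chi_sq = sum(
    (table[i][j] - row_totals[i] * col_totals[j] / grand_total) ** 2
    / (row_totals[i] * col_totals[j] / grand_total)
    for i in range(2) for j in range(2)
)
print(round(chi_sq, 2))  # 5.77: exceeds 3.84, the critical value at df = 1, alpha = .05
```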

tests for two dependent samples

Interval/ratio data: paired t-test


Tests for more than two independent samples

Interval/ratio data: analysis of variance (ANOVA)


Test for more than two dependent samples

Repeated-measures analysis of variance


correlation tests

Interval/ratio data: Pearson product-moment correlation coefficient (Pearson’s r)
Nominal data: phi coefficient
Point-biserial correlation

regression test

Interval/ratio data: regression analysis


Validity of parametric statistics depends on certain assumptions about the data (when these assumptions are not met, nonparametric statistics must be used)

The sample is randomly drawn from a population that has a normal distribution.
The variances of the samples being compared are roughly equal (test for homogeneity of variance).
The data are interval or ratio scale.

Tap into Your Statistics Knowledge When Critically Appraising an Article

Validity of the study in regard to your clinical question (population, age, diagnosis, etc.)
Sampling strategies
Variables studied (do they link with your question?)
Reliability of the measures used in the study
Statistical analysis (parametric? nonparametric? correct test(s) used?)
Power (sample size, effect size, variation of measurements, selected alpha level)
Overall strengths and weaknesses of the study
Other issues (bias? etc.)

4 basic types of research designs

1. Experimental research
2. Descriptive research
3. Exploratory research
4. Integrative research

experimental research characteristics

Identifies cause and effect relationships among variables
Uses comparison statistics to identify the relationships 

common types of experimental research (in order of confidence in the validity of the outcomes and generalizability)

Randomized controlled trials (also called true experiments)
Non-randomized trials (also called quasi-experimental designs)
Single-subject designs (results usually presented in line graphs)

gold standard of experimental research

Randomized Controlled Trial (RCT)


3 ways to randomize for a study

Random selection of a sample from the sampling frame of the population
Random assignment of the sample to groups
Random assignment of groups to treatment(s) and control conditions

descriptive research characteristics

Describes characteristics of groups of people or other phenomena.
Often uses questionnaires, interviews, and/or direct observation.
Usually uses descriptive statistics; may use correlation statistics.

common types of descriptive research

Developmental research (investigates patterns of growth or change; includes the natural history of a condition)
Normative research (establishes norms for specific variables)
Qualitative research (uses various methods, such as interviews, review of documents, and/or observation, to describe an experience from the point of view of the participants)
Evaluation research (assesses programs or policies)

exploratory research characteristics

Identifies relationships between variables
Usually uses correlation statistics 

common types of exploratory research

Cohort and case-control studies
Methodological studies (e.g., reliability and validity studies)

integrative research characteristics

Rigorously integrates findings from more than one study on the same topic.
Statistical methods vary.
If the studies are soundly designed and conducted, these form the ‘backbone’ of evidence-based practice.

common types of integrative research

Evidence-based clinical guidelines
Meta-analyses
Systematic reviews

peer review definition

A process used to check the quality and importance of research studies. It aims to provide a wider check on the quality and interpretation of a study by having other experts in the field review the research and conclusions


Clinicians need to be able to answer three questions about the articles that they read

1. Are the design and results of the study valid?
2. What are the results (clinical bottom line)?
3. Are the results relevant to my clinical question?

Three explanations for an observed effect in an RCT

1. The treatment had an effect
2. Chance variation between the two groups
3. Bias

sources of potential bias

Natural history of the disease
Placebo effect
Dropouts (intention-to-treat analysis)

efficacy definition

focuses on whether an intervention works under ideal circumstances (such as in a laboratory setting) and looks at whether the intervention has any impact at all


effectiveness definition

focuses on whether a treatment works when used in the real world
An effectiveness trial is done after the intervention has been shown to have a positive effect in an efficacy trial. 

systematic error (bias) characteristics

Any systematic process in the conduct of a study that causes a distortion from the truth in a predictable direction.
Captured in the validity of the inference.
Types: selection bias, information bias.

random error (chance) characteristics

Occurs because we cannot study everyone (we must sample).
Captured in the precision of the inference (e.g., the confidence interval).
Will obscure a real difference.
Reduced with larger sample sizes.

key difference between quantitative and qualitative research is

Attempts to eliminate bias by quantitative researchers, versus
explicit acknowledgement of bias by qualitative researchers.

selection bias definition

Bias that is caused by some kind of problem in the process of selecting subjects initially or  in a longitudinal study  in the process that determines which subjects drop out of the study


sources of selection bias

Inappropriate population studied
Inadequate participation
Selection of the most ‘accessible’ subjects, or of volunteers

The rule of EPSEM (Equal Probability of Selection Method)

Every case in the population has the same chance of being selected for the sample.


managing selection bias characteristics

Prevention and avoidance are key; study design is critical.
If randomization is performed correctly, then selection bias on the “front end” of the study is not possible.

prospective subject recruitment definition

Selecting subjects as they come along/present themselves to the researchers.
Prospective recruitment is preferable to retrospective recruitment (stronger internal validity).

retrospective subject recruitment characteristics

Potential subjects are identified and contacted by the researchers to participate in the study, may also involve review of patient charts to collect data


consecutive sample definition

A sample in which the subjects are chosen on a strict "first come, first chosen" basis. All individuals who are eligible should be included as they are seen (preferred type)


selective sample definition

A sample that is deliberately chosen by using a sampling plan that screens out subjects with certain characteristics and/or selects only subjects with other relevant characteristics


convenience sample definition

A sample where the patients are selected, in part or in whole, at the convenience of the researcher. The researcher makes no attempt, or only a limited attempt, to ensure that this sample is an accurate representation of some larger group or population.


sources of information bias

Subject variation
Observer variation
Deficiency of tools
Technical errors in measurement

ways to minimize information bias

Specify criteria/methodology in advance
Analyze directly according to the criteria/methodology
Reduce the number of observers
Monitor the performance of observers
Use standardized tools for measurement

natural history of a disease source of bias characteristics

The way in which a disease evolves over time, from the initial stage > more severe > some outcome (recovery, death, disability).
If an effective treatment exists for a disease process, it must ethically be provided, which interrupts/changes the natural history and progression of stages, and thus the outcome/research findings.

placebo effect definition

The measurable, observable, or felt improvement in health not attributable to actual treatment


3 types of blinding

Unblinded: everyone knows the treatment.
Single-blinded: either the researcher or the patient does not know the treatment.
Double-blinded: neither the researcher nor the patient knows the treatment.

Intention to Treat definition

An analysis that is conducted when people drop out or switch groups (you need at least 80% of your original subjects in their original groups to maintain validity).


intention to treat characteristics

Subjects’ data are analyzed according to the group to which they were originally assigned (preserves randomization).
Assume the worse outcome if data are not available (the most conservative approach; other methods exist).
You can analyze the data omitting people who dropped out, then analyze with ITT, and compare the results.

NNT (number needed to treat) characteristics

Statistical significance versus clinical significance.
The number of people who need to be treated to prevent one additional bad outcome (to reduce the expected number of cases of a defined endpoint by one): the NNT for one person to benefit.
There is no “good” or “bad” NNT; the numbers simply give an idea of what to expect from an intervention.
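The arithmetic is just the reciprocal of the absolute risk reduction; the event rates below are hypothetical.

```python
# Sketch: NNT from hypothetical event rates in a controlled trial.
control_event_rate = 0.40    # 40% of controls had the bad outcome
treatment_event_rate = 0.25  # 25% of treated patients had it

arr = control_event_rate - treatment_event_rate  # absolute risk reduction = 0.15
nnt = 1 / arr                                    # ≈ 6.7
print(nnt)  # treat roughly 7 patients to prevent one additional bad outcome
```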

Ideal NNT

The ideal NNT is 1, where everyone improves with the treatment and no one improves with the control.
NNTs of 2–5 indicate effective therapies; the higher the NNT, the less effective the treatment.

Relative risk characteristics

A relative risk of 1 means there is no difference in risk between the two groups.
RR < 1 means the event is less likely to occur in the experimental group than in the control group (e.g., RR = .80, or 80%).
RR > 1 means the event is more likely to occur in the experimental group than in the control group (e.g., RR = 1.25, or 125%).

odds ratio of 1 means

Odds ratio of 1 implies that the event is equally likely in both groups


odds ratio vs. relative risk

Both compare the likelihood of an event between two groups.
Relative risk: a ratio of probabilities (e.g., 83%).
Odds ratio: a ratio of odds (e.g., 5:1).
They are usually comparable in magnitude when the disease studied is rare (e.g., most cancers).
The odds ratio can overestimate and magnify risk, especially when the disease is more common (e.g., hypertension); relative risk should be used instead.
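The divergence between the two measures can be sketched on a hypothetical 2×2 table where the event is common enough (10–20%) that the OR noticeably overstates the RR:

```python
# Sketch: relative risk vs. odds ratio from a hypothetical 2x2 table
# (100 people per group; event rates chosen so the event is NOT rare).
exposed = [20, 80]    # event / no event in the exposed group
unexposed = [10, 90]  # event / no event in the unexposed group

risk_exposed = 20 / 100    # probability of the event if exposed
risk_unexposed = 10 / 100
rr = risk_exposed / risk_unexposed  # relative risk = 2.0

odds_exposed = 20 / 80     # odds of the event if exposed
odds_unexposed = 10 / 90
odds_ratio = odds_exposed / odds_unexposed  # 2.25, larger than the RR
# With a 10-20% event rate the event is not rare, so the OR (2.25)
# overstates the RR (2.0), as the card warns.
```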

Role of Single Study Experimental Designs (RCTs) for Evidence-based Practice

Examines pre- versus post-treatment performance within a small sample.
Reveals a causal relationship between the IV and DV.
Uses repeated and reliable measurement, with within- and between-subject comparisons, to control for major threats to internal validity.
Requires systematic replication to enhance external validity.
Basis for determining treatment efficacy; used to establish empirically supported treatments.

Age of an article is most important for

Clinical practice guidelines and systematic reviews.
Cochrane Collaboration (2011) policy is that systematic reviews are updated every two years.
Age is less important for RCTs, cohort, case-control, case series, case report, and qualitative studies.

Systematic reviews vs. literature reviews

Many journals publish literature reviews, which are much less extensive/intensive than a systematic review. They are collections of some articles on a particular topic


Literature reviews characteristics

Can be useful, but they often lack breadth and depth.
Can be biased to reflect the authors’ personal beliefs; they are essentially expert opinion and, as such, sit at the bottom of the evidence pyramid (Level 5, Grade D recommendation).

A meta-analysis is similar to a simple cross-sectional study, in which

the subjects are individual studies rather than individual people.


A review of literature is a meta-analytic review only if it includes:

A quantitative estimation of the magnitude of the effect and its uncertainty (confidence limits).


Why is meta-analysis important?

The aim of research is to estimate the magnitude of an effect with adequate precision.
Each study produces a different estimate of the magnitude; meta-analysis combines the effects from all studies to give an overall mean effect and other important statistics.

4 steps of meta-analysis

1. Identify your studies
2. Determine eligibility of studies (inclusion: which ones to keep; exclusion: which ones to throw out)
3. Abstract data from the studies
4. Analyze the data in the studies statistically

Examples of Inclusion Criteria for a Systematic Review

Published in a peer-reviewed journal?
Experienced researchers?
Research funded by an impartial agency?
Study performed by impartial researchers?
Subjects selected randomly from a population?
Subjects assigned randomly to treatments?
High proportion of subjects entered and/or finished the study?
Subjects blind to treatment?
Data gatherers blind to treatment?
Analysis performed blind?

Main outcome of a meta-analysis

The main outcome is the overall magnitude of the effect...
…and how it differs between subjects, protocols, researchers 

magnitude of the effect in meta-analysis characteristics

It's not a simple average of the magnitude in all the studies.
Meta-analysis gives more weight to studies with more precise estimates.
Other things being equal, this weighting is equivalent to weighting the effect in each study by the study's sample size.

The weighting factor for each effect (from each individual study) is calculated using one or more of the following:

The confidence interval or limits
The test statistic (t, chi-squared, F)
The p-value
For controlled trials, can also use:
Standard deviations (SDs) of change scores
Post-test SDs (but this almost always gives much larger error variance)
Sample size

A meta-analysis reflects only

Reflects only what's published.
Statistically significant effects are more likely to get published (“publication bias”), so published effects are biased high (toward more positive outcomes).
Most meta-analytic software evaluates for publication bias.

Clinical practice guidelines are considered obsolete after

5.8 years


Generic Outcome Measures for Meta-Analysis characteristics

You can combine effects from different studies only when they are expressed in the same units.
In most meta-analyses, the effects are converted to a generic dimensionless measure.

Main Outcome Measures for Meta-Analysis

Standardized difference or change in the mean (Cohen's d)
Percent or factor difference or change in the mean
Correlation coefficient
Relative frequency (relative risk, odds ratio)

what is the cochrane collaboration?

A good source of meta-analytic wisdom.
An international nonprofit academic group specializing in meta-analyses of healthcare interventions.

8 steps in a systematic review

1. Formulate review questions
2. Define inclusion and exclusion criteria
3. Locate studies
4. Select studies
5. Assess study quality
6. Extract data
7. Analyze and present results
8. Interpret results

Components of a Good Systematic Review Question

Population
Interventions: influence the breadth of implications; combined treatments?
Outcomes: breadth of implications

In highquality systematic reviews, inclusion and exclusion criteria need to be

Rigorously and transparently reported.
Criteria should be set before the fact, to avoid changing them as the review progresses (or studies may be included on the basis of their results).
Making decisions explicit enables them to be justified.

Selection of studies systematic review characteristics

Apply an inclusion checklist.
At least two raters, each blinded to the other; a third rater is used to break ties (more than three raters may be used, but this is rarer).
Conduct inter-rater agreement on study inclusion/exclusion.
Maintain a log of rejected trials.

Most important design criteria for a study relate to

Its internal validity.
The extent to which an experiment rules out alternative explanations of the results (i.e., other variables are well controlled).
The degree to which a researcher can be confident that the independent variable is what changed behavior, not extraneous variables.

coding manual for systematic review characteristics

Specifies and describes what data should be extracted.
Pilot-tested with several articles and revised accordingly before actual use.

coding sheet for systematic review definition

Reviewer reads through a study and fills out the data extraction sheet


Purpose of coding manual and coding sheet for a systematic review

Helps to minimize error and bias in the judgments of the coding process


About your clinical bottom line of a systematic review...

Remember, even if the systematic review says that the treatment is not effective, this will still inform your practice.
Only when the systematic review is poorly designed (which includes poor external validity and/or internal validity) should you not use it to inform your practice.
Note that you can have a good SR that is composed of poor studies.

Content validity definition

Does the measure cover the full range of the concept’s meaning?


Criterion validity definition

Can scores obtained with one measure be accurately compared to those obtained using another (more established) measure? (Two types: Concurrent and predictive)


Construct validity definition

A measure should fit well with other measures of similar theoretical concepts.
