201 Cards in this Set
Accuracy
how close the measurement is to the true value. Declines as measured values approach the detection limit of the test
Reliability
consistency on repeat measurements
Precision
this relates to the limit of detection of the test. The detection limit is the lowest value that can be measured by a test.
Validity
whether the test measures what it purports to measure
Screening
Secondary prevention: we target these individuals to identify disease sooner, before symptoms arise
Surveillance
Population-level detection; an early warning system that alerts us to a problem. Triggers investigation and probably action. Primary prevention in an indirect sense.
Sensitivity
a/(a+c)
Specificity
d/(b+d)
Positive predictive value
a/(a+b)
Negative predictive value
d/(c+d)
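A minimal Python sketch (not part of the deck, hypothetical counts) applying these 2x2-table formulas, where a = true positives, b = false positives, c = false negatives, and d = true negatives:

```python
# Hypothetical 2x2 screening table
a, b = 90, 40    # a = true positives,  b = false positives
c, d = 10, 860   # c = false negatives, d = true negatives

sensitivity = a / (a + c)   # probability the test is positive given disease
specificity = d / (b + d)   # probability the test is negative given no disease
ppv = a / (a + b)           # probability of disease given a positive test
npv = d / (c + d)           # probability of no disease given a negative test

print(f"Sensitivity {sensitivity:.2f}, Specificity {specificity:.2f}, "
      f"PPV {ppv:.2f}, NPV {npv:.2f}")
```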
T/F Sensitivity and Specificity are inversely related
True
T/F In screening tests we’d rather have more false positives
True
In diagnostic tests we’d rather have more false_____
negatives
Receiver Operator Characteristics (ROC)
Systematic method to determine the best single cutoff value when a test yields continuous results. Plot sensitivity (true positive rate) on the y-axis against 1 - specificity (false positive rate) on the x-axis.
What is the optimal position in the ROC and why?
The upper left-hand corner, because it corresponds to 100% sensitivity and a 0% false positive rate.
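A small illustrative sketch (hypothetical test values, not from the deck) that computes the ROC points, sensitivity versus 1 - specificity, across candidate cutoffs of a continuous test:

```python
# Hypothetical continuous test values for diseased and non-diseased subjects
diseased     = [8.2, 7.5, 9.1, 6.8, 8.9, 7.9]
non_diseased = [5.1, 6.2, 4.8, 6.9, 5.5, 6.0]

for cutoff in sorted(set(diseased + non_diseased)):
    # Call the test "positive" when the value is at or above the cutoff
    tp = sum(x >= cutoff for x in diseased)
    fp = sum(x >= cutoff for x in non_diseased)
    tpr = tp / len(diseased)         # sensitivity (y-axis)
    fpr = fp / len(non_diseased)     # 1 - specificity (x-axis)
    print(f"cutoff {cutoff:4.1f}: TPR {tpr:.2f}, FPR {fpr:.2f}")
```

The cutoff whose point lies closest to the upper left corner offers the best trade-off.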
As prevalence goes up what happens to positive predictive value and negative predictive value?
Positive predictive value goes up and negative predictive value goes down
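A short sketch (illustrative numbers only, assuming sensitivity 0.90 and specificity 0.95 stay fixed) showing PPV rising and NPV falling as prevalence increases:

```python
sens, spec = 0.90, 0.95  # assumed fixed test characteristics

def ppv_npv(prevalence):
    tp = sens * prevalence              # true positives per person screened
    fp = (1 - spec) * (1 - prevalence)  # false positives
    fn = (1 - sens) * prevalence        # false negatives
    tn = spec * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

for prev in (0.01, 0.10, 0.30):
    ppv, npv = ppv_npv(prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.2f}, NPV {npv:.3f}")
```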
Kappa (κ) Statistic
A useful measure of inter- or intra-observer agreement. Calculates agreement beyond chance alone. κ > 75% indicates excellent agreement; κ < 40% indicates poor agreement.
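A minimal sketch (hypothetical counts) of Cohen's kappa for two observers rating the same subjects positive or negative:

```python
# Hypothetical agreement table (rows = observer 1, columns = observer 2)
a, b = 40, 10   # a: both positive,                 b: obs1 positive / obs2 negative
c, d = 5, 45    # c: obs1 negative / obs2 positive, d: both negative
n = a + b + c + d

p_observed = (a + d) / n
# Agreement expected by chance alone, from the marginal totals
p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"Observed {p_observed:.2f}, expected {p_expected:.2f}, kappa {kappa:.2f}")
```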
Cumulative Incidence
New cases / population at risk, over a specified period
Incidence Density
cases/ person-years
Incidence
New cases / period of time
Prevalence
number of people with disease/ Population at risk at a point in time
Period Prevalence
number of people with disease over a period of time / Population at risk at mid-period
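A tiny sketch (made-up numbers) contrasting these measures for a hypothetical population followed for one year:

```python
population_at_risk = 10_000   # disease-free at the start of the year
new_cases = 150               # diagnosed during the year
person_years = 9_900          # total follow-up time contributed
existing_cases = 400          # people living with the disease at one point in time
total_population = 10_400

cumulative_incidence = new_cases / population_at_risk   # proportion, per year
incidence_density = new_cases / person_years            # rate, per person-year
point_prevalence = existing_cases / total_population    # proportion at a point in time

print(f"Cumulative incidence {cumulative_incidence:.3f}/yr, "
      f"incidence density {incidence_density:.4f}/person-year, "
      f"prevalence {point_prevalence:.3f}")
```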
What are factors increasing both incidence and prevalence?
Greater case ascertainment, enhanced diagnostic methods, and more liberal criteria in disease definition.
Factors increasing prevalence only:
Improved (non-curative) treatment, out-migration of healthy people from the population, and in-migration of people with disease into the population.
What was Last’s description of epidemiology?
Studying differences, finding out why, and applying the findings to help alleviate health problems.
Miasmatic concept of health
Invisible "nasty airs" thought to settle in low-lying areas and cause disease.
“shoe leather” epidemiology
get out of the office and see what’s actually going on
What are the three factors that contribute to disease in a host?
Agent, environment, vector
Who was the first person to receive an Abiocor transplant and was this cost effective?
Robert Tools. No, because it did not improve his quality of life; he remained in the hospital.
Cost-benefit analysis
Measure cost and savings.
Cost-effectiveness analysis
Measure cost for a specified outcome
Cost-utility analysis
Use QALY or DALY’s as measures of outcome
Primary Prevention
stop disease from happening in the first place. Health promotion, prevent onset of disease, always the preferred method
Secondary Prevention
identify disease early to improve outcome
Tertiary Prevention
Does not alter the full manifestation of disease or injury; attempts to reduce long-term effects.
Saal brothers, what did they “invent”
They invented a device that purportedly reduced lower back pain. It wasn’t really that good.
Cochrane Controlled Trials Register (CCTR)
Bibliographic database of controlled trials from systematic search of journals and other sources. Published or underway. Searchable
Death certificates are governed by state statutes. How is reporting handled?
Variations exist in reporting requirements and specific terminology.
Immediate (or Principal) Cause
Final disease, injury, or complication resulting in death.
Underlying (or Antecedent or Intermediate) Cause
the disease or injury that initiated the chain of events that led directly and inevitably to death.
Underlying (or Contributory) Causes
The condition present before and leading to the intermediate or immediate cause of death.
Mechanism of Death
How death occurred, NOT a disease or injury.
Neonatal mortality ratio
Deaths within 4 weeks postnatal
Perinatal mortality ratio
Deaths 28 weeks gestation to 1 week postnatal
Direct Standardization
Start with death rates in Town A and standardize them to Town B's age structure.
Multiply each stratum's death rate in Town A by the number of people in that stratum in Town B.
The result is the death rate expected given Town A's death rates but Town B's age structure.
Simpson's paradox
Disappearance (reversal) of an observed difference when the data are stratified and standardized
Indirect Standardization
We can also perform the same calculation the other way around:
Start with death rates from the standard population and multiply by the age structure of the study population.
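A compact sketch (invented stratum data) of direct standardization, plus the expected-deaths step used in indirect standardization to form an SMR:

```python
# Hypothetical age-stratified death rates and populations
rate_town_a = [0.001, 0.010, 0.060]     # Town A (study population) rates by age stratum
pop_town_b = [50_000, 30_000, 20_000]   # Town B (standard population) age structure

# Direct standardization: apply Town A's rates to Town B's age structure
expected_deaths_b = sum(r * n for r, n in zip(rate_town_a, pop_town_b))
direct_rate = expected_deaths_b / sum(pop_town_b)
print(f"Town A rate standardized to Town B's age structure: {direct_rate:.4f}")

# Indirect standardization: apply standard (Town B) rates to Town A's age structure,
# then compare observed deaths in Town A with the expected count (SMR = observed / expected)
rate_town_b = [0.0008, 0.008, 0.050]
pop_town_a = [20_000, 25_000, 35_000]
observed_deaths_a = 2_400
expected_deaths_a = sum(r * n for r, n in zip(rate_town_b, pop_town_a))
print(f"SMR for Town A: {observed_deaths_a / expected_deaths_a:.2f}")
```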
Mortality Country Group I
developing countries with high mortality
Mortality Country Group II
developing countries with low mortality
Mortality Country Group III
developed / industrialised countries
Survival Analysis
Commonly used technique in medical statistics
Take a group of people and follow them up over a period of time.
We can do this from birth to death, using fixed time intervals.
Survival Analysis
When do we use a Wilcoxon Test?
If mortality changes over time
Survival Analysis
When do we use a log rank test?
If mortality is constant
Survival Analysis
What is a regression curve of the survival analysis called?
Cox or proportional hazards model
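The survival curve itself is usually estimated with the Kaplan-Meier method (not named in the deck); a minimal sketch with hypothetical follow-up times, where 1 = death and 0 = censored:

```python
from itertools import groupby

# Hypothetical follow-up data: (months, event) with event 1 = death, 0 = censored
follow_up = sorted([(2, 1), (3, 0), (5, 1), (5, 1), (8, 0), (11, 1), (12, 0)])

survival = 1.0
at_risk = len(follow_up)
for time, group in groupby(follow_up, key=lambda record: record[0]):
    records = list(group)
    deaths = sum(event for _, event in records)
    if deaths:
        survival *= (at_risk - deaths) / at_risk   # Kaplan-Meier step at this event time
        print(f"t = {time:2d} months: S(t) = {survival:.3f}")
    at_risk -= len(records)   # deaths and censored subjects both leave the risk set
```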
Ecological Fallacy
The error we make when we have only group-level data, with no information on individuals, yet draw conclusions about individuals from the population-level information.
Causation
Causation refers to a relationship in which one condition precedes and must be present in order for another outcome to occur
Effect Modifier
A third variable which is part of the causal chain and which influences the frequency of disease
Inductive reasoning:
going from specific observations to generate a general principle.
Deductive reasoning:
applying general principles to a specific situation.
Confounder
A third variable that is independently associated with both the exposure and the outcome
Bias
A systematic error involving one group in the study
Disease Cluster
“An unusual number, real or perceived, of health events (for example, reports of cancer) grouped together in time and location." - CDC
Induction period:
period of time between appearance of causal action and onset of disease
Latency:
period of time between disease occurrence and detection
Hill’s Criteria
Temporal relationship
Strength of the association
Dose-response relationship
Replication of the findings
Biological plausibility
Consideration of alternate explanations
Cessation of exposure
Consistency with other knowledge
Specificity of the association
Possible
> 0% likelihood
Probable
> 50% likelihood
Descriptive Epidemiology
Describes general characteristics of disease
Comparisons to exposure often implicit or indirect
Analytic Epidemiology
Comparison of disease to exposure is direct and explicit
Case-control, cohort and interventional studies
Case report =
one case of the disease

Careful description of disease and circumstances
May arise from routine surveillance
Case series =
more than one case of the same disease

Careful description of disease and circumstances
May arise from routine surveillance
Correlational Studies
Compare disease frequency in relation to another factor at a population level
May compare disease rates based on geography, time, occupation, etc.
Often called ‘ecological studies’ as in Gordis
Cross-Sectional Studies
Measure number of cases of disease among individuals at one point in time (prevalent cases) in a defined population

Often called prevalence study
Surveys which frequently use interviews or questionnaires
Prevalence studies and surveys
Cross-sectional studies
Correlation vs cross-sectional in terms of data
Correlation = aggregate data
Cross-sectional = individual data
NHANES
Sample of 5,000 people over 12 months continuously since 1999
2 parts:
Home interview
Health exam
NHANES findings
Growth charts
Blood lead and gasoline/solder
Prevalence of obesity
Undiagnosed type 2 diabetes
T/F A key difference between cross-sectional studies and correlational studies is that we have individual data in cross-sectional studies
True. We have individual data in cross-sectional studies
Ecological or Descriptive Epidemiology
Describes general characteristics of disease
Comparisons to exposure often implicit or indirect
Analytical Epidemiology
Comparison of disease to exposure is more direct and explicit
Two types: 1. Observational
2. Interventional
The Framingham Study was this kind of study:
Prospective Cohort
Cohort Study
Individuals are studied who are disease free
Establish exposure status
Follow forward in time to determine disease status
May be either prospective or retrospective
For a cohort study, the relative risk:
Incidence of disease in exposed/ Incidence of disease in unexposed
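A small sketch (hypothetical cohort counts) of this relative risk calculation, with the exposed group in the first row of the 2x2 table:

```python
# Hypothetical cohort 2x2 table
a, b = 60, 940   # exposed:   a developed disease, b did not
c, d = 30, 970   # unexposed: c developed disease, d did not

incidence_exposed = a / (a + b)
incidence_unexposed = c / (c + d)
relative_risk = incidence_exposed / incidence_unexposed
print(f"RR = {relative_risk:.2f}")   # RR > 1 suggests the exposure increases risk
```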
Prospective Cohort Study
Subjects have not developed disease yet when you perform the study
Must follow them into the future to see who gets disease
The Framingham Heart Study
Sample of 5,127 adult residents in Framingham, Massachusetts taken in 1948.
Standardized biennial cardiovascular examination
Daily surveillance of hospital admissions, death information and information from physicians and other sources outside the clinic.
Retrospective Cohort Study
Subjects have developed disease when you start the study
You use information on exposure status when they were disease free and follow them forward in time (but prior to the start of the study) to see who gets disease
Sources of Cohort
Can use one large general cohort that contains both exposed and unexposed and compare disease within cohort

Can use a cohort with a very high level of exposure and compare disease either to another unexposed cohort or general population

The unexposed and exposed people should resemble each other in every way other than having the exposure
What is the hierarchy of preference of studies
Clinical trial
Prospective cohort study
Retrospective cohort study
Case-control study
Cohort Studies – Disadvantages?
Costly
Retrospective – you must rely on old data that was not designed for your study
Logistical – expensive and time consuming
Case-Control Studies
Subjects are selected based on having disease under study (cases) or not having disease under study (controls)
Look back in time to see if there are any differences in exposure between the groups
Sometimes called retrospective studies
What is another name for Case-Control Studies?
Retrospective studies
Criteria for cases should be _____.
Exclusive
What are the characteristics of a control?
They must resemble cases in every way other than developing the disease under study.

Apply same exclusion criteria as cases

Appropriate selection of controls is critical step in minimizing bias and confounding
T/F Controls are required to be healthy
False
Matching
selecting the controls so that they are similar to the cases in certain characteristics
Group, or individual (matched pair)
Odds Ratio
Calculate odds that someone WILL have exposure relative to NOT having exposure
Express as a ratio of those with disease (cases) compared to those without disease (controls)
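A matching sketch (hypothetical case-control counts); because incidence cannot be measured in a case-control study, the odds ratio of exposure, equal to (a*d)/(b*c), is used instead:

```python
# Hypothetical case-control 2x2 table
a, b = 45, 20   # exposed:   a cases, b controls
c, d = 55, 80   # unexposed: c cases, d controls

odds_exposure_cases = a / c        # odds of exposure among cases
odds_exposure_controls = b / d     # odds of exposure among controls
odds_ratio = odds_exposure_cases / odds_exposure_controls   # equals (a*d)/(b*c)
print(f"OR = {odds_ratio:.2f}")
```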
Nested Case-Control Study
A case-control study conducted within a previously assembled cohort study
Synonyms: Correlational study
Ecological study
Synonyms: Cross-sectional study
Prevalence survey
Synonyms: Case-control study
Case-comparison study, case-referent study
Synonyms: Cohort study
Longitudinal study, follow-up study, incidence study
Synonyms: Prospective cohort study
Concurrent study
Synonyms: Retrospective cohort study
Historical cohort study
Synonyms: Clinical trial, intervention study
Experimental study, therapeutic trial, randomized [blinded] controlled trial
Chronological Synonyms: Case-control study
Retrospective study
Chronological Synonyms: Cohort study
Prospective study
Chronological Synonyms: Prospective cohort study
Concurrent prospective study
Chronological Synonyms: Retrospective cohort study
Non-concurrent study
What are the two major categories of Analytical studies?
Observational: Case-control and Cohort
Interventional: Clinical trials
Analytical Studies
Observational. Definition
Investigator is passive and does not influence exposure
Case-control and cohort studies
Analytical Studies
Interventional. Definition
Investigators allocate exposure and follow-up for disease
Clinical trials
What is a clinical trial?
A planned experiment of an intervention in humans to investigate:
Efficacy
Safety
What makes interventional studies the "gold standard"
Randomization and blinding make this the gold standard
Volunteer bias
Volunteers tend to be health conscious and likely to comply with treatment; they may also regard the treatment under study as the correct solution for them.
Intervention Study
Like a cohort study, we want a homogeneous group without the outcome under study
Clearly defined with explicit exclusion criteria
Must agree to participate
Volunteer bias
Reasons for stopping interventional studies.
May stop early:
if clear benefit
If clear harm

Need to develop guidelines for early termination before study is started
Independent data monitoring group
Phases of a drug trial
Preclinical Testing
Phase I – Safety and Pharmacology
Phase II – Pilot Efficacy
Phase III – Extensive Clinical Trial
Phase IV – Long Term Effects
Post-marketing surveillance
Drug Trial Phase I
Safety and Pharmacology
Drug Trial Phase II
Pilot Efficacy
Drug Trial Phase III
Extensive Clinical Trial
Drug Trial Phase IV
Long Term Effects
Pre-clinical testing
~3.5 years
Animal testing
File Investigational New Drug Application (IND) with Food and Drug Administration (FDA)
Phase I – Safety and Pharmacology
~ 1 year
First introduction into humans
Usually small number (<100) of healthy volunteers
Goal is to determine safety and mode of action
Study pharmacokinetics, route of administration
Phase II – Pilot Efficacy
~ 2 years
200-500 volunteers with disease
Usually randomized to new drug or existing treatment
Demonstrate preliminary safety and efficacy
Phase III – Extensive Clinical Trial
~ 3 years
1000 – 3000 volunteers with disease, multicenter
More complete assessment of safety with longer use and verify efficacy
File New Drug Application (NDA) with FDA
~ 2.5 years
Phase IV – Long Term Effects
After drug registration with FDA
May be required by FDA
> 3000 participants
Explore longer term effects, specific adverse outcomes
More reflective of routine use
Postmarketing surveillance
Based on reports by health care providers
MedWatch
Not systematic studies
Crossover design
Study that switches patients from treatment A to B and from B to A allowing the patients to be their own control
What are the advantages and disadvantages of Crossover design?
Advantages

Each subject acts as their own control, which reduces between-subject variability; this design needs the fewest number of participants.

Disadvantage

The effect of B may really be a carryover effect of A.
Crossover Design

How can we reduce the chance that the effect of B is really an effect of A?
Use a washout period (no treatment in the interval between A and B) to limit this disadvantage.
Factorial design
We can explore synergistic and antagonistic effects

Allows us to study more than one treatment
Who is responsible for the creation of Blinding?
Benjamin Franklin
Who did Franklin debunk in the Franklin Commission?
Franz Anton Mesmer, who claimed to "mesmerize" patients in order to cure them; this was not really the case.

The Franklin Commission found that mesmerism worked essentially because the patients wanted it to work: when the patients were blindfolded, the mesmerizing effect was gone.
Blinding is the best method to reduce:
Bias (randomization addresses confounding, and larger sample sizes address chance)
Blinding (aka Masking). Three types. what does it require?
Single – patient
Double – patient and doctor
Triple – patient, doctor, and those assessing or analyzing the outcomes
In order to blind people, you must have a placebo or sham for comparison
Placebo
an intervention designed to simulate medical therapy but not believed by the clinician or investigator to be a specific therapy for the target condition.
Nocebo Effect
Patients get worse no matter what treatment they receive, driven by negative expectations
What did Haygarth disprove? What did he invent?
He disproved the Perkins Patent Tractor (1799), a metal rod claimed to "cure" people: he used a wooden rod that looked like the tractor and showed that patients still believed they were being "cured," disproving the tractor's efficacy.

He invented the sham treatment.
Methods to deal with non-compliance
Run-in or wash-out period

All participants receive treatment OR placebo before randomization

Those who do not comply in pre-randomization phase not included in study

Monitor compliance
Pill counts
Pick high risk patients
Frequent contact with participants
Financial incentives
Testing for presence of drug
Who developed intention to treat analysis and what is it?
Sir Richard Peto. "Once randomized, always analyzed."
All patients in each group are followed up and analyzed according to the group they were assigned, even if they did not complete or comply with the assigned therapy.
Why use Intention to Treat Analysis?
1. Guards against conscious or unconscious attempts to influence the results of the study by excluding odd outcomes.

2. Preserves the baseline comparability between treatment groups achieved by randomization.

3. Reflects the way treatments will perform in the population by ignoring adherence when the data are analyzed.
Efficacy
works under Phases I-III of the drug trial
Effectiveness
works in the real world Phase IV
Efficiency
cost versus benefit. Number needed to treat helps us with this component.
The Three E’s of an Intervention
Efficacy ≡ Does it work under the ideal conditions of a RCT?
(Phases I – III)
Effectiveness ≡ Does it work in the real world?
(Phase IV)
Efficiency ≡ cost versus benefit. Number needed to treat helps us with this component.
Number Needed to Treat
NNT is the number of people you’d need to treat for the prescribed period of time in order to prevent one episode of disease.
NNT - Example

Adverse outcomes in treated group = 4% (0.04)
Adverse outcomes in placebo group = 10% (0.10)

calculate it.
Adverse outcomes in treated group = 4% (0.04)
Adverse outcomes in placebo group = 10% (0.10)
Absolute risk reduction is 0.10 – 0.04 = 0.06
1/0.06 ≈ 16.7, so round up to 17
Therefore you would need to treat 17 people to prevent one case of disease

*remember that the NNH (number needed to harm) is calculated in a similar fashion.
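The same arithmetic as a small sketch, reusing the example's rates:

```python
risk_treated, risk_placebo = 0.04, 0.10

arr = risk_placebo - risk_treated   # absolute risk reduction
nnt = 1 / arr                       # number needed to treat (round up in practice)
print(f"ARR = {arr:.2f}, NNT = {nnt:.1f} -> treat about 17 people to prevent one case")
```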
Meta-analysis

what does box size mean?

How does CI change when sample size increases?

What do the diamonds represent?
Box size reflects the study's sample size (weight): a bigger box means a larger sample.

As sample size increases, the confidence interval narrows.

The diamond represents the combined (pooled) odds ratio from the meta-analysis.
Steps in Performing Meta-Analysis
Identify studies
Evaluate studies
Abstract data
Perform statistical analysis
Publication bias
“Tendency of editors (and authors) to publish articles containing positive findings, especially ‘new’ results, in contrast to reports that do not yield ‘significant’ associations.” (Last)
What indicates a publication bias as seen on a funnel plot?
Asymmetry of the funnel plot.
How do we reduce publication bias when doing a meta-analysis?
Researchers tend to report positive results; these must be balanced with negative (null) results, or with what the unpublished studies would likely show.
Meta-analysis

Fixed effects model
conclusions derived in the meta-analysis are valid only for the studies included
Meta-analysis

Random effects model
assumes that the studies included in the meta-analysis belong to a random sample of a universe of such studies
Meta-analysis

Test of homogeneity

what statistic is used?
What does homogeneity mean?
What test is this similar to?
Q statistic
Null hypothesis: all studies are homogeneous (effect sizes are equal)
Analogous to a chi-square test
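A minimal sketch (made-up study results) of fixed-effect inverse-variance pooling and Cochran's Q; the per-study log odds ratios and standard errors are hypothetical:

```python
# Hypothetical per-study log odds ratios and their standard errors
log_or = [0.40, 0.25, 0.55, 0.10]
se = [0.20, 0.15, 0.30, 0.25]

weights = [1 / s**2 for s in se]   # inverse-variance weights
pooled = sum(w * y for w, y in zip(weights, log_or)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate,
# compared against a chi-square with k - 1 degrees of freedom
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_or))
print(f"Pooled log OR = {pooled:.3f}, Q = {q:.2f} on {len(log_or) - 1} df")
```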
Meta-analysis
Sensitivity Analysis

compare studies based on ____
compare results based on _____
Compare studies based on quality
Compare results based on model choice
Factors
Conditions that can possibly affect the outcome and that might be controlled by an investigator
Outcome
Endpoints or Responses
What kind of study is shown below? Example: Assign one of several dosages (placebo, 10 mg, 20 mg) to each subject.
Intervention/experimental study
Name some variable types
Binary, nominal, ordinal, continuous, ratio
Binary variable
Also known as a dichotomous variable; examples include gender or the presence/absence of something
Nominal scale
Naming; simply a classification scheme
Ordinal variable
Rank-ordered scale, such as 1-10 or strongly agree -> strongly disagree; unlike continuous variables, arithmetic does not make sense
Continuous variable
A variable that exists over an interval with EQUAL INTERVALS (e.g., IQ scores, degrees centigrade). Order is implied numerically and arithmetic does make sense
Ratio
continuous scale + a true zero (for instance distance or Kelvin)
Ranked variables
Are ordinal
Discrete Variable
one for which there is a finite number of potential values which the variable can assume between any two points on the scale. (1,2, 3, 4, etc..) It’s a count!!
Continuous variable
one which theoretically can assume an infinite number of values between any two points on the scale. (1.008, 1.009, 1.01, etc..)
Three basic ways to summarize data
Distributions, central tendency, and dispersion
Distributions and give an example
The distribution of a variable is a graphical or mathematical description of a variable that captures “everything” Example: Range
Real Frequency Distribution. Describe the y and x axis
The y-axis (vertical) usually describes the frequency with which the corresponding values on the x-axis (horizontal) are expected to be found
What is the purpose of a normal distribution? What assumption are we making?
The normal distribution can help us make inferences about real distributions, which we assume are samples from a population with a normal distribution (Chapter 10)
Quantitative variable
things with numbers (temperature)
Qualitative
descriptive (such as color)
Latent variable
variables that are not directly observable (such as motivation and intelligence)
What are the three measures of central tendency?
Mean – the average
Mode – the most frequent score
Median – the middle number
What are the different measures of dispersion?
percentiles, mean deviation, variance, and standard deviation
Choosing a statistical test. Nominal
Chi square
Choosing a statistical test. Nominal (dichotomous)
t-test
Choosing a statistical test Nominal (multiple variables)
ANOVA
Parameters
Characteristics of populations. (two examples are the mean and standard deviation)
Dependent variables
also called endpoints. This changes according to changes in factors/independent variables
Independent variable
also called a factor. The dependent variable changes based off of this.
T/F It is reasonable to obtain the entire population data to obtain parameters
False. It is not practical to get population data!!! So, parameters are usually unknown and we often need to estimate them from samples
Statistics
Values that we calculate from samples. A mean (μ) of a population is a parameter, while a mean (x̄) of a sample is a "statistic."
What is the purpose of calculating statistics?
We calculate statistics from samples to estimate the parameter of populations.
Mode
Measure of central tendency that describes: Most frequent score in the distribution
Median. describe it, where is it's use appropriate
Measure of central tendency. The point at which 50% of the scores fall below (when listed in numerical order). Appropriate to use when the data are ordinal or ratio.
Mean
Measure of central tendency The arithmetic average of scores in a distribution. Mean is the “average”
What determines the direction of skew? Describe a left and right skew and what this means in terms of the mean and median
The direction of the tail determines skewness. Negative (left) skew: mean < median. Positive (right) skew: mean > median.
In terms of mean, median, and mode:

Right/+ skew: mean > median > mode

Left/ - skew: mean < median < mode
T/F Continuous variables assume a finite number of intermediate values
False - it assumes an infinite number of intermediate values
Overall Range. Description how do we obtain it
It is a measure of dispersion, or the "spread" of the data. Overall range = biggest value - smallest value. The range is susceptible to the effect of outliers.
Sample variance, s^2
The average squared difference between the values of a variable and the sample mean.
Standard Deviation
An estimate of the average variability (spread) of a set of data measured in the same units of measurement as the original data. It is the square root of the variance.
Estimate
Estimates represent our best guesses about the value of a parameter.
Sampling error. What is it? Does a individual sample accurately estimate the population mean?
Sampling error = the discrepancy between a sample statistic and the population parameter. Each sample has its own mean, which does not provide a perfectly accurate representation of its population. No: individual sample means tend to under- or overestimate the population mean.
Compute the 95% Confidence Interval for the mean based on the following information: sample mean = 2994, sample standard deviation = 800, n = 78, so the standard error is 800 / √78 ≈ 90.5. What does this mean?
~95% CI for the mean is 2994 ± 2(90.5) = 2994 ± 181. We are ~95% sure that the actual mean of the larger population is between 2813 and 3175.
Do we want a small or large Standard Error? Why?
We want a small SE = More confidence in our mean.
Standard Error (general idea, relationship between sample mean and population mean, how to calculate).
For each individual sample, you can measure the error (distance) between the sample mean and the population mean. The standard error provides a way to identify the "average" or standard distance between the sample mean and the population mean. The standard error is calculated by dividing the sample standard deviation by the square root of the sample size.
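A short sketch reusing the earlier example's numbers (with n = 78 assumed from the 800/√78 step) for the standard error and the approximate 95% confidence interval:

```python
import math

sample_mean = 2994
sample_sd = 800
n = 78

standard_error = sample_sd / math.sqrt(n)   # typical distance of sample means from the population mean
lower = sample_mean - 2 * standard_error    # ~95% CI using the "mean +/- 2 SE" rule of thumb
upper = sample_mean + 2 * standard_error
print(f"SE = {standard_error:.1f}, 95% CI roughly ({lower:.0f}, {upper:.0f})")
```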
What is typically used mean or median?
Median, because it is not as affected by outliers as the mean. It is more robust
What portion of the population (normal distribution) falls within one standard deviation? How about 2?
1 SD = 68.2% of the population 2 SD = 95.4% of the population