36 Cards in this Set

  • Front
  • Back

Sensitivity

= a / (a+c)


= true positives / all actual positives

Specificity

= d / (b+d)


= true negatives / all actual negatives

Positive Predictive Value

= a / (a+b)


= true positives / all who tested positive

Negative Predictive Value

= d / (c+d)


= true negatives / all who tested negative

Diagnostic accuracy

= Percent agreement = [(a+d) / (a+b+c+d)] * 100%


= percentage of true results out of all results
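
A minimal Python sketch tying the five preceding cards together, assuming the usual 2x2 layout where a = true positives, b = false positives, c = false negatives, and d = true negatives; the counts are purely illustrative, not from the cards:

    # Illustrative 2x2 counts: a = TP, b = FP, c = FN, d = TN
    a, b, c, d = 90, 30, 10, 170

    sensitivity = a / (a + c)              # true positives / all actual positives
    specificity = d / (b + d)              # true negatives / all actual negatives
    ppv = a / (a + b)                      # true positives / all who tested positive
    npv = d / (c + d)                      # true negatives / all who tested negative
    accuracy = (a + d) / (a + b + c + d)   # percent agreement (multiply by 100 for %)

    print(sensitivity, specificity, ppv, npv, accuracy)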

Chamberlain's percent positive agreement

= a / (a+b+c)





  • In the context of reliability, useful when most subjects are rated negative, which would artificially inflate simple percent agreement
  • In the context of accuracy, useful when the population has many true negatives, which would artificially inflate diagnostic accuracy

OARP

= a/b


= true positives / false positives




Odds of being Affected Given a Positive Result

DOR

= ad/bc


= true results / false results, i.e. (true positives * true negatives) / (false positives * false negatives)




Diagnostic Odds Ratio: summary measure of test effectiveness
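
A short continuation using the same hypothetical counts as the sketch above, checking that the DOR equals the odds of being affected given a positive result (OARP) divided by the corresponding odds given a negative result:

    a, b, c, d = 90, 30, 10, 170       # same illustrative 2x2 counts as above
    oarp = a / b                       # odds of being affected given a positive result
    oanr = c / d                       # odds of being affected given a negative result
    dor = (a * d) / (b * c)            # diagnostic odds ratio
    assert abs(dor - oarp / oanr) < 1e-9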

Youden's J

= sensitivity + specificity - 1

PSI

= PPV + NPV - 1




Predictive Summary Index
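
Carrying the same illustrative counts forward, Youden's J and the PSI are one-liners once the four basic measures are in hand:

    a, b, c, d = 90, 30, 10, 170             # same illustrative 2x2 counts
    sens, spec = a / (a + c), d / (b + d)
    ppv, npv = a / (a + b), d / (c + d)

    j = sens + spec - 1                      # Youden's J
    psi = ppv + npv - 1                      # Predictive Summary Index
    print(j, psi)                            # 0.75 and ~0.69 for these counts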

Likelihood ratio for a positive test (result)

= sensitivity / (1-specificity)

Likelihood ratio for a negative test (result)

= (1-sensitivity) / specificity
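
Both likelihood ratios follow directly from sensitivity and specificity; a minimal sketch with the same illustrative counts as before:

    a, b, c, d = 90, 30, 10, 170             # same illustrative 2x2 counts
    sens, spec = a / (a + c), d / (b + d)

    lr_pos = sens / (1 - spec)               # LR for a positive test result
    lr_neg = (1 - sens) / spec               # LR for a negative test result
    print(lr_pos, lr_neg)                    # ~6.0 and ~0.12 here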

Kappa statistic

= (O-C) / (1-C)



where O = (a+d) / (a+b+c+d)


and C = [(a+c)(a+b) + (b+d)(c+d)] / [(a+b+c+d)^2]



------------------------


Alternatively, build an expected-counts table from the products of the margins (row total * column total / N for each cell) and calculate C from it the same way O is calculated from the observed table; this may be easier.
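
A self-contained sketch of the kappa calculation from this card, again with illustrative counts; O and C follow the formulas given above:

    a, b, c, d = 90, 30, 10, 170                            # illustrative 2x2 counts
    n = a + b + c + d
    O = (a + d) / n                                         # observed agreement
    C = ((a + c) * (a + b) + (b + d) * (c + d)) / n ** 2    # chance-expected agreement
    kappa = (O - C) / (1 - C)
    print(kappa)                                            # ~0.71 for these counts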

Post-test Odds

= pre-test odds * LR

Pre-test Odds

If you have no other information about a subject's health status before administering a test, prevalence serves as the subject's pre-test probability of disease; the pre-test odds are then pre-test probability / (1 - pre-test probability).
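
A hedged numeric walk-through of the two cards above, assuming an illustrative prevalence of 10% and the LR+ of about 6 from the earlier sketch; the last line converts post-test odds back to a probability:

    prevalence = 0.10                                    # illustrative pre-test probability
    lr_pos = 6.0                                         # illustrative LR+ (e.g. sens 0.90, spec 0.85)

    pretest_odds = prevalence / (1 - prevalence)         # probability -> odds (~0.11)
    posttest_odds = pretest_odds * lr_pos                # post-test odds = pre-test odds * LR (~0.67)
    posttest_prob = posttest_odds / (1 + posttest_odds)  # odds -> probability (~0.40)
    print(posttest_prob)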

Interpret Kappa

  • Numerator: Observed agreement beyond chance
  • Denominator: Maximum agreement possible beyond chance
  • The kappa statistic quantifies how much better the level of agreement is than that which results from chance alone, as a proportion of the maximal agreement beyond chance.
  • Scale of -1 to 1, with 1 generally being the best and -1 the worst
  • Good Kappa is > 0.7

Interpret Youden's J


  • This is a summary measure of a test's validity: how closely the test results (T+/T-) correspond to the truth (D+/D-).
  • Its theoretical range is -1 to 1, though the practical range for a well-functioning test is 0 to 1.
  • Youden's J is negative when a test's results are misleading, i.e. when they are negatively associated with the truth.
  • Assigns equal weight to sensitivity and specificity.
  • Thus, in settings where you care more about one than the other, use the weighted version of Youden's J.

Youden's J: 1/J

= NND = number of diseased patients needed to examine in order to correctly diagnose one person with the disease.

Interpret PPV

If the test result is positive in the patient, it is the probability the patient actually has the disease.

Interpret NPV

If the test result is negative in the patient, it is the probability the patient actually does not have the disease.

Interpret PSI


  • Summary measure of the predictability of disease status for a diagnostic test.
  • Specifically, PSI is the net gain in certainty of a test's predictability of disease status.
  • The pre-test (prior) probability of disease is the prevalence.
  • A gain in certainty that disease is present occurs when the posterior probability of disease presence (= PPV by Bayes's Rule) exceeds the prior probability of disease.
  • A gain in certainty that disease is absent occurs when the posterior probability of no disease (= NPV by Bayes's Rule) exceeds the prior probability of no disease (1 - prevalence).

1/PSI

= NNP = number of test-positive patients needed to examine in order to correctly predict the diagnosis of one person with a positive test result.

Interpreting Likelihood Ratios

The LR reflects the magnitude of evidence that a particular test result (t) provides in favor of the disease being present relative to the disease being absent.



  • LR = 1: The test result is equally likely in patients with and without the disease
  • LR > 1: The test result is more likely among patients with the disease than without
  • LR < 1: The test result is more likely among patients without the disease than with

If you want to avoid false positives and invasive/costly follow-up...

maximize specificity

If you want to avoid false negatives and hence catch a disease early...

maximize sensitivity

Are sensitivity and specificity inherent to a given test?

No, since they depend on the degree of disease severity in a population




Sensitivity and specificity condition on the totals of true disease status (a+c for sensitivity and b+d for specificity), so they are independent of the disease prevalence if the gold standard is perfect

Are PPV and NPV dependent on the prevalence of the disease?

Yes, as they do not condition on the totals of true disease status




PPV and NPV must be interpreted within the prevalence context of the population of interest

Positivity criterion

cutoff for a positive test

Relation between sensitivity and specificity


  • Increasing sensitivity decreases specificity
  • Increasing specificity decreases sensitivity

Receiver operator characteristic (ROC) curves

The ROC curve is a plot of the true positive rate (i.e. sensitivity) versus the false positive rate (1-specificity)



Interpreting ROC Curves


  • The closer an ROC curve moves to the upper left corner, the more accurate a test it represents: the upper left-hand corner represents a test with 100% true positivity and 0% false positivity
  • As the (cut-point) criterion for a test becomes more stringent, the point on the curve corresponding to sensitivity and specificity moves down and to the left (lower sensitivity, higher specificity)
  • As the criterion for a test becomes less stringent, the point on the curve corresponding to sensitivity and specificity moves up and to the right (higher sensitivity and lower specificity); a small sketch of this cut-point sweep follows below
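
A small sketch of the cut-point behaviour described above, sweeping a positivity criterion over made-up continuous scores and printing the (1 - specificity, sensitivity) point for each cutoff:

    # Hypothetical continuous test scores with their true disease status (1 = diseased)
    scores = [0.2, 0.4, 0.35, 0.8, 0.7, 0.9, 0.1, 0.6, 0.55, 0.3]
    truth  = [0,   0,   1,    1,   1,   1,   0,   1,   0,    0]

    for cutoff in sorted(set(scores)):           # each distinct score as a positivity criterion
        tp = sum(s >= cutoff and t == 1 for s, t in zip(scores, truth))
        fn = sum(s <  cutoff and t == 1 for s, t in zip(scores, truth))
        fp = sum(s >= cutoff and t == 0 for s, t in zip(scores, truth))
        tn = sum(s <  cutoff and t == 0 for s, t in zip(scores, truth))
        sens, spec = tp / (tp + fn), tn / (tn + fp)
        print(f"cutoff={cutoff:.2f}  FPR={1 - spec:.2f}  TPR={sens:.2f}")
    # A higher (more stringent) cutoff moves the point down and to the left;
    # a lower (less stringent) cutoff moves it up and to the right.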

Measure of intra-rater reliability is needed

When one person measures the same item twice (or more)

Measure of inter-rater reliability is needed

When two (or more) people measure the same item once (or more) each

Coefficient of variation

= (standard deviation / mean) x (100%)
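
A minimal illustration of the formula, using made-up replicate assay measurements and Python's statistics module:

    from statistics import mean, stdev

    measurements = [4.9, 5.1, 5.0, 5.2, 4.8]              # illustrative replicate assay values
    cv = stdev(measurements) / mean(measurements) * 100   # coefficient of variation, in %
    print(f"CV = {cv:.1f}%")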

Interpret Coefficient of variation


  • Generally used to report the reliability of laboratory or assay measurements of continuous variables
  • Most useful in comparing the variability of several different samples, each with a different arithmetic mean
  • This is true because higher variability is usually to be expected when the arithmetic mean increases; the C.V. accounts for this variability

Utility of Bayes’ theorem


  • Bayes’ theorem demonstrates, in probabilistic terms, that the posterior probability of an event can be determined from knowledge of sensitivity, specificity, and prevalence.
  • It also makes the role of prevalence in the determination of posterior probabilities more explicitly obvious than the formulae presented earlier for the PPV; a small numeric sketch follows.
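
A brief sketch of Bayes' theorem applied to the PPV, with illustrative sensitivity, specificity, and prevalence values; it makes the dependence on prevalence explicit:

    def ppv_bayes(sens, spec, prev):
        # P(D+ | T+) = sens * prev / [sens * prev + (1 - spec) * (1 - prev)]
        return (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))

    # The same hypothetical test at two prevalences: PPV falls sharply when prevalence is low
    print(ppv_bayes(0.90, 0.95, 0.10))   # ~0.67
    print(ppv_bayes(0.90, 0.95, 0.01))   # ~0.15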