36 Cards in this Set
- Front
- Back
Sensitivity |
= a / (a+c) = true positive/ all actual positive |
|
Specificity |
= d / (b+d) = true negative/ all actual negative |
|
Positive Predictive Value |
= a / (a+b) = true positive/ all tested positive |
|
Negative Predictive Value |
= d / (c+d) = true negative/ all tested negative |
|
Diagnostic accuracy |
= Percent agreement = [(a+d) / (a+b+c+d)] * 100% = percentage of true results out of all results |
|
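The five measures above can all be read off one 2x2 table. A minimal sketch in Python, using hypothetical counts (a = 90, b = 30, c = 10, d = 70):

```python
# Hypothetical 2x2 table: a = true positives, b = false positives,
# c = false negatives, d = true negatives.
a, b, c, d = 90, 30, 10, 70

sensitivity = a / (a + c)             # true positives / all actual positives
specificity = d / (b + d)             # true negatives / all actual negatives
ppv = a / (a + b)                     # true positives / all tested positive
npv = d / (c + d)                     # true negatives / all tested negative
accuracy = (a + d) / (a + b + c + d)  # percent agreement, as a proportion

print(sensitivity)  # 0.9
print(specificity)  # 0.7
print(ppv)          # 0.75
print(npv)          # 0.875
print(accuracy)     # 0.8
```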
Chamberlain's percent positive agreement |
= a / (a+b+c) = true positive/ all results except true negatives |
|
OARP |
= a / b = true positive/ false positive; Odds of being Affected given a Positive Result |
|
DOR |
= ad / bc = true results/ false results; Diagnostic Odds Ratio: a summary measure of test effectiveness |
|
Youden's J |
= sensitivity + specificity - 1 |
|
PSI |
= PPV + NPV - 1; Predictive Summary Index |
|
Likelihood ratio for a positive test (result) |
= sensitivity / (1-specificity) |
|
Likelihood ratio for a negative test (result) |
= (1-sensitivity) / specificity |
|
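A minimal sketch of the two likelihood-ratio formulas, using hypothetical values (sensitivity 0.9, specificity 0.7):

```python
# Hypothetical test characteristics.
sensitivity, specificity = 0.9, 0.7

lr_pos = sensitivity / (1 - specificity)  # LR+ : evidence for disease when the test is positive
lr_neg = (1 - sensitivity) / specificity  # LR- : evidence for disease when the test is negative

print(round(lr_pos, 2))  # 3.0
print(round(lr_neg, 2))  # 0.14
```

An LR+ well above 1 (here 3.0) argues for disease after a positive result; an LR- well below 1 (here 0.14) argues against disease after a negative result.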
Kappa statistic |
= (O - C) / (1 - C)
where O = (a+d) / (a+b+c+d) and C = [(a+c)(a+b) + (b+d)(c+d)] / [(a+b+c+d)^2]
Note: you can build an expected-counts table from the products of the margins and calculate C from it the same way O is calculated from the observed table. This may be easier. |
|
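The kappa formula above, sketched with the same hypothetical counts (a = 90, b = 30, c = 10, d = 70):

```python
# Hypothetical 2x2 counts.
a, b, c, d = 90, 30, 10, 70
n = a + b + c + d

observed = (a + d) / n                                    # O: observed agreement
chance = ((a + c) * (a + b) + (b + d) * (c + d)) / n**2   # C: chance agreement from the margins
kappa = (observed - chance) / (1 - chance)

print(round(kappa, 3))  # 0.6
```

Here O = 0.8 and C = 0.5, so kappa = 0.3 / 0.5 = 0.6: agreement is 60% of the way from chance to perfect.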
Post-test Odds |
= pre-test odds * LR |
|
Pre-test Odds |
If you have no other information about a subject's health status before administering a test, prevalence serves as the subject's pre-test probability of disease |
|
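The two cards above chain together into a Bayes update. A minimal sketch with hypothetical numbers (prevalence 10%, LR+ of 3):

```python
prevalence = 0.10   # hypothetical pre-test probability of disease
lr_positive = 3.0   # hypothetical likelihood ratio for a positive result

pre_test_odds = prevalence / (1 - prevalence)            # probability -> odds
post_test_odds = pre_test_odds * lr_positive             # post-test odds = pre-test odds * LR
post_test_prob = post_test_odds / (1 + post_test_odds)   # odds -> probability

print(round(post_test_prob, 2))  # 0.25
```

A positive result raises the probability of disease from 10% to 25% in this example.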
Interpret Kappa |
Kappa measures agreement beyond chance: kappa = 1 is perfect agreement, kappa = 0 is agreement no better than chance, and kappa < 0 is worse than chance; higher values indicate better reliability. |
|
Interpret Youden's J |
J = 1 means the test has no false positives and no false negatives (a perfect test), while J = 0 means the test is no better than chance; higher values indicate better overall test performance. |
|
Youden's J: 1/J |
= NND = Number of diseased patients needed to examine in order to correctly diagnose one person with the disease. |
|
Interpret PPV |
If the test result is positive in the patient, it is the probability the patient actually has the disease. |
|
Interpret NPV |
If the test result is negative in the patient, it is the probability the patient actually does not have the disease |
|
Interpret PSI |
PSI summarizes the overall gain in predictive certainty from the test: PSI = 1 means the test result predicts disease status perfectly, and PSI = 0 means the test adds no predictive value. |
|
1/PSI |
= NNP = Number of test-positive patients needed to examine in order to correctly predict the diagnosis of one person with a positive test result |
|
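A minimal sketch tying together J, PSI, NND, and NNP, using the same hypothetical values as before (sensitivity 0.9, specificity 0.7, PPV 0.75, NPV 0.875):

```python
# Hypothetical test characteristics.
sensitivity, specificity = 0.9, 0.7
ppv, npv = 0.75, 0.875

youden_j = sensitivity + specificity - 1  # Youden's J
psi = ppv + npv - 1                       # Predictive Summary Index
nnd = 1 / youden_j  # diseased patients to examine to correctly diagnose one
nnp = 1 / psi       # test-positive patients to examine to correctly predict one

print(round(nnd, 2))  # 1.67
print(round(nnp, 2))  # 1.6
```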
Interpreting Likelihood Ratios |
The LR reflects the magnitude of evidence that a particular test result (t) provides in favor of the disease being present relative to the disease being absent |
|
|
If you want to avoid false positives and invasive/costly follow-up... |
maximize specificity |
|
If you want to avoid false negatives and hence catch a disease early... |
maximize sensitivity |
|
Are sensitivity and specificity inherent to a given test? |
No, since they depend on the degree of disease severity in a population. Sensitivity and specificity condition on the totals of true disease status (a+c for sensitivity and b+d for specificity), so they are independent of the disease prevalence if the gold standard is perfect |
|
Are PPV and NPV dependent on the prevalence of the disease? |
Yes, as they do not condition on the totals of true disease status. PPV and NPV must be interpreted within the prevalence context of the population of interest |
|
Positivity criterion |
cutoff for a positive test |
|
Relation between sensitivity and specificity |
They trade off against each other: lowering the positivity criterion (cutoff) raises sensitivity but lowers specificity, and raising the cutoff does the reverse. |
|
Receiver operator characteristic (ROC) curves |
The ROC curve is a plot of the true positive rate (i.e. sensitivity) versus the false positive rate (1 - specificity) |
|
Interpreting ROC Curves |
The closer the curve lies to the upper-left corner, the better the test; a curve along the 45-degree diagonal indicates a test no better than chance. The area under the curve (AUC) summarizes overall test performance across all cutoffs. |
|
Measure of intra-rater reliability is needed |
When one person measures the same item twice (or more) |
|
Measure of inter-rater reliability is needed |
When two (or more) people measure the same item once (or more) each |
|
Coefficient of variation |
= (standard deviation / mean) x (100%) |
|
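The CV formula above, sketched with hypothetical repeated measurements of the same item:

```python
# Hypothetical repeated measurements of the same item.
data = [4.8, 5.1, 5.0, 4.9, 5.2]

mean = sum(data) / len(data)
# Sample standard deviation (n - 1 in the denominator).
sd = (sum((x - mean) ** 2 for x in data) / (len(data) - 1)) ** 0.5
cv = sd / mean * 100  # coefficient of variation, in percent

print(round(cv, 1))  # 3.2
```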
Interpret Coefficient of variation |
The CV expresses the standard deviation as a percentage of the mean, giving a unit-free measure of relative variability; a lower CV indicates more precise (reliable) measurement. |
|
Utility of Bayes’ theorem |
Bayes' theorem lets you update the pre-test probability of disease with the test result (via the likelihood ratio) to obtain the post-test probability of disease. |