The National Academy of Medicine’s report, Improving Diagnosis in Health Care, establishes the role of laboratory professionals and laboratory testing in the diagnostic process. The report also describes factors that contribute to errors in diagnosis, including uncertainty in ordering and interpreting tests. One such factor is that clinicians do not always apply probabilistic reasoning (sometimes referred to as Bayesian reasoning) in test interpretation – in other words, they do not always take into account the prevalence of the condition or disease, the pre-test probability that the patient has the condition, test sensitivity and specificity, and the post-test probability of the condition. Incorrectly relying upon a false-positive or false-negative test result as a diagnostic indicator can result in over- or under-treatment, which leads to poor patient outcomes. When combined with other elements that can complicate the diagnostic process, such as cognitive biases and mental shortcuts, it is easy to see how errors occur. Laboratory professionals should be prepared to contribute to the discussion about evidence-based choice and interpretation of the diagnostic tests we offer.
For analyses performed in the laboratory, we have access to information about test sensitivity (how well the test detects those who have the disease or condition; true positives) and specificity (how well the test excludes those who do not have the disease or condition; true negatives). These test performance metrics provide information about the chance of a positive test in a patient with the diagnosis, but clinicians are often looking for different information: the probability that a patient does or does not have the condition, given a particular test result. This can be calculated using the test sensitivity and specificity data, in conjunction with information about prevalence of the condition and the patient’s pre-test probability of having the condition.
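As a minimal sketch, the two metrics can be computed directly from the counts in a 2×2 validation table. The functions and the patient counts below are illustrative, not drawn from any actual test:

```python
def sensitivity(true_pos, false_neg):
    """Proportion of diseased patients the test correctly detects."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of non-diseased patients the test correctly excludes."""
    return true_neg / (true_neg + false_pos)

# Hypothetical validation data: 100 diseased and 900 non-diseased patients.
sens = sensitivity(true_pos=95, false_neg=5)    # 95/100 = 0.95
spec = specificity(true_neg=855, false_pos=45)  # 855/900 = 0.95
print(f"Sensitivity: {sens:.2f}, Specificity: {spec:.2f}")
```

Note that both calculations condition on disease status: each denominator is a group whose true status is already known from the gold standard.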
A mnemonic that is useful in some, but not all, testing circumstances is SnNOut/SpPIn. SnNOut reminds us that a Negative result on a highly Sensitive test can frequently rule Out a condition, while a Positive result on a highly Specific test (SpPIn) can often rule In a disease. However, additional calculations can provide further information which can aid in diagnosis.
Since sensitivity and specificity assess test performance in populations known, based on a gold standard test, to have or not have the condition in question, these metrics do not adequately represent the ability of the test to determine who has the condition within a population of individuals who may or may not be affected. The Positive Predictive Value (PPV; true positives divided by all positive results) and Negative Predictive Value (NPV; true negatives divided by all negative results) present information about testing patients, as opposed to testing the test. These values provide the probability that a positive result is a true positive and that a negative result is a true negative; because they are calculated from a tested population, they depend on the prevalence of the condition in that population.
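A sketch of how predictive values follow from sensitivity, specificity, and prevalence via Bayes’ theorem. The numbers are hypothetical; running the same 95%-sensitive, 95%-specific test at two prevalences shows how strongly PPV depends on how common the condition is:

```python
def ppv(sens, spec, prev):
    """Probability that a positive result is a true positive."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

def npv(sens, spec, prev):
    """Probability that a negative result is a true negative."""
    true_neg = spec * (1 - prev)
    false_neg = (1 - sens) * prev
    return true_neg / (true_neg + false_neg)

# The same hypothetical test applied at 10% and 1% prevalence:
for prev in (0.10, 0.01):
    print(f"prevalence {prev:.0%}: PPV {ppv(0.95, 0.95, prev):.2f}, "
          f"NPV {npv(0.95, 0.95, prev):.2f}")
```

At 10% prevalence the PPV is about 0.68, but at 1% it drops to about 0.16: most positives in a low-prevalence population are false positives, even with a well-performing test.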
The likelihood ratio (LR) combines sensitivity and specificity into a single value: the probability of a given test result in a patient with the condition, relative to the probability of the same result in a patient without it. For a positive result, LR+ equals sensitivity divided by (1 − specificity); for a negative result, LR− equals (1 − sensitivity) divided by specificity. These ratios are helpful when choosing the best test to rule in a disease (the one with the highest LR+) or to rule out a condition (the lowest LR−). The LR can also be used to estimate the post-test probability of disease in a patient with a given pre-test probability, either with a nomogram or by converting the pre-test probability to odds, multiplying by the LR, and converting back to a probability: a positive result on a test with a high LR+ increases the post-test probability, while a negative result on a test with a small LR− decreases it.
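The odds-based update described above can be sketched in a few lines. The test characteristics and the 10% pre-test probability are illustrative assumptions:

```python
def lr_pos(sens, spec):
    """Likelihood ratio for a positive result."""
    return sens / (1 - spec)

def lr_neg(sens, spec):
    """Likelihood ratio for a negative result."""
    return (1 - sens) / spec

def post_test_probability(pre_test_prob, lr):
    """Convert probability to odds, multiply by the LR, convert back."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

sens, spec = 0.95, 0.95   # hypothetical test characteristics
print(f"LR+ = {lr_pos(sens, spec):.0f}, LR- = {lr_neg(sens, spec):.3f}")

# A positive result raises a 10% pre-test probability to about 68%:
print(f"post-test probability after a positive result: "
      f"{post_test_probability(0.10, lr_pos(sens, spec)):.2f}")
```

Reassuringly, this post-test probability (about 0.68 for a positive result at 10% pre-test probability) matches the PPV of the same test at 10% prevalence, since the two calculations are algebraically equivalent.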
Even when using tests with good performance metrics (i.e., high sensitivity and specificity), diagnosis can be complicated by intermediate or conflicting test results and co-existing conditions, among other factors. The calculations described above are not difficult, but it is not realistic to expect clinicians to work through them while ordering a laboratory test or applying the results to a patient’s care. In addition, the healthcare community now encourages involving patients in their healthcare decisions, but many untrained individuals are not prepared to perform this type of analysis, especially in the midst of a health crisis. We should consider how laboratory professionals could work with Information Technology staff and providers to supply this information through clinical decision support tools, link it to test result reports, or present it in patient educational materials, to aid with diagnosis and decision making. Presenting this material in alternate formats may also help to convey critical information. Stating data as natural frequencies instead of numerical probabilities has been shown to help both clinicians and patients. For example: “X out of 10,000 people have this condition: of the X who have the condition, x will have a positive result on this test. People who do not have the condition may also have a positive result on this test: ## out of 9900 will have a positive test.” Additional modes of communicating this information include pictographs, graphs, and other visual depictions. Making this information available to providers and patients could result in improved laboratory utilization and better patient outcomes.
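In the spirit of the natural-frequency example above, a helper function could restate a test’s characteristics as whole-number counts in a reference population. The function name, the 1% prevalence, and the test characteristics below are all hypothetical:

```python
def natural_frequency_statement(prevalence, sens, spec, population=10_000):
    """Restate prevalence, sensitivity, and specificity as counts of people."""
    diseased = round(prevalence * population)
    healthy = population - diseased
    true_pos = round(sens * diseased)
    false_pos = round((1 - spec) * healthy)
    return (f"{diseased} out of {population} people have this condition; "
            f"of those, {true_pos} will test positive. Of the {healthy} "
            f"people without the condition, {false_pos} will also test "
            f"positive.")

print(natural_frequency_statement(prevalence=0.01, sens=0.95, spec=0.95))
```

Phrased this way, a reader can see at a glance that 495 of the 590 positive results come from people without the condition, which is the same message the PPV carries in a far less intuitive form.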
Balogh, E., Miller, B., and Ball, J. (Eds.), National Academies of Sciences, Engineering, and Medicine. (2015). Improving diagnosis in health care. Washington, DC: The National Academies Press.
Garcia-Retamero, R., Cokely, E., and Hoffrage, U. (2015). Visual aids improve diagnostic inferences and metacognitive judgment calibration. Frontiers in Psychology, 6: 932. https://doi.org/10.3389/fpsyg.2015.00932
Operskalski, J. T., and Barbey, A. K. (2016). Risk literacy in medical decision-making. Science, 352(6284), 413-414. https://doi.org/10.1126/science.aaf7966
Pewsner, D., Battaglia, M., Minder, C., Marx, A., Bucher, H., and Egger, M. (2004). Ruling a diagnosis in or out with “SpPIn” and “SnNOut”: a note of caution. British Medical Journal, 329: 209.
Riegelman, R. K. (2013). Studying a study & testing a test: reading evidence-based health research. Philadelphia: Wolters Kluwer.