Making Sense of Sensitivity, Specificity and Predictive Value: A Guide for Patients, Clinicians and Policymakers

In this post, I will discuss sensitivity, specificity and positive predictive value in relation to diagnostic and screening tests. Many more people have become aware of these measures during the Covid-19 pandemic with the increased use of lateral flow and PCR tests.

In clinical practice and public health, sensitivity, specificity, and predictive value are important measures of the performance of diagnostic and screening tests. These measures can help clinicians, public health specialists and the public to understand the accuracy of a test and to make informed decisions about its use in patient care.

Sensitivity: The proportion of people with a disease who test positive on a diagnostic or screening test.

Sensitivity = True Positives / (True Positives + False Negatives)

Specificity: The proportion of people without a disease who test negative on a diagnostic or screening test.

Specificity = True Negatives / (True Negatives + False Positives)

Positive predictive value (PPV): The proportion of people who test positive on a diagnostic test who actually have the disease.

Positive Predictive Value = True Positives / (True Positives + False Positives)

Negative predictive value (NPV): The proportion of people who test negative on a diagnostic test who actually do not have the disease.

Negative Predictive Value = True Negatives / (True Negatives + False Negatives)
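The four definitions above can be illustrated with a short calculation from a 2×2 table of test results. This is a minimal sketch in Python; the counts are invented for illustration and are not taken from any real study.

```python
# Illustrative counts from a hypothetical 2x2 confusion matrix.
tp, fn = 90, 10    # people WITH the disease: true positives, false negatives
tn, fp = 980, 20   # people WITHOUT the disease: true negatives, false positives

sensitivity = tp / (tp + fn)  # proportion of diseased people who test positive
specificity = tn / (tn + fp)  # proportion of disease-free people who test negative
ppv = tp / (tp + fp)          # proportion of positive tests that are correct
npv = tn / (tn + fn)          # proportion of negative tests that are correct

print(f"Sensitivity: {sensitivity:.0%}")
print(f"Specificity: {specificity:.0%}")
print(f"PPV: {ppv:.1%}")
print(f"NPV: {npv:.1%}")
```

Note that each measure uses a different pair of cells from the same table: sensitivity and specificity are calculated down the disease-status columns, while PPV and NPV are calculated along the test-result rows.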

How do we interpret sensitivity, specificity, and predictive value?

Sensitivity and specificity are linked measures. A test with high sensitivity is good at identifying people with a disease, but a test tuned to maximise sensitivity may also produce false positives in people who do not have the disease. Conversely, a test with high specificity is good at identifying people who do not have a disease, but a test tuned to maximise specificity may produce false negatives in people who do have the disease. In general, when the threshold of a test is adjusted, increasing sensitivity decreases specificity, and vice versa.

Positive Predictive Value (PPV) depends on the prevalence of the disease in the population being tested. In a population with a high prevalence of disease, a positive test result is more likely to be a true positive. Conversely, in a population with a low prevalence of disease, a positive test result is more likely to be a false positive.

In clinical and public health practice this means that a test can have high sensitivity and specificity, but if it is carried out in a population with a low prevalence, most positive tests may be false positives, limiting the value of a positive result. This is why the same test can perform differently in primary care (where the prevalence of a condition is often low) and in hospital care (where prevalence will generally be higher).

The Covid-19 pandemic brought global attention to the importance of diagnostic test parameters such as sensitivity, specificity and positive predictive value. Initial Covid-19 tests often prioritised sensitivity to capture as many positive cases as possible. However, as the pandemic progressed, the need for more specific tests became clear to minimise false positives that could distort public health strategies. For example, a false positive test could result in a person isolating or staying off work or school unnecessarily.

A test with a high Negative Predictive Value means that it is good at ruling out disease in people who test negative. This is important for public health interventions, such as contact tracing, where it is important to identify people who are unlikely to be infected with a disease so that they can be excluded from further monitoring and isolation.

The pandemic underscored that no single measure—sensitivity, specificity, or predictive value—could offer a complete picture of a test’s effectiveness.

Example of a diagnostic test: A Covid-19 test has a sensitivity of 90%, meaning that 90% of people with a Covid-19 infection will test positive on the test. The test has a specificity of 98%, meaning that 98% of people without Covid-19 will test negative on the test.

The PPV of the test will vary depending on the prevalence of Covid-19 in the population being tested. For example, if 5% of people in a population have Covid-19, then the PPV of the test will be 70%. This means that 70% of people who test positive on the test will actually have Covid-19.

If the prevalence of Covid-19 is 1%, then the PPV will be 31%. This means that 31% of people who test positive on the test will actually have Covid-19. Hence, at times of low prevalence, many positive Covid-19 tests will be wrong.
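The figures in this example can be checked by applying Bayes' theorem: the expected true positives are sensitivity × prevalence, and the expected false positives are (1 − specificity) × (1 − prevalence). A small Python sketch of this calculation:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """PPV from sensitivity, specificity and prevalence (Bayes' theorem)."""
    true_pos = sensitivity * prevalence            # expected true positive rate
    false_pos = (1 - specificity) * (1 - prevalence)  # expected false positive rate
    return true_pos / (true_pos + false_pos)

# The Covid-19 example above: sensitivity 90%, specificity 98%.
print(f"PPV at 5% prevalence: {ppv(0.90, 0.98, 0.05):.0%}")
print(f"PPV at 1% prevalence: {ppv(0.90, 0.98, 0.01):.0%}")
```

Running this reproduces the figures quoted above: roughly 70% at 5% prevalence and 31% at 1% prevalence, showing how sharply PPV falls as prevalence drops even when the test itself is unchanged.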

You can use a Positive Predictive Value Calculator to see how changing sensitivity, specificity and prevalence alters the result.

Screening tests have also become more important as health systems across the world try to detect conditions such as cancer earlier in their clinical course in an attempt to improve health outcomes and survival.

Example of a screening test: A mammogram is a screening test for breast cancer. It has a sensitivity of 85%, meaning that 85% of women with breast cancer will have a positive mammogram. The mammogram has a specificity of 90%, meaning that 90% of women without breast cancer will have a negative mammogram. The PPV of the mammogram will vary depending on the prevalence of breast cancer in the population being screened. For example, if the prevalence of breast cancer in a population is 1%, then the PPV of the mammogram will be 8%. This means that 8% of women who have a positive mammogram will actually have breast cancer. Hence, many women who don’t have breast cancer will need investigation to confirm the result of their screening test.
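The mammogram example is often easier to grasp as counts per 1,000 women screened rather than as percentages. A sketch of that calculation, using the sensitivity (85%), specificity (90%) and prevalence (1%) stated above:

```python
# Expected results per 1,000 women screened, using the figures above.
n = 1000
with_cancer = n * 0.01            # women who actually have breast cancer
without_cancer = n - with_cancer  # women who do not

true_pos = with_cancer * 0.85        # positive mammograms that are correct
false_pos = without_cancer * 0.10    # positive mammograms that are wrong

ppv = true_pos / (true_pos + false_pos)
print(f"Positive mammograms per 1,000 screened: {true_pos + false_pos:.1f}")
print(f"Of which true positives: {true_pos:.1f}")
print(f"PPV: {ppv:.0%}")
```

Out of roughly 108 positive mammograms, only about 8 or 9 reflect true breast cancer, which is why so many screen-positive women need further investigation to confirm or rule out the diagnosis.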

Conclusion: Sensitivity, specificity, and predictive value are important concepts in the evaluation of diagnostic and screening tests. Clinicians, public health specialists and the public should understand the performance of a test before using it in patient care.

In addition to sensitivity, specificity, and predictive value, there are other factors that clinicians should consider when choosing a diagnostic or screening test, such as the cost of the test, the risks and benefits of the test, and the availability of alternative tests.

No diagnostic or screening test is perfect. All tests have the potential to produce false positives and false negatives. Clinicians, the public and policymakers should use judgement to interpret the results of any test, and to make decisions about patient care, screening programmes and public health policy.