Sensitivity and Specificity Calculator

Calculate diagnostic test accuracy metrics including PPV, NPV, and likelihood ratios

Diagnostic Test Analysis

2×2 Confusion Matrix

                  Disease Present        Disease Absent
  Test Positive   True Positive (TP)     False Positive (FP)
  Test Negative   False Negative (FN)    True Negative (TN)

Matrix explanation:

  • True Positive (TP): Disease present, test positive - Correct positive result
  • False Positive (FP): Disease absent, test positive - Incorrect positive result
  • False Negative (FN): Disease present, test negative - Incorrect negative result
  • True Negative (TN): Disease absent, test negative - Correct negative result
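The four cells can be tallied directly from paired disease/test labels. A minimal sketch in Python (the `confusion_counts` name and the sample data are illustrative):

```python
def confusion_counts(actual, predicted):
    """Count TP, FP, FN, TN from boolean disease/test-result labels."""
    tp = sum(a and p for a, p in zip(actual, predicted))          # disease present, test positive
    fp = sum((not a) and p for a, p in zip(actual, predicted))    # disease absent, test positive
    fn = sum(a and (not p) for a, p in zip(actual, predicted))    # disease present, test negative
    tn = sum((not a) and (not p) for a, p in zip(actual, predicted))  # disease absent, test negative
    return tp, fp, fn, tn

# Made-up example data: True = disease present / test positive
actual    = [True, True, False, False, True, False]
predicted = [True, False, False, True, True, False]
print(confusion_counts(actual, predicted))  # → (2, 1, 1, 2)
```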

Prevalence: the percentage of the population with the disease (needed for the Bayesian PPV/NPV calculation).

Example: COVID-19 Rapid Test

Scenario: Rapid Antigen Test Evaluation

Sample: 1000 individuals tested

True Positives: 85 (correctly identified positive cases)

False Positives: 20 (healthy individuals testing positive)

False Negatives: 15 (infected individuals testing negative)

True Negatives: 880 (healthy individuals testing negative)

Calculated Results

Sensitivity: 85/(85+15) = 85.0% - Good at detecting infection

Specificity: 880/(880+20) = 97.8% - Excellent at confirming non-infection

PPV: 85/(85+20) = 81.0% - 81% of positive results are correct

NPV: 880/(880+15) = 98.3% - 98.3% of negative results are correct

Accuracy: (85+880)/1000 = 96.5% - Overall test accuracy
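The results above can be reproduced with a few lines of Python (all numbers taken from the scenario):

```python
# Confusion-matrix cells from the rapid-antigen example above
tp, fp, fn, tn = 85, 20, 15, 880

sensitivity = tp / (tp + fn)                # 85 / 100
specificity = tn / (tn + fp)                # 880 / 900
ppv = tp / (tp + fp)                        # 85 / 105
npv = tn / (tn + fn)                        # 880 / 895
accuracy = (tp + tn) / (tp + tn + fp + fn)  # 965 / 1000

print(f"Sensitivity: {sensitivity:.1%}")    # 85.0%
print(f"Specificity: {specificity:.1%}")    # 97.8%
print(f"PPV: {ppv:.1%}")                    # 81.0%
print(f"NPV: {npv:.1%}")                    # 98.3%
print(f"Accuracy: {accuracy:.1%}")          # 96.5%
```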

Key Metrics Guide

Sensitivity (TPR)

Ability to correctly identify positive cases

High sensitivity = Few false negatives

Specificity (TNR)

Ability to correctly identify negative cases

High specificity = Few false positives

PPV & NPV

Predictive values depend on disease prevalence

Higher prevalence = Higher PPV

Clinical Application Tips

  • Screening tests should have high sensitivity to minimize missed cases
  • Confirmatory tests should have high specificity to minimize false alarms
  • Consider prevalence when interpreting PPV and NPV
  • Likelihood ratios help assess diagnostic value
  • Balance sensitivity and specificity based on clinical consequences

Understanding Diagnostic Test Statistics

Core Concepts

Diagnostic test evaluation uses a 2×2 confusion matrix to calculate key performance metrics. These statistics help clinicians understand how well a test performs in detecting or ruling out disease.

Sensitivity vs Specificity Trade-off

There's often a trade-off between sensitivity and specificity. Adjusting test thresholds to increase one typically decreases the other. The optimal balance depends on the clinical consequences of false positives versus false negatives.

Remember: A test with 95% sensitivity will miss 5% of actual cases (false negatives), while a test with 95% specificity will incorrectly label 5% of healthy individuals as positive (false positives).

Key Formulas

Basic Metrics:

Sensitivity = TP / (TP + FN)

Specificity = TN / (TN + FP)

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Predictive Values:

PPV = TP / (TP + FP)

NPV = TN / (TN + FN)

Likelihood Ratios:

LR+ = Sensitivity / (1 - Specificity)

LR- = (1 - Sensitivity) / Specificity
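All of the formulas above fit in one small function. A sketch (the `diagnostic_metrics` name is illustrative; note that LR+ is undefined when specificity is exactly 100%):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Compute the standard diagnostic metrics from the four confusion-matrix cells."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_plus": sens / (1 - spec),      # undefined if spec == 1
        "lr_minus": (1 - sens) / spec,
    }

# Using the COVID-19 rapid test example from earlier
m = diagnostic_metrics(85, 20, 15, 880)
print(round(m["lr_plus"], 2))   # LR+ = 0.85 / (1 - 880/900) = 38.25
```

An LR+ above 10 is conventionally taken as strong evidence for disease when the test is positive.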

Prevalence Effect

PPV and NPV are highly dependent on disease prevalence. In low-prevalence populations, even highly specific tests may have poor PPV due to many false positives.
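This prevalence effect follows directly from Bayes' theorem. A sketch using a hypothetical test with 95% sensitivity and 95% specificity (function names are illustrative):

```python
def ppv_from_prevalence(sens, spec, prev):
    """P(disease | positive test) via Bayes' theorem."""
    return (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))

def npv_from_prevalence(sens, spec, prev):
    """P(no disease | negative test) via Bayes' theorem."""
    return (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)

# At 1% prevalence, most positives are false positives:
print(f"{ppv_from_prevalence(0.95, 0.95, 0.01):.1%}")  # ~16.1%
# At 20% prevalence, the same test performs far better:
print(f"{ppv_from_prevalence(0.95, 0.95, 0.20):.1%}")  # ~82.6%
```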

Screening vs Diagnosis

Screening tests prioritize sensitivity to catch all cases, while confirmatory tests prioritize specificity to avoid false diagnoses.

ROC Analysis

ROC curves plot sensitivity vs (1-specificity) to evaluate test performance across different thresholds and compare multiple tests.
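Sweeping the threshold by hand makes the idea concrete. A sketch with made-up test scores (the `roc_points` helper is illustrative):

```python
def roc_points(scores, diseased):
    """Return (1 - specificity, sensitivity) pairs, one per distinct threshold."""
    points = []
    for thresh in sorted(set(scores), reverse=True):
        predicted = [s >= thresh for s in scores]
        tp = sum(p and d for p, d in zip(predicted, diseased))
        fp = sum(p and not d for p, d in zip(predicted, diseased))
        sens = tp / sum(diseased)                      # true positive rate
        fpr = fp / (len(diseased) - sum(diseased))     # 1 - specificity
        points.append((fpr, sens))
    return points

# Made-up raw test scores and true disease status
scores   = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
diseased = [True, True, False, True, False, False]
print(roc_points(scores, diseased))
```

Plotting these pairs traces the ROC curve; a perfect test hugs the top-left corner, while a useless one follows the diagonal.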