In this context, understanding the reliability of the study designs used in vaccine safety surveillance is important to ensure that the resulting evidence is appropriately interpreted by all stakeholders.
Historical comparator designs, which compare observed rates of adverse events among a vaccinated cohort against background rates in a general population, have regularly been used by regulators and other vaccine safety researchers. According to a recent study published in Frontiers in Pharmacology, however, these designs can generate a high number of false positives. The paper, which evaluated the methods used for surveillance of H1N1, seasonal influenza, and other recent vaccines, highlights the need to further evaluate study design at this critical time of COVID-19 vaccine surveillance.
Age-sex adjustment and empirical calibration were among the measures that produced more reliable safety surveillance findings, according to the study, “Bias, Precision and Timeliness of Historical (Background) Rate Comparison Methods for Vaccine Safety Monitoring: An Empirical Multi-Database Analysis,” led by Xintong Li, a DPhil candidate at the University of Oxford, and supported by the OHDSI and EHDEN open-science communities.
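Of those two measures, empirical calibration may be the less familiar: it uses estimates obtained for negative control outcomes (outcomes believed to have no causal link to the vaccine) to fit an empirical null distribution, then recomputes p-values against that null rather than the traditional one. The Python sketch below is a simplified illustration of the idea, using made-up negative-control estimates; it is not the study's actual implementation (OHDSI's EmpiricalCalibration R package implements the full method).

```python
import numpy as np
from scipy import stats

def calibrate_p_value(log_rr, se_log_rr, null_log_rrs):
    """Simplified empirical calibration.

    Fit a Gaussian empirical null to the log rate ratios observed for
    negative control outcomes, then test a new estimate against that
    null instead of against the traditional null N(0, se^2).
    """
    mu = np.mean(null_log_rrs)          # average systematic error (bias)
    tau = np.std(null_log_rrs, ddof=1)  # spread of systematic error
    # Total variance: systematic error plus this estimate's sampling error.
    total_sd = np.sqrt(tau**2 + se_log_rr**2)
    z = (log_rr - mu) / total_sd
    return 2 * stats.norm.sf(abs(z))

# Hypothetical rate ratios for negative control outcomes, mostly above 1,
# as one might see when comparing a vaccinated cohort to background rates:
negative_controls = np.log([0.9, 1.3, 1.1, 1.6, 1.2, 1.4, 0.8, 1.5])

# A candidate "signal" with rate ratio 1.5:
log_rr, se = np.log(1.5), 0.15
p_traditional = 2 * stats.norm.sf(abs(log_rr / se))
p_calibrated = calibrate_p_value(log_rr, se, negative_controls)
print(f"traditional p = {p_traditional:.3f}, calibrated p = {p_calibrated:.3f}")
```

With these illustrative numbers, an estimate that looks significant against the traditional null becomes unremarkable once the systematic error visible in the negative controls is taken into account.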
A Common Method Of Surveillance
Regulatory agencies have previously relied on historical comparison, a design that takes the rate of adverse events following immunization and compares it to the expected incidence rate within a general population.
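As a minimal sketch, with hypothetical numbers rather than data from the study, the design reduces to computing an expected event count from a historical background rate and the vaccinated cohort's person-time, then testing the observed count against it:

```python
from scipy import stats

# Hypothetical inputs, for illustration only:
background_rate = 5.0 / 100_000   # historical events per person-year
person_years = 250_000            # follow-up time in the vaccinated cohort
observed_events = 19              # events seen after vaccination

expected_events = background_rate * person_years  # 12.5

# Observed/expected ratio and a one-sided exact Poisson test of
# H0: the post-vaccine rate equals the historical background rate.
obs_exp_ratio = observed_events / expected_events
p_value = stats.poisson.sf(observed_events - 1, expected_events)  # P(X >= obs)

print(f"O/E = {obs_exp_ratio:.2f}, one-sided p = {p_value:.3f}")
```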
“This is a method that relies on the comparison of historical data, which is then compared to post-vaccine events,” said senior author Daniel Prieto-Alhambra, MD, MSc, PhD, Professor of Pharmacoepidemiology at the University of Oxford. “Vaccinated people are, however, not always comparable to the general population. They tend to be older and more vulnerable than the average citizen. Therefore, the comparison of post-vaccine versus historical rates tends to detect false positives.”
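The age-sex adjustment evaluated in the study targets exactly this mismatch. The sketch below, again with hypothetical rates and age strata only (the study also adjusted for sex), illustrates indirect standardization: the expected count is built stratum by stratum, so an older vaccinated cohort is compared against the background rates of similarly aged people rather than of the average citizen.

```python
# Hypothetical age-stratified background rates (events per person-year)
# and person-time in the vaccinated cohort; illustrative numbers only.
background_rates = {"18-49": 2.0e-5, "50-64": 6.0e-5, "65+": 15.0e-5}
vaccinated_person_years = {"18-49": 40_000, "50-64": 80_000, "65+": 130_000}

total_py = sum(vaccinated_person_years.values())

# Crude expected count: one overall background rate, weighted by the
# *general population's* age mix (here assumed mostly young), applied
# to all of the vaccinated cohort's person-time.
general_pop_rate = 0.7 * 2.0e-5 + 0.2 * 6.0e-5 + 0.1 * 15.0e-5
crude_expected = general_pop_rate * total_py

# Age-adjusted expected count: each stratum gets its own background rate.
adjusted_expected = sum(
    background_rates[age] * py for age, py in vaccinated_person_years.items()
)

observed = 20
print(f"crude O/E    = {observed / crude_expected:.2f}")     # looks elevated
print(f"adjusted O/E = {observed / adjusted_expected:.2f}")  # signal shrinks
```

With these made-up numbers, an apparent near-doubling of risk under the crude comparison disappears once the cohort's older age mix is accounted for.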