Failure to include dropouts can skew research results, UB study finds
A new UB study of publications in the world’s top five general medical journals has found that when clinical trials do not account for participants who dropped out, results are biased and may even lead to incorrect conclusions.
Published recently in the British Medical Journal, the methodological study consisted of a systematic analysis of 235 clinical trials published in the world’s top five general medical journals between 2005 and 2007 that claimed a statistically significant effect.
“We found that in up to a third of trials, the results that were reported as positive—in other words, statistically significant—would become negative—not statistically significant—if the investigators had appropriately taken into consideration those participants who were lost to follow-up,” says Elie A. Akl, lead author and associate professor of medicine, family medicine and social and preventive medicine in the UB schools of Medicine and Biomedical Sciences and Public Health and Health Professions. He also has an appointment at McMaster University.
“In other words, one of three claims of effectiveness of interventions made in top general medical journals might be wrong,” he says.
In one example, a study that compared two surgical techniques for treating stress urinary incontinence found that one was superior. But the analysis published this month found that 21 percent of participants had been lost to follow-up. “When we reanalyzed that study by taking into account those dropouts, we found that the trial might have overestimated the superiority of one procedure over the other,” Akl says.
He explains that it has long been suspected, but never proven, that loss to follow-up introduces bias into the results of clinical trials. “The methodology we developed allowed us to provide that proof,” he says.
The methodology that he and his co-authors developed consists of sensitivity analyses, a statistical approach to test the robustness of the results of an analysis in the face of specific assumptions: in this case, assumptions about the outcomes of patients lost to follow-up.
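To make the idea concrete, here is a minimal sketch, not the authors' actual code or data, of one such sensitivity analysis: a trial result is recomputed under a deliberately unfavorable assumption about participants who dropped out. All counts below are hypothetical, and the worst-case imputation shown is only one of several assumptions such an analysis might test.

# Sketch of a worst-case sensitivity analysis for loss to follow-up.
# Counts are hypothetical; the test is a standard two-proportion z-test.
from math import sqrt, erf

def two_proportion_p_value(events_a, n_a, events_b, n_b):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p_pooled = (events_a + events_b) / (n_a + n_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    z = (events_a / n_a - events_b / n_b) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical trial: successes among participants who completed follow-up,
# plus the number lost to follow-up in each arm
treat_success, treat_followed, treat_lost = 80, 100, 15
control_success, control_followed, control_lost = 60, 100, 15

# Complete-case analysis: dropouts are simply ignored
p_complete = two_proportion_p_value(treat_success, treat_followed,
                                    control_success, control_followed)

# Worst-case assumption: every dropout in the treatment arm failed,
# every dropout in the control arm succeeded
p_worst = two_proportion_p_value(treat_success, treat_followed + treat_lost,
                                 control_success + control_lost,
                                 control_followed + control_lost)

print(f"complete-case p = {p_complete:.4f}")
print(f"worst-case    p = {p_worst:.4f}")

With these made-up numbers, a clearly significant complete-case result (p of roughly 0.002) becomes non-significant (p of roughly 0.5) once the dropouts are imputed unfavorably, which is the kind of reversal the study set out to detect.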
“This study gives us a better understanding of the problem of loss to follow-up in clinical trials and provides us with better tools to address it,” Akl says.
“This methodology will allow those who conduct the trials and those who use their results, including clinicians, other scientists, developers of clinical guidelines, policymakers and bodies like the Food and Drug Administration, to better judge the risk of bias,” he concludes.
The studies that were analyzed had been published in Annals of Internal Medicine, British Medical Journal, the Journal of the American Medical Association, Lancet and the New England Journal of Medicine. To be included in the analysis, trials had to have reported a significant effect.
Akl led this major study, funded by Pfizer, which took three years to complete. His co-authors, 20 clinical epidemiologists, are from the following institutions: McMaster University; University Hospital Basel; Kaiser Permanente Northwest; Hospital for Sick Children in Toronto; Institute for Work and Health; Université de Sherbrooke; University Children’s Hospital Tuebingen; Pontificia Universidad Catolica de Chile; Tel Aviv University; the University of Ottawa; the University of Freiburg; and the University of Oxford.