Friday, January 11, 2013

Has too much "spin" crept into study reports? Well, according to the Annals of Oncology, one third of breast cancer studies are flawed by bias in the reporting of efficacy and safety.

A third of randomized clinical trials (RCTs) in breast cancer had published results that showed bias in the reporting of endpoints, and two-thirds showed bias in reporting toxicity, authors of a literature review concluded.

Of 164 studies included in the review, 54 (32.9%) reported positive results on the basis of outcomes other than the primary endpoint, which itself showed no statistically significant difference. Authors of the reports "used spin in an attempt to conceal bias," according to an article published online in Annals of Oncology.

The frequency of biased reporting increased to 59% when the analysis was limited to the 92 studies that produced nonsignificant differences in the primary endpoint.
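As a quick sanity check on those figures (assuming the same 54 biased trials underlie both percentages, which the report implies but does not state outright):

```python
# Sanity check on the percentages above; assumes the same 54 biased
# trials underlie both figures (implied, not stated, in the report).
biased = 54
all_trials = 164
nonsignificant_trials = 92  # trials with a nonsignificant primary endpoint

print(f"{biased / all_trials:.1%}")             # 32.9% of all trials
print(f"{biased / nonsignificant_trials:.1%}")  # 58.7%, reported as ~59%
```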

"Bias in the reporting of efficacy and toxicity remains prevalent," Ian F. Tannock, MD, of the University of Toronto, and co-authors wrote in conclusion. "Clinicians, reviewers, journal editors, and regulators should apply a critical eye to trial reports and be wary of the possibility of biased reporting. Guidelines are necessary to improve the reporting of both efficacy and toxicity."

Because RCTs represent the gold standard for evaluation of a new therapy's efficacy and toxicity, appropriate trial design and objective reporting of results are essential. Bias in reporting can create false impressions about a therapy's safety and efficacy, and clinical decisions may be influenced by the reports, the authors noted in their introduction.

Spin (considered a form of bias) involves use of reporting strategies that emphasize the benefits of an experimental treatment, even when the primary outcome is nonsignificant. Spin might also be used to distract readers from nonsignificant results, the authors continued.

To evaluate the occurrence of bias and spin in breast cancer studies, investigators searched for articles published from 1995 to 2011. They defined bias as inappropriate reporting of the primary endpoint and toxicity, particularly in an article's abstract. Spin was defined as the use of terminology in the abstract to suggest that a negative trial was positive, on the basis of outcomes other than the primary endpoint.
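To make those definitions concrete, here is a toy sketch of the classification logic as described above; the function and field names are mine, not the authors':

```python
# Toy sketch of the paper's operational definition of spin; the function
# and argument names here are hypothetical, not taken from the paper.
def abstract_shows_spin(primary_endpoint_significant: bool,
                        conclusion_sounds_positive: bool,
                        relies_on_secondary_outcomes: bool) -> bool:
    """Flag an abstract for spin when a nonsignificant primary endpoint is
    framed as a positive result on the strength of other outcomes."""
    return (not primary_endpoint_significant
            and conclusion_sounds_positive
            and relies_on_secondary_outcomes)

# Example: a negative trial whose abstract leans on a secondary endpoint.
print(abstract_shows_spin(False, True, True))  # True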

The authors trimmed an initial list of 568 articles to 164, consisting of 148 RCTs of systemic therapy in breast cancer, 11 evaluating radiation therapy, and five involving surgery. The trials were almost equally divided between the adjuvant and metastatic settings.

In 27 trials, overall survival was the primary endpoint, whereas disease-free or progression-free survival was the primary outcome of interest in the remaining 137 studies. Thirty trials were registered in ClinicalTrials.gov, and investigators in seven of those trials changed the primary endpoint in the final report.

Results showed that 72 (43.9%) trials yielded statistically significant differences in the primary endpoint in favor of the experimental arm. The remaining studies showed no difference between the experimental and control arms.

More than 90% of the studies were published in medium- or high-impact journals (median impact factor of 19). Date of publication did not influence bias or spin in reporting, according to the authors.

Limiting their analysis to the 92 trials with nonsignificant primary endpoints, the authors found that 54 (59%) of the studies exhibited bias in reporting. Authors of negative studies were significantly more likely to omit the primary endpoint from the concluding statement of the abstract (27% versus 7%, OR 5.15, P=0.001). The probability of bias did not differ between trials in the adjuvant and metastatic settings.
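For readers who want to see where an odds ratio like that comes from, here is a rough reconstruction from the rounded percentages; the paper's figure of 5.15 presumably reflects the unrounded counts, so treat this as an approximation:

```python
# Rough reconstruction of the reported odds ratio from the rounded
# proportions (27% vs 7%); the published OR of 5.15 presumably comes
# from unrounded counts, so this is only an approximation.
p_neg = 0.27  # negative trials omitting the primary endpoint from the conclusion
p_pos = 0.07  # positive trials doing the same
odds_ratio = (p_neg / (1 - p_neg)) / (p_pos / (1 - p_pos))
print(f"{odds_ratio:.2f}")  # ~4.91, in the ballpark of the reported 5.15
```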

Examination of toxicity reporting revealed bias in 110 of 164 trials. The authors found a significant association between biased reporting of toxicity and a statistically significant difference in the primary endpoint (OR 2.00, P=0.044). They found no association between bias in reporting toxicity and bias in reporting efficacy.

Biased reporting of toxicity was significantly associated with trials that had overall survival as the primary endpoint (OR 3.30, P=0.028). The journal impact factor and trial setting (adjuvant versus metastatic) did not influence bias in reporting toxicity.

About two-thirds (103) of the RCTs were industry funded. The authors found that the source of funding did not influence the likelihood of bias in reporting efficacy or toxicity.

Acknowledging the limitations of their study, the authors noted that the review included only RCTs in breast cancer, excluded trials with fewer than 200 patients, and relied on subjective measures of bias and spin rated on scales based on investigator interpretation; in addition, most of the trials were not registered in ClinicalTrials.gov, and the European Clinical Trials Registries were not searched.

The authors had no relevant disclosures.
