From: Scott, Bruce Hillman
Registration of trials beforehand seems to me to be A Good Thing.
I'd be particularly interested in having documented beforehand the
selected endpoints and which interventions or markers are actually
being studied. The paranoid part of my brain is constantly worried about
data mining or changing endpoints.
I wonder what the unintended consequences will be.
---REPLY---
It's hardly paranoia. A recent study in JAMA examined a large cohort of
trials--identified through a regional ethics review panel--and found that only
a third of the approved studies were ever published, and that among the
published studies only 15% reported a primary endpoint that had been
prospectively identified as such in the original protocol. See abstract below.
Edward Greeno
EMPIRICAL EVIDENCE FOR SELECTIVE REPORTING OF OUTCOMES IN RANDOMIZED TRIALS
COMPARISON OF PROTOCOLS TO PUBLISHED ARTICLES
An-Wen Chan, MD, DPhil; Asbjørn Hróbjartsson, MD, PhD; Mette T. Haahr,
BSc; Peter C. Gøtzsche, MD, DrMedSci; Douglas G. Altman, DSc
JAMA. 2004;291:2457-2465.
CONTEXT Selective reporting of outcomes within published studies based
on the nature or direction of their results has been widely suspected, but
direct evidence of such bias is currently limited to case reports.
OBJECTIVE To study empirically the extent and nature of outcome
reporting bias in a cohort of randomized trials.
DESIGN Cohort study using protocols and published reports of
randomized trials approved by the Scientific-Ethical Committees for
Copenhagen and Frederiksberg, Denmark, in 1994-1995. The number and
characteristics of reported and unreported trial outcomes were recorded from
protocols, journal articles, and a survey of trialists. An outcome was
considered incompletely reported if insufficient data were presented in the
published articles for meta-analysis. Odds ratios relating the completeness
of outcome reporting to statistical significance were calculated for each
trial and then pooled to provide an overall estimate of bias. Protocols and
published articles were also compared to identify discrepancies in primary
outcomes.
MAIN OUTCOME MEASURES Completeness of reporting of efficacy and harm
outcomes and of statistically significant vs nonsignificant outcomes;
consistency between primary outcomes defined in the most recent protocols
and those defined in published articles.
RESULTS One hundred two trials with 122 published journal articles and
3736 outcomes were identified. Overall, 50% of efficacy and 65% of harm
outcomes per trial were incompletely reported. Statistically significant
outcomes had a higher odds of being fully reported compared with
nonsignificant outcomes for both efficacy (pooled odds ratio, 2.4; 95%
confidence interval [CI], 1.4-4.0) and harm (pooled odds ratio, 4.7; 95% CI,
1.8-12.0) data. In comparing published articles with protocols, 62% of
trials had at least 1 primary outcome that was changed, introduced, or
omitted. Eighty-six percent of survey responders (42/49) denied the
existence of unreported outcomes despite clear evidence to the contrary.
CONCLUSIONS The reporting of trial outcomes is not only frequently
incomplete but also biased and inconsistent with protocols. Published
articles, as well as reviews that incorporate them, may therefore be
unreliable and overestimate the benefits of an intervention. To ensure
transparency, planned trials should be registered and protocols should be
made publicly available prior to trial completion.