In Trials We Trust

All science depends on trust. Trust that experiments are repeatable, observations objectively made, and research conducted without bias. We trust the reputation of journals and the expertise of reviewers. With the fragmentation and specialisms of modern science, trust is perhaps more important than ever, but it’s been necessary from the start. When Galileo claimed there were moons around Jupiter and Robert Hooke first drew the anatomy of a louse, not everybody had to have access to a telescope or microscope to accept such discoveries. The troubles Galileo faced in Rome are a case in point: if every 17th-century European had needed a telescope to believe Galileo’s claims, the Catholic church would probably have let him write what he liked.

I’ve never seen the Galilean satellites – in fact, now I think about it, I can’t say that I’ve even seen Jupiter – and though I’ve certainly come a lot closer to a louse, it wasn’t nearly as close as Hooke. Yet I don’t doubt the existence of moons or lice; nor do I tend to doubt the diagnoses and prescriptions of a doctor.

What basis is there for this trust? Ten years ago, in September 2001, 13 of the leading medical journals, including The Lancet and the Journal of the American Medical Association, published an editorial titled “Sponsorship, authorship, and accountability” [subscription needed], written collectively by the journals’ editors. It announced certain changes to the section on publication ethics in the Uniform Requirements for Manuscripts Submitted to Biomedical Journals: Writing and Editing for Biomedical Publication, a document developed by the International Committee of Medical Journal Editors (ICMJE) that sets out its widely used policy.

Here’s the opening paragraph of the editorial: “As editors of general medical journals, we recognise that the publication of clinical-research findings in respected peer-reviewed journals is the ultimate basis for most treatment decisions. Public discourse about this published evidence of efficacy and safety rests on the assumption that clinical-trials data have been gathered and are presented in an objective and dispassionate manner. This discourse is vital to the scientific practice of medicine because it shapes treatment decisions made by physicians, and drives public and private health-care policy. We are concerned that the current intellectual environment in which some clinical research is conceived, study participants are recruited, and the data analysed and reported (or not reported) may threaten this precious objectivity.”

The intellectual environment they refer to is one in which the publication of a clinical trial in a high-profile journal “may be used to market drugs and medical devices, potentially resulting in substantial financial gain for the sponsor”. Such an environment, where the potential for bias is clear, threatens the essential basis for trust. The editors didn’t mince their words in spelling this out: “the use of clinical trials primarily for marketing, in our view, makes a mockery of clinical investigation and is a misuse of a powerful tool”.

The key change to the editorial policy was therefore designed to mitigate conflicts of interest between researchers and their sponsors. A requirement was introduced that authors sign a declaration fully disclosing details of their own and their sponsor’s involvement in a trial. Several journals also adopted the practice of asking at least one of the authors to accept responsibility for the way the trial was conducted, and to confirm that he or she had unrestricted access to data and was in control of the decision to publish.

These changes reflected a shift in the economics of research. “Many clinical trials are done to facilitate regulatory approval of a device or drug rather than to test a specific novel scientific hypothesis”, the editors noted. “The pharmaceutical industry has recognised the need to control costs and has discovered that private non-academic research groups – i.e., contract research organisations (CROs) – can do the job for less money and with fewer hassles than can academic investigators.” In the supposedly healthy capitalist competition between CROs and academic institutions for funding, sponsors are often in a position to influence the terms of a trial, “terms that are not always in the best interests of academic investigators, the study participants, or the advancement of science generally”.

In a further step to shore up the foundations of trust, the ICMJE initiated a policy in 2005 that required researchers to register a new trial and publish details of its design in a publicly available database (such as the WHO’s International Clinical Trials Registry) before the trial began. This openness is of course for the good of objective evidence-based science, since by publicly announcing the existence of a trial before its results are known, we ought to be able to avoid the skewed advancement of knowledge due to the selective publication only of positive outcomes. As another editorial by the ICMJE puts it: “Honest reporting begins with revealing the existence of all clinical studies, even those that reflect unfavorably on a research sponsor’s product.”

So where do we stand today? Unfortunately, it’s reasonable to assume that many trials worldwide are not registered before they start, and if their results turn out to be negative we remain none the wiser. Moreover, an unwillingness to publish negative results appears to be a growing trend across science as a whole. In a recent study, Daniele Fanelli, of the University of Edinburgh, looked at 4,600 scientific papers published between 1990 and 2007 and found a decrease in the number that contradicted a stated hypothesis. In an interview with Research Fortnight [subscription needed], Fanelli said it’s not clear whether the negative results are genuinely lacking or whether they are being hidden, but “this shows that we’re going towards more predictable research and biased results.”

Fanelli believes part of the problem lies with the system of citation-based prestige. As Research Fortnight reports, “Scientific journals, he says, are competing with each other to get more citations, and positive results are a way to collect them. On the other hand, he argues, researchers strive to get published in the highest-ranking journals to compete for jobs and grants.” In his paper, published in Scientometrics, Fanelli further argues that “A system that disfavours negative results not only distorts the scientific literature directly, but might also discourage high-risk projects and pressure scientists to fabricate and falsify their data”.

It’s a bleak assessment. Philip Campbell, editor in chief of Nature, also speaking to Research Fortnight, suggested that “some newer journals [such as PLoS One and Scientific Reports] that select papers more on technical thoroughness and methodology than on impact and results could offset the trend”. But the intellectual environment that so concerned those 13 editors in 2001 hasn’t gone away. I’ve had experience of seeking security clearance from a sponsor before publishing results and, given the current state of public funding for science, private sponsors are set to become an even more important source of research income.

This week Research Fortnight also reports on an analysis carried out by the Campaign for Science and Engineering (Case) in the UK that highlights “an alarming decline in funding despite political pledges to defend such investment” and accuses the government of “creative accountancy in efforts to meet its claim that it would maintain the science budget at £4.6 billion a year”. In its report, titled “Public funding of UK science and engineering: Putting government rhetoric to the test”, Case notes that “This pledge was made possible by redefining what the term ‘science budget’ meant. For instance, although capital funding for research equipment and facilities was slashed by almost half, such spending no longer counts towards the official science budget”. Commenting on the report, Martin Rees, former president of the Royal Society and member of Case’s advisory council, told Research Fortnight that “The message of this important survey is a disquieting one … The UK’s cost-effective science base and university system are at risk”.

Voltaire is said to have defined medical treatment as the art of pouring drugs of which one knows nothing into a patient of whom one knows less. That we know so much more about medicine than Voltaire is largely thanks to clinical trials and evidence-based medicine. But if we’re to know as much about what doesn’t work as what does, or occasionally to learn about treatments that turn out to work better than a sponsor’s sacred cash cow, all trials must be open and conducted with accountability. It’s been 10 years since the ICMJE’s first move to encourage this behaviour, but the need today is perhaps greater than ever.

Image: flickr | Images_of_Money
