The current issue of Nature contains a damning article on the unreliability of results from cancer research.
The scientific community assumes that the claims in a preclinical study can be taken at face value — that … the main message of the paper can be relied on and the data will, for the most part, stand the test of time. Unfortunately, this is not always the case. ….
Over the past decade, before pursuing a particular line of research, scientists (including C.G.B.) in the haematology and oncology department at the biotechnology firm Amgen in Thousand Oaks, California, tried to confirm published findings related to that work. Fifty-three papers were deemed ‘landmark’ studies [because they were published in prestigious journals and received multiple citations]. It was acknowledged from the outset that some of the data might not hold up …. Nevertheless, scientific findings were confirmed in only 6 (11%) cases. Even knowing the limitations of preclinical research, this was a shocking result. ….
[A]n attempt was made to contact the original authors, discuss the discrepant findings, exchange reagents and repeat experiments under the authors’ direction, occasionally even in the laboratory of the original investigator.
C. Glenn Begley & Lee M. Ellis, “Drug development: Raise standards for preclinical cancer research”, Nature 483 (29 March 2012), pp. 531–533.
Writing for Nature, the authors were careful to assert “These investigators were all competent, well-meaning scientists who truly wanted to make advances in cancer research”.
Conversing with a journalist, Begley was more candid:
Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.
“We went through the paper line by line, figure by figure,” said Begley. “I explained that we re-did their experiment 50 times and never got their result. He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning.”
Sharon Begley, “In cancer science, many ‘discoveries’ don’t hold up”, Reuters, 28 March 2012.
Ms Begley, the reporter, is not related to C. Glenn Begley.
Publication of poor, non-replicable research results is not limited to medicine, or to biology. In many fields, researchers engage in sensationalism to get published in a high-profile journal and further their careers. At least in the experimental sciences, there is in principle the possibility of attempting to replicate research results. In non-experimental quasi-sciences like economics, we don’t even have that luxury. Data-mining is rampant in econometrics. Caveat emptor.
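The selective reporting in Begley’s anecdote (six runs, one “hit” published) and the data-mining complaint are both instances of the same arithmetic: run enough null experiments and some will look significant by chance. A minimal sketch, assuming a deliberately toy model (fair coin flips standing in for a null-true experiment; the function names and parameters are illustrative, not anyone’s actual protocol):

```python
import random

random.seed(42)

def significant(n_flips=100):
    """One 'experiment': flip a fair coin n_flips times and test whether
    the head-count deviates 'significantly' from 50% at the 0.05 level."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    # Normal approximation: mean = n/2, sd = sqrt(n * 0.25).
    z = abs(heads - n_flips * 0.5) / (0.25 * n_flips) ** 0.5
    return z > 1.96  # two-sided p < 0.05

# Even though every experiment here is pure noise, roughly 1 in 20
# crosses the significance threshold. A researcher who runs many such
# experiments and reports only the hits manufactures 'discoveries'.
trials = 10_000
hits = sum(significant() for _ in range(trials))
print(f"Fraction 'significant' under the null: {hits / trials:.3f}")
```

The point is not the exact rate (the discrete binomial makes it wander a bit around 5%) but that a publishable-looking result is guaranteed to turn up eventually if negative runs are quietly discarded.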
HT Arnold Kling