The Tip of an Iceberg of Wasted Time

Wednesday, May 22, 2019

Pharma blogger Derek Lowe excerpts a paper from Nature, a preeminent science journal:

Image by Lucas Vasquez, via Unsplash, license.
[I]n two decades, we will look back on the past 60 years -- particularly in biomedical science -- and marvel at how much time and money has been wasted on flawed research...

...many researchers persist in working in a way almost guaranteed not to deliver meaningful results. They ride with what I refer to as the four horsemen of the reproducibility apocalypse: publication bias, low statistical power, P-value hacking and HARKing (hypothesizing after results are known). My generation and the one before us have done little to rein these in.
The "publication bias ... against negative results" reminds me of a lost year of my life as a postdoc long ago. The PI ended up having his right-hand man run the same experiments I was running as a pilot, for obvious reasons. He also got negative results. This vindicated me, but netted me a grand total of ... zero publications ... for all that work.

Lowe also mentions something that will seem comical at first:
P-hacking is another scourge. And sadly, it's my impression that while some people realize that they're doing it, others just think that they're, y'know, doing science and that's how it's done. They think that they're looking for the valuable part of their results, when they're actually trying to turn an honest negative result into a deceptively positive one (at best) or just kicking through the trash looking for something shiny (at worst). I wasn't aware of the example that Bishop cites of a paper that helped blow the whistle on this in psychology. Its authors showed that what were considered perfectly ordinary approaches to one's data could be used to show that (among other things) listening to Beatles songs made the study participants younger. And I mean "statistically significantly younger". As they drily termed it, "undisclosed flexibility" in handling the data lets you prove pretty much anything you want. [links omitted]
This is funny ... for the few seconds before you consider the cost in human lives in at least the following forms: the time spent earning the money, often taken as taxes, to pay for it; the time of the scientists involved; the time wasted in the attempt to apply such "results"; and -- when the government sees a regulatory interest -- time wasted by anyone coerced into following a law or regulation excused by such "results."
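The "undisclosed flexibility" problem is easy to see with a little arithmetic: run enough tests on pure noise and some will come up "significant" by chance alone. The sketch below is purely illustrative (it is not from the paper Lowe cites); the variable names and the choice of twenty outcome measures are my own assumptions. It compares two groups drawn from the same distribution across many outcome variables, the way a p-hacker quietly might, using a simple two-sample z-test from Python's standard library:

```python
# Illustrative sketch of p-hacking: on pure noise, testing enough
# outcome measures almost guarantees at least one "significant" result.
# All names and parameters here are hypothetical choices for the demo.
import random
from statistics import NormalDist, mean, stdev

random.seed(1)  # reproducible noise

def z_test_p(a, b):
    # Two-sample z-test using a normal approximation -- crude, but
    # adequate for a sketch with 30 samples per group.
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

n_measures = 20  # the analyst quietly tries 20 outcome variables
# Both "treatment" and "control" are drawn from the SAME distribution,
# so any "effect" found below is a false positive by construction.
group_a = [[random.gauss(0, 1) for _ in range(30)] for _ in range(n_measures)]
group_b = [[random.gauss(0, 1) for _ in range(30)] for _ in range(n_measures)]

p_values = [z_test_p(a, b) for a, b in zip(group_a, group_b)]
hits = sum(1 for p in p_values if p < 0.05)

print(f"'significant' findings from pure noise: {hits} of {n_measures}")
# Analytically: the chance of at least one false positive across 20
# independent tests at alpha = 0.05 is 1 - 0.95**20, about 64%.
print(f"chance of >= 1 false positive: {1 - 0.95 ** 20:.2f}")
```

Report only the lucky measure and bury the other nineteen, and an honest null result becomes a publishable "finding" -- which is exactly the practice the excerpt describes.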

What Lowe reports is a travesty, but it isn't the half of it.

-- CAV


Today: Fixed typo in image attribution link. 


Dinwar said...

The bias against publishing negative results would be fairly easy to fix. One could establish a "Journal of Negative Results," which publishes ONLY experiments that failed to reject the null hypothesis, failed to produce statistically meaningful results, or otherwise failed.

Most authors discussing these issues miss another important consequence of the failure to publish negative results: scientists may waste time reproducing an experiment that has already been tried and failed. Currently, the only way to learn about failed methodologies is word of mouth, and if you don't hear about a failure that way, you can easily waste time and money on an experimental protocol that's been tried before--even numerous times--and failed. A journal specifically dedicated to failed experiments and studies would be a fantastic way to make them part of the scientific literature, so we all have access to it.

I think part of the issue is cultural. The culture views science as giving us The Answer, so any incorrect ideas are to be discarded and ignored. The reality is, science is a discussion--a multi-generational, multicultural back-and-forth among experts, with the absolute guiding principle that everything we say must be grounded in facts of reality. Viewed that way, an experiment that doesn't work isn't a failure; it's a legitimate contribution to the discussion.

Gus Van Horn said...


True, and even just the prospect of having one's experiment flagged as not reproducible might make lots of people think twice before publishing.