I've gone over the Ioannidis paper multiple times (IIRC Linda even had a thread talking through it? Maybe that was another site though...). Admittedly, I'd be happy if someone could talk us through some of the math, as it's been some time since I studied statistics.
There were a few discussions on it in the old forum, as I recall. Not sure about this one.
In any case, perhaps you can tell me where this idea that we can accurately [assess] the extent of the rot comes from? Here are a few excerpts from the paper pointing to specific problems - it doesn't seem to say anything about whether the problem is under control or not:
Well, first of all, I said it was expected, not under control - if by "under control" you mean "not a problem". The entire point of Ioannidis' paper is that there are problems that increase the bias (i.e., error) of published findings. IIRC Ioannidis put the rate of false findings at around 50%, and that is consistent with other evaluations I've seen.
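Since the math came up above: the core of the paper is a positive predictive value (PPV) calculation - the probability that a claimed finding is actually true, given the pre-study odds, the error rates, and a bias term. Here's a rough sketch in Python of the formula as given in the paper (the `ppv` helper name and the example numbers are mine, purely for illustration):

```python
# Sketch of the PPV formula from Ioannidis (2005),
# "Why Most Published Research Findings Are False".
#   R     = pre-study odds that a probed relationship is true
#   alpha = Type I error rate (false positive)
#   beta  = Type II error rate (so power = 1 - beta)
#   u     = bias: fraction of analyses that would not otherwise
#           have been positive but get reported as positive anyway

def ppv(R, alpha=0.05, beta=0.2, u=0.0):
    true_positives = (1 - beta) * R + u * beta * R
    all_positives = R + alpha - beta * R + u - u * alpha + u * beta * R
    return true_positives / all_positives

print(ppv(R=1.0))          # 1:1 pre-study odds, no bias  -> ~0.94
print(ppv(R=0.25))         # 1:4 pre-study odds, no bias  -> 0.80
print(ppv(R=0.25, u=0.3))  # same odds with moderate bias -> ~0.39
```

The point being: with perfectly reasonable-looking parameters (80% power, modest pre-study odds, moderate bias), the PPV drops below 50% - which is roughly where that figure comes from.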
For example, in the article by Richard Horton, editor-in-chief of The Lancet, that you cite, he notes that these problems are now getting due attention; however, there has not been enough action.
Dr. Horton bemoans that "Those who have the power to act seem to think somebody else should act first". While I think he overlooks the efforts that are going on, he's probably right to an extent - there certainly needs to be more action, much more in fact. Especially when it comes to institutional change, it takes courage to be the frontrunner, and no one wants to make the situation worse!
The article highlights the difficulty of working out what those institutional changes should be: the competing priorities, the pros and cons of each proposed solution. Yes, change is urgently needed. But the reality is that it takes time, especially at the institutional level.
There is a LOT of work being done to study these issues. The Ioannidis paper has been cited almost 3800 times! I read somewhere that it is the most-cited paper on PLOS. These replication studies are important in assessing the problem and in identifying solutions. There are many papers across diverse fields making recommendations for improving research methods.
Even the Nature article you linked mentions Ioannidis's view on the problem of retractions:
Yes, the iceberg Ioannidis is referring to is the 50%, only a small percentage of which is fraud. My point was not that fraud isn't a problem, but that in terms of focusing resources and making real improvements, the focus should be on the rest of the 50% (for example, the problems discussed in those corollaries you cited).
Well, the problems mentioned in this thread go beyond failed replications. There's also the question of how often experiments have been accepted as truth despite later failures to replicate.
I highlighted my surprise that so many scientists seemed to be accepting studies as reliable when they really should not have been. This, in my opinion, is one of the most important discoveries of these replication studies.
Well, yes! Haven't we been talking about this throughout the discussion?
The article even notes that attempts to downplay the problem are, arguably, part of the problem:
I don't think we should be downplaying it - I've said so explicitly. And I agree that some degree of rhetoric is often necessary to galvanize action. But when trying to assess the situation accurately, or when discussing what should be done to address the issue, I think we need to step back and make a more sober assessment of the situation.
“What is not helping is a reluctance to dig into our past and ask what needs revisiting. Time is nigh to reckon with our past.”
There was a symposium on the reproducibility and reliability of biomedical research held in the UK in 2015.
Look at editor-in-chief Dr. Richard Horton's comments on the symposium:
I've discussed that above. And I agree that we need to dig into the past. My point was that this process is underway - it is not being ignored. There are thousands of papers on it!
Without massive replication efforts in every field, how do we know which results from the past are valid?
There is going to have to be a lot of going back and reviewing old results, and which ones to revisit will have to be decided by the researchers in each field. Again, the history of science is one of gradually gaining a better understanding of these issues and correcting wrong ideas. It's a continuing process. There will have to be a balance between going back and moving forward, hopefully using methodologies based on our current understanding of best practices.
Of course, this doesn't necessarily address other potential problems like fraud, the suppression of new ideas, resistance to retraction, etc.
There are lots of issues to address!
I think there is a nuance to my point of view that I may not have adequately expressed. I am not saying there aren't problems - there are, big ones. What I'm saying is that there have always been problems! The history of science is one of slowly, often very slowly, figuring out better ways to do science. And there will always be problems. No matter how good our understanding of best practices becomes, we have to expect a certain amount of imperfection in the system. We can express shock every time we learn that someone has not met this or that standard, but those situations are always going to happen - in any field, in any industry, in any activity for that matter.
For all the problems that have been identified over the last ten years, we're still in a better situation - science-wise - than we were ten years before that, and ten years before that, etc., etc. We may wish that science had plugged all its holes earlier, but is that realistic, given the nature of the task?