I am saying that a lot of the problems with science are not exactly statistical.
I agree, but I was trying to determine if you agree with me about the importance of sufficiently powered studies and preregistration.
For example, suppose we take a theory X (special relativity say) and we build a more general theory (general relativity say) on top of that, and on top of that we build yet more layers - the Standard Model and String Theory all in the space of just over 100 years.
The problem is that any one of those steps can be faulty: it may turn out that several theories could explain the same data, but only one was known at the time the decision was made. If so, all the subsequent work built on that step is suspect. In practice, though, scientists feel awfully uncomfortable if you discuss possible alternatives to one of the early theories, because it would invalidate so much other work - so they tend to scoff at such proposals and suppress them.
I agree that there are always scoffers. Of course I also think there are always those looking to make their mark by discovering an error or making a big find that changes everything.
We know that it is part of the human condition to be biased and to be attached to our current views. This applies to us all. So there's little value, in my opinion, in accusing someone else of it, because without a doubt the accuser suffers from it too, and likely does his or her own bit of scoffing. So let's put this aside - I'm not sure it takes us anywhere (at least it never seems to have, so what is the likelihood it will going forward?). All these accusations do, imo, is make people dig their heels in further.
We know that this bias can be overcome and views can change. The question is: what is the best way to achieve that (both in others and in ourselves)? My suggestion is to shift our focus away from motive-questioning and personal attacks, towards methodology designed to help us do end runs around our biases.
Another example would be the series of steps in cosmology that are supposed to justify using redshifts to measure distances - we have discussed that before. If you pile too many shaky steps on top of each other you get a house of cards, but the normal scepticism of science gets overruled by the instinct for self-preservation! The house of cards becomes untouchable.
I agree that scientists are as biased in favour of their pre-existing views as the rest of us, but I've never seen any clear evidence that big discoveries, including those that overturn previous findings, result in scores of scientists losing their jobs, or result in less work for scientists. In fact, don't new discoveries, even those that overturn previous findings, tend to result in more science being done? Meaning more jobs for scientists?
And as much as scientists, again like the rest of us, can scoff at alternative views, is it actually true that they simply ignore them? For example, if I correctly recall our previous redshift discussion, when I looked into whether scientists had seriously examined the theory you accused them of ignoring, I seem to recall finding a bunch of papers giving it a serious examination. It's easy to forget, amongst the scoffing, that a lot of serious critique also gets done (all the more reason not to scoff, as it distracts from the real work).
Humans are always going to be biased. They can't be condemned for that. Rather, in evaluating whether their rejection of this or that idea is warranted we need to look closely at how they came to that opinion.
The only answer, I think, is to develop some branches of science much more slowly - to accept that each theory is tentative for much longer and be ultra careful not to build on sand.
I agree with this sentiment, but I think it's important to add some meat on the bones here. What does moving more slowly mean in practical terms? To me, this means setting certain evidentiary standards that are required before building.
This entails, I suggest, refraining from placing confidence in findings until they have been demonstrated using suitably rigorous methodology. And it may also mean going back to previous findings when we improve our understanding of what the most reliable methodologies are.
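To put one concrete number on what "suitably rigorous" can mean: a study should be large enough to have a reasonable chance of detecting the effect it is looking for. The sketch below is a minimal illustration, not a prescription - it uses the standard normal approximation for a two-group comparison, with Cohen's d as the (assumed) effect-size measure and conventional choices of significance level and power:

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants needed per group for a two-group
    comparison, via the normal approximation to the t-test.

    effect_size: expected standardized difference (Cohen's d)
    alpha:       two-sided significance level
    power:       desired probability of detecting a true effect
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_beta = NormalDist().inv_cdf(power)           # power quantile
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)

# A "medium" effect (d = 0.5) at alpha = 0.05 and 80% power:
print(sample_size_per_group(0.5))  # -> 63 per group
```

The point of the exercise is the shape of the curve: halving the expected effect size quadruples the required sample, which is exactly why small studies chasing small effects so often fail to replicate.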
In this "replication crisis" that has been mentioned a lot on the forum recently, I'm less concerned with the fact that many experiments failed to replicate (I think that is to be expected even when things are working properly) than with the fact that apparently so many working scientists were willing to base further work on results that had not been reliably tested!
I suggest that it is the focus on best practices that will help get new ideas accepted in the face of bias. And that is how ideas that challenge the existing views will justifiably overcome them.