Critiques of Science as Currently Practiced

You always seem to approach it with a glass-half-empty attitude. Sure, there are some bad eggs in science, as there are in every area of life. And yes, it can be disappointing when we find out certain findings weren't as firm as some once thought. But most of all, what these articles show is that doing science in the most rigorous manner is damn hard!
The problem is that scientists base their experiments and ideas on the results of others - there is a gigantic web of ideas all linked together. That means that a few bad apples can poison the web out of all proportion to their numbers. Think for a moment of the consequences if Halton Arp's ideas turn out to be true - masses of otherwise excellent work would become meaningless because the method used to measure astronomical distances was wrong.

A lot of the problem is the pressure to publish. Every scientist feels that - whether good or bad - and that limits the amount of effort that can be put into replications.

I think science needs to move far more cautiously if it is to operate properly. By a curious twist of fate, it is ψ that has probably moved the slowest (because it constantly has to defend its key experiments). For example, replications of ESP experiments are still being done, and it is the robustness of the various experiments that I find interesting.

David
 

Yes, I agree that science should proceed cautiously; it's a theme I've echoed a number of times on this forum.

And yes, there are always going to be pressures to cut corners, etc. That is where the importance of methodology comes in.

The problem is not that many experiments fail to be replicated. That's to be expected. The problem is other scientists relying too much on experiments that have not been replicated. The mistake is believing that just being a good scientist and taking care should somehow guarantee reliable results. The problem is not that underpowered experiments are not replicated; the problem is relying on underpowered experiments.
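
To illustrate what "underpowered" means in practice, here is a minimal simulation sketch (the effect size, sample sizes, and threshold below are assumed for illustration, not drawn from any study discussed in this thread). With a small true effect and small samples, most honest attempts fail to reach significance, so a single "successful" run tells you very little:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

true_effect = 0.2      # assumed small true effect (Cohen's d)
n_sims = 5000          # simulated experiments per condition

def significant_fraction(n_per_group):
    """Fraction of simulated two-group experiments reaching p < 0.05."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(true_effect, 1.0, n_per_group)
        _, p = stats.ttest_ind(treated, control)
        hits += p < 0.05
    return hits / n_sims

print(f"power with n=20 per group:  {significant_fraction(20):.2f}")   # roughly 0.09
print(f"power with n=400 per group: {significant_fraction(400):.2f}")  # roughly 0.80
```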

What we see coming out of these meta-studies is a better understanding of just how important measures such as pre-registration, replication - and not just replication, but replication with sufficient power - are. Parapsychology doesn't escape these issues. We see a lot of the same red flags there as well. You mention the robustness of psi studies; I assume you're talking about the number of studies rather than the size of the studies.

This isn't a crack against parapsychology or any of those other sciences. As I've said, it takes time to learn what works and what doesn't. I've been pushing the Ioannidis paper almost since the moment I joined Skeptiko. It's taken a while for the findings in that paper, and similar ones, to get the attention they deserve.

Your comment about proceeding cautiously is well taken. It is a message that is well worth pushing.

But I suggest the solution lies in focusing on methodology. Our human instincts aren't going to go away. The goal is to work around them, not give in to them.
 
I think a lot of the issues are not really about statistics. For example, if you use a cell line that is supposed to be cancer cells of a particular type, and it has become contaminated with something else that grows faster, then over time it won't contain much of the original cell type at all! The experiments aren't statistically invalid - they are just wrong. Likewise, although Halton Arp's interpretation of redshifts requires some statistical analysis, if he is right, a whole mass of other work becomes invalid.

That is not to say that bad statistics don't foul things up further :)

In one of Sci's quotes above, a scientist admitted to doing an experiment 6 times and getting his published result on just one occasion. That may well not have been a statistical issue at all, but some sort of contamination, etc.

If people really can influence the growth of cells in test tubes using thought alone, it is even possible that part of the problem is that psychic effects are at work. Someone desperately wants some cancer cells to die when he adds a certain drug, and he wills them to die without realising it!
What we see coming out of these meta-studies is a better understanding of just how important measures such as pre-registration, replication - and not just replication, but replication with sufficient power - are.
You can't pre-register all studies. For example, a chemist may want to make X and try reacting A and B because they could react to make X, but maybe it depends on the exact temperature, or the kind of solvent used, etc. You find these things out by trial and error, but the validity of the result depends on the characterisation of the final product using NMR, Mass Spectrometry, etc.

The bottom line is that you need quality researchers who are prepared to work slowly (e.g. plenty of repetitions) and treat theories as tentative for far longer than is currently the case. I also suspect that it would help to rein in the enthusiasts for modelling reality in terms of 10-dimensional spaces, etc. Maths should be used to figure out the consequences of a theoretical suggestion (as was the case with Newton's laws), not to formulate theories.

David
 
I think a lot of the issues are not really about statistics.

To be clear, the studies that I'm talking about go through more than just the couple of points I've highlighted in this thread. I didn't mean for my post to represent the full extent of the issues to be concerned about.

In one of Sci's quotes above, a scientist admitted to doing an experiment 6 times and getting his published result on just one occasion. That may well not have been a statistical issue at all, but some sort of contamination, etc.

This is a good case study for the importance of pre-registration. If only one study gets published and the author does not mention the other 5 in the paper, one can easily go and check the registry and see that a lot of information has been left out. This should raise a red flag that the results of the published paper should be taken cautiously, and inquiries should be made of the author about the other attempts.
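
A back-of-the-envelope sketch of why those unreported attempts matter: assuming each attempt uses the conventional p < 0.05 threshold and the true effect is zero, the chance of at least one "publishable" result in six tries is already substantial:

```python
alpha = 0.05     # conventional significance threshold
attempts = 6     # number of attempts, as in the anecdote above

p_at_least_one = 1 - (1 - alpha) ** attempts
print(f"P(at least one 'significant' result in {attempts} null experiments) = {p_at_least_one:.1%}")
# prints ~26.5%; pre-registration is what makes the other five attempts visible
```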

If people really can influence the growth of cells in test tubes using thought alone, it is even possible that part of the problem is that psychic effects are at work. Someone desperately wants some cancer cells to die when he adds a certain drug, and he wills them to die without realising it!

This is a tricky one to test for (I think we've discussed some studies on this issue), but it would seem that even if we accept this in principle, the research would indicate that the effect is quite small at best (from what I recall; correct me if I'm wrong). I'm not sure it is possible to prevent scientists from having strong feelings about what they hope the results to be, but at the very least, one could calculate a sample size that would be of sufficient power to overcome whatever small effect might result from this phenomenon. It is also possible that the small effects found in many parapsychological studies are actually measuring the effect of various biases. A sufficiently powered study can also help overcome that, as I understand it. Part of the solution for small biases or small psi effects seems to be the same: have studies with sufficient power to overcome the effect of small contaminants.
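
For concreteness, a minimal sketch of that sample-size calculation using the statsmodels power module; the effect size d = 0.1 is a placeholder chosen only to represent the kind of small bias-or-psi effect being discussed:

```python
from statsmodels.stats.power import TTestIndPower

# solve for the per-group sample size needed to detect d = 0.1
# with the usual alpha = 0.05 at 80% power
n_per_group = TTestIndPower().solve_power(effect_size=0.1, alpha=0.05, power=0.8)
print(f"required n per group: {n_per_group:.0f}")  # ~1571
```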

You can't pre-register all studies. For example, a chemist may want to make X and try reacting A and B because they could react to make X, but maybe it depends on the exact temperature, or the kind of solvent used, etc. You find these things out by trial and error, but the validity of the result depends on the characterisation of the final product using NMR, Mass Spectrometry, etc.

You're just talking about exploratory studies. No reason they can't be pre-registered. There's nothing wrong with a study that sets out to try a bunch of stuff and see what happens. But even if some exploratory experiments are not conducive to pre-registration, the upshot should be the same: we shouldn't be relying too much on exploratory studies!

The bottom line is that you need quality researchers who are prepared to work slowly (e.g. plenty of repetitions) and treat theories as tentative for far longer than is currently the case. I also suspect that it would help to rein in the enthusiasts for modelling reality in terms of 10-dimensional spaces, etc. Maths should be used to figure out the consequences of a theoretical suggestion (as was the case with Newton's laws), not to formulate theories.

David

Again, I take no issue with caution.

I'm not quite sure if you are disagreeing with me or simply highlighting different issues. I agree that good research involves taking great care and should proceed slowly, so we're on the same page there. I'm just highlighting that even the best researchers are human, and are therefore subject to error and bias. Are we on the same page that in order to produce the most reliable results, even the most careful researchers should employ methodologies of the type laid out by Ioannidis and similar researchers? Am I right that you're not saying that sufficiently powered studies are not important, or that we should not pre-register studies?
 
Replication and reproducibility in psychology: The debate continues

Concerns that scientific results are often not reproducible are by no means limited to the field of psychology. Worries about the lack of replication in psychology, without any hard evidence as to the scale of the problem, led Brian Nosek and colleagues to launch the Reproducibility Project in 2011. Last year the Open Science Collaboration published the results of this large-scale, collaborative effort to estimate the reproducibility of psychological science. The project (which conducted replications of 100 studies published in three psychology journals) reported that whilst 97 of the 100 original studies reported statistically significant results, only 36% of the replications did so, with a mean effect size of around half of that reported in the original studies. Other projects have since sought to tackle the same issue, with replication rates differing considerably across projects.

So should we despair at these findings? Or take this as an opportunity to help overcome the challenges of low reproducibility? Such was the topic of the #PsycDebate, which drew an audience of psychologists, students and researchers to the replication and reproducibility in psychology event.
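
The halving of effect sizes on replication is exactly what selective publication predicts (the "winner's curse"). A hedged simulation sketch, with assumed numbers rather than the project's actual data: if only significant results get "published", the published effects systematically overshoot the truth:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n = 0.2, 30       # assumed small true effect and modest sample size
published = []

for _ in range(20000):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_d, 1.0, n)
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05 and t > 0:                     # only clear "wins" get published
        published.append(treated.mean() - control.mean())  # population sd is 1

print(f"true effect:           {true_d}")
print(f"mean published effect: {np.mean(published):.2f}")  # typically ~0.6, about triple the truth
```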
 
Am I right that you're not saying that sufficiently powered studies are not important, or that we should not pre-register studies?
I am saying that a lot of the problems with science are not exactly statistical.

For example, suppose we take a theory X (special relativity, say) and we build a more general theory (general relativity, say) on top of that, and on top of that we build yet more layers - the Standard Model and string theory - all in the space of just over 100 years.

The problem is that one of those steps can be faulty, because it may turn out that several theories can explain the same data, but only one was known at the time when the decision was made. If that happens, all the subsequent work is faulty. However, in practice scientists feel awfully uncomfortable if you discuss possible alternatives to one of the early theories, because it would invalidate so much other work - so they tend to scoff at such work and suppress it.


Another example would be the series of steps in cosmology that are supposed to justify the use of redshifts as measures of distance - we have discussed that before. If you pile too many shaky steps on top of each other you get a house of cards, but the normal scepticism of science gets overruled by the instinct for self-preservation! The house of cards becomes untouchable.

Another problem, I think, is that predictions from new theories aren't always as impressive as they look. For example, GR predicts the bending of light around a star with this formula:

θ = 4GM/(c^2 R)

If you get agreement (within experimental error) with this formula, that may look very impressive, but another theory would likely produce a very similar formula because of the need for the units to match up. Indeed, using Newton's law and treating the light as particles undergoing simple gravitational deflection, you get the same formula except that it has half the value (2 instead of 4)!
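
A quick numerical check of that claim, using standard solar values; the factor of two really is the only difference between the two predictions for light grazing the Sun:

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30     # solar mass, kg
c = 2.998e8      # speed of light, m/s
R = 6.957e8      # solar radius, m
RAD_TO_ARCSEC = 206265

theta_gr = 4 * G * M / (c**2 * R)      # general relativity's prediction
theta_newton = 2 * G * M / (c**2 * R)  # Newtonian light-as-particles value

print(f"GR:        {theta_gr * RAD_TO_ARCSEC:.2f} arcsec")      # ~1.75
print(f"Newtonian: {theta_newton * RAD_TO_ARCSEC:.2f} arcsec")  # ~0.88
```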

Indeed, here is a suggestion from one of the designers of the GPS satellites that, far from confirming GR, these satellites show that it is wrong. The video is extremely dense, and I can't follow much of it, but you can get some of the gist without following the equations:


This may mean that the chain of theories discussed above may break at the second stage - invalidating a mass of subsequent stuff!

The only answer, I think, is to develop some branches of science much more slowly - to accept that each theory is tentative for much longer and be ultra careful not to build on sand. It may also be dangerous to develop theories that are primarily only relevant to situations we can't access directly - e.g. very massive objects. The problem is that when we look into the sky all kinds of assumptions have to be made before things can be identified, and it may be possible to fit the data to all sorts of incorrect theories - as was done with the epicycle theory of cosmology.

David
 
Good podcast with Roger Penrose as a guest exploring the limits of science:

http://www.bbc.co.uk/programmes/b07gf4rw
A number of things struck me about the programme. First was how difficult the individuals found it to put their concepts into layman's language. Second, how unlike non-scientists most were. Third, how confident they were that something as intractable as consciousness would yield to investigation. Fourth, how like a priest's is the manner of Roger Penrose.
If steve001's point that our interests are the result of existential crisis has any merit, physicists as a group are in the grip of galloping narcissism. I found all their conclusions interesting until the personalities intervened, and then I wondered how much their modelling was a reflection of their own view of the universe. I find it hard to believe that nature or God is a boffin, and consequently that boffinry is unlikely to be the best tool kit to unpick its workings.
 
I am saying that a lot of the problems with science are not exactly statistical.

I agree, but I was trying to determine if you agree with me about the importance of sufficiently powered studies and pre-registration.

For example, suppose we take a theory X (special relativity, say) and we build a more general theory (general relativity, say) on top of that, and on top of that we build yet more layers - the Standard Model and string theory - all in the space of just over 100 years.

The problem is that one of those steps can be faulty, because it may turn out that several theories can explain the same data, but only one was known at the time when the decision was made. If that happens, all the subsequent work is faulty. However, in practice scientists feel awfully uncomfortable if you discuss possible alternatives to one of the early theories, because it would invalidate so much other work - so they tend to scoff at such work and suppress it.

I agree that there are always scoffers. Of course I also think there are always those looking to make their mark by discovering an error or making a big find that changes everything.

We know that it is part of the human condition to be biased and to be attached to our current views. This applies to us all. So there's little value, in my opinion, in accusing someone else of it, because without a doubt the accuser suffers from it too, and likely does his or her own bit of scoffing. So let's put this aside - because I'm not sure it takes us anywhere (at least it doesn't seem to ever have, so what is the likelihood it will going forward?). All these accusations do, imo, is make people dig their heels in further.

We know that this bias can be overcome and views can change. The question is: what is the best way to achieve that (both in others and in ourselves)? My suggestion is to shift our focus from motive-questioning and personal attacks towards methodology designed to help us do end runs around our biases.

Another example would be the series of steps in cosmology that are supposed to justify the use of redshifts as measures of distance - we have discussed that before. If you pile too many shaky steps on top of each other you get a house of cards, but the normal scepticism of science gets overruled by the instinct for self-preservation! The house of cards becomes untouchable.

I agree that scientists are as biased in favour of their pre-existing views as the rest of us, but I've never seen any clear evidence that big discoveries, including those that overturn previous findings, result in scores of scientists losing their jobs, or result in less work for scientists. In fact, don't new discoveries, even those that overturn previous findings, tend to result in more science being done? Meaning more jobs for scientists?

And as much as scientists, again like the rest of us, can scoff at alternative views, is it actually true that they simply ignore them? For example, if I correctly recall our previous redshift discussion, when I looked into whether scientists had seriously examined the theory you accused them of ignoring, I seem to recall finding a bunch of papers giving it a serious examination. I think it's easy to forget amongst the scoffing that there is also often a lot of serious critique being done (more the reason not to scoff, as it serves to distract from the real work).

Humans are always going to be biased. They can't be condemned for that. Rather, in evaluating whether their rejection of this or that idea is warranted we need to look closely at how they came to that opinion.

The only answer, I think, is to develop some branches of science much more slowly - to accept that each theory is tentative for much longer and be ultra careful not to build on sand.

I agree with this sentiment, but I think it's important to put some meat on the bones here. What does moving more slowly mean in practical terms? To me, this means setting certain evidentiary standards that are required before building.

This entails, I suggest, withholding confidence in findings until they have been demonstrated using suitably rigorous methodology. And it may also mean going back to previous findings when we improve our understanding of what the most reliable methodologies are.

In this "replication crisis" that has been mentioned a lot on the forum recently, I'm less concerned with the fact that many experiments failed to replicate (I think that is to be expected even when things are working properly) than I am with the fact that apparently so many working scientists were willing to base further work on results that had not been reliably tested!

I suggest that it is the focus on best practices that will help get new ideas accepted in the face of bias, and that that is how ideas that challenge the existing views will justifiably overcome them.
 
Special education professor advocates for steps to combat replication crisis in research

Replication of scientific findings has been a cornerstone of validating research for generations, yet it happens so infrequently that many have claimed science is in a replication crisis. A University of Kansas special education professor has co-authored a study on replication, its effects on the field and students, and suggests a more dynamic approach to research could help address the paucity of replication.

Jason Travers, assistant professor of special education, was lead author of an article examining how the lack of research replications can negatively affect special education, and he argues that single-case experimental research design can complement group experiments to address the shortage. The study is the lead article in a special issue of the journal Remedial and Special Education—edited by professors Kathleen Lane and Karrie Shogren of KU—dedicated to the replication crisis.

Like nearly all academic fields, special education researchers face several obstacles that make replicating research findings a challenging prospect. The article points out several of these, such as publishers' bias toward novel findings, unrealistic expectations for researchers to publish a high quantity of original studies, a bias among the public and researchers for "striking" findings, lack of research funding for replication studies and many others.
Travers co-authored the article with Bryan Cook of the University of Hawaii, William Therrien of the University of Virginia and Michael Coyne of the University of Connecticut.

While those problems are not unique to special education, they are troublesome because the field regularly develops interventions that educators will use for children with all manner of disabilities throughout the nation, Travers said. Without rigorous verification of previous findings, children could be subjected to interventions that are not truly effective or possibly even harmful.
 
Isn't replication of findings often considered to be one of the major pillars of the scientific method? If there is a lack of replication occurring, why does it seem that almost nobody cares outside of this forum?
I think you can read a lot of articles about this crisis in science, but somehow the whole circus goes on!

For example, back in 2007, the BBC devoted a whole radio program to major flaws in cancer research:

http://news.bbc.co.uk/1/hi/programmes/file_on_4/7098882.stm

A radio program in 2010 indicated that this was still a problem, and here we see reference to the same problem in 2014:

http://www.nature.com/bjc/journal/v111/n6/full/bjc2014166a.html
Problems associated with cell culture, such as cell line misidentification, contamination with mycoplasma and genotypic and phenotypic instability, are frequently ignored by the research community. With depressing regularity, scientific data have to be retracted or modified because of misidentification of cell lines. Occult contamination with microorganisms (especially mycoplasma) and phenotypic drift due to serial transfer between laboratories are frequently encountered. Whatever the nature of the cell culture operation, large or small, academic or commercial, such problems can occur. The aim of these guidelines, updated from the previous edition of 1999, subsequently published in the British Journal of Cancer (UKCCCR, 2000), is to highlight these problems and provide recommendations as to how they may be identified, avoided or, where possible, eliminated.

David
 
A number of things struck me about the programme. First was how difficult the individuals found it to put their concepts into layman's language. Second, how unlike non-scientists most were. Third, how confident they were that something as intractable as consciousness would yield to investigation. Fourth, how like a priest's is the manner of Roger Penrose.
If steve001's point that our interests are the result of existential crisis has any merit, physicists as a group are in the grip of galloping narcissism. I found all their conclusions interesting until the personalities intervened, and then I wondered how much their modelling was a reflection of their own view of the universe. I find it hard to believe that nature or God is a boffin, and consequently that boffinry is unlikely to be the best tool kit to unpick its workings.
I enjoyed it, but it's worth remembering that the nature of the show means that the guests always have something to promote (new book, show, etc.) ;)
 
http://www.dailymail.co.uk/health/a...tients-stop-taking-vital-drug-lives-risk.html

More than 200,000 patients stopped taking statins because of fears over side-effects, experts said last night.

They estimate that as a result at least 500 lives will be lost by 2024.

500 lives lost may be a fair price to pay for a reduction in side effects, but I've no idea how we arrive at the moral equation for this. Would 10 lives lost be acceptable? 100,000? I've no idea...

[Edit: I've used the Daily Mail as a source here as I believe it's David's favourite newspaper ;)]
 
Was it due to actual side effects, or a fear of side effects? The medical literature is pretty clear that a certain proportion of patients will experience side effects, even bad ones, and it seems perfectly reasonable to stop taking them in those cases. But if it is fear of side effects that is the primary reason one decides not to take them, then perhaps they should try them out first to see if they are among the unlucky (note: this reasoning only applies if fear of side effects is the main reason for not wanting to take them; there can be other reasons someone may have for not taking them).

(My doctor hasn't brought up statins for me yet, so I haven't ever tried them myself).
 
William Reville: Fraud is now the biggest enemy of science

"Scientists are not required to subscribe to any universal code of ethics. This needs to change"

...recent announcement (April 27th 2016) by Julianna LeMieux in the American Council on Science and Health bulletin caught my eye: “Full of shite: why a fecal transplant paper was retracted”. The paper was retracted because it contained fraudulent data.

The retraction is just one eye-catching example of the type of misconduct that is plaguing science. If this misconduct isn’t confronted and eliminated, the entire scientific enterprise could crash.

Before I continue, a brief word about fecal transplants. The human gut is colonised by a large and complex mixture of micro-organisms, mostly bacteria (the average adult gut hosts about 1.2 kg of bacteria). This gut microbiome performs many vital functions for its human host, for example helping our digestion, training the immune system, producing vitamins and helping to regulate appetite.

Health problems can arise if the balance between the different types of bacteria in your gut gets disturbed. It is known that the bacterial spectrum in the gut of obese people is different to the spectrum in non-obese people. The question arises: are the bacteria in the obese gut causing the obesity or does obesity change the bacterial composition of the gut?

In the research in question, this was investigated by performing fecal transplants from obesity-prone and from obesity-resistant rats into rats that had no bacteria at all in their guts. This is now a particularly hot research area, and there is keen competition to publish papers. This work was published in Obesity (2014). The paper was retracted in May 2016 with the admission that one of the authors had forged data.

Misconduct in science is a huge problem. A review of the 2,047 biomedical and life science articles included in PubMed as retracted on May 3rd, 2012, revealed that only 21.3 per cent of the retractions were attributable to error. Sixty-seven per cent of retractions were attributable to misconduct, including fraud and suspected fraud (43.4 per cent), duplicate publication (14.2 per cent) and plagiarism (9.8 per cent). The percentage of scientific articles retracted because of fraud has risen tenfold since 1975 (Proceedings of the National Academy of Sciences).

Such misconduct is not confined to physics, chemistry and biology but is also widespread across the “softer” sciences such as psychology. Misconduct in psychology is reviewed by Tom Farsides and Paul Sparks in an article in the Psychologist of May 2016 arrestingly called “Buried in Bullshit”.
 

In addition to the fraud article I just posted above, all of this seems to indicate science has a long way to go to shape itself up.

I think children should even be made aware of these potential issues with fraud/bias/replication/etc so that they can better question results, which in turn would mean science journalism would also have to up its game.
 
This is the retraction study: http://www.pnas.org/content/109/42/17028.full#F1

There's no question that vigilance against fraud in science is essential, but in terms of assessing the situation regarding misconduct it is important to note that these 2,000 studies are a tiny fraction of the total number of papers published during the relevant period. We're talking a fraction of a percent. Even assuming that that number underrepresents the amount of fraud out there, we're still talking about a tiny percentage of the roughly 50% of papers whose findings are false (Why Most Published Research Findings Are False).
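
A rough version of that arithmetic; the corpus size is an assumed round figure for scale, not a number taken from the retraction study:

```python
retracted = 2047             # retracted articles reviewed in the PNAS study
total_articles = 20_000_000  # assumed round figure for the PubMed corpus, not sourced

print(f"retracted fraction: {retracted / total_articles:.4%}")  # ~0.01% of the literature
```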

Fraud is a problem in science, as it is in all industries, but to focus on fraud as the greatest problem facing science is to ignore the much, much bigger problem that comes from methodological biases.

So by all means, let's increase our efforts to identify and discourage fraud, and we should celebrate if we start to see the percentage of misconduct retractions decline. But we should not expect to see much improvement in the reliability of scientific research as a whole without much more emphasis on methodology.
 
500 lives lost may be a fair price to pay for a reduction in side effects, but I've no idea how we arrive at the moral equation for this. Would 10 lives lost be acceptable? 100,000? I've no idea...

There seems to be so much statistical gamesmanship around this subject that I doubt whether this expert guess is worth much.

However, if 500 lives would be saved by feeding ten million people statins, that means your chance of being 'saved' if you take a statin is 0.005%! Would you take a pill for the rest of your life that had such a small gain, knowing that it had potential side effects (very real in my case)?
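
Spelling out that arithmetic, taking the assumed ten million statin-takers and the article's 500-lives estimate at face value:

```python
lives_saved = 500        # the article's estimate
patients = 10_000_000    # assumed number of people taking statins

absolute_benefit = lives_saved / patients
nnt = patients / lives_saved    # number needed to treat to save one life

print(f"chance an individual is 'saved': {absolute_benefit:.3%}")  # 0.005%
print(f"number needed to treat:          {nnt:,.0f}")              # 20,000
```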

They tightened up the rules regarding drug trials about a decade ago, so that trials had to be pre-registered (yes, Arouet is right to push that issue for statistical research), and the trials done since then have shown smaller gains (is that even possible!), so they still use the previous trial results to justify taking the drug!

One of the underlying medical scandals is that some drugs are prescribed on the basis of ridiculously small gain (except to the companies that make them).

Statins are supposed to work by lowering cholesterol, so why is it that people with higher cholesterol (or LDL levels, if you prefer) live longer? Here is a long list of references to studies showing this effect (and generally ignored):

http://vernerwheelock.com/179-cholesterol-and-all-cause-mortality/

The Daily Mail Online offers up a lot of information that other papers avoid - it isn't big on holding to a political line. The Guardian is the exact opposite. So if you open up the Mail Online, you can read a lot of unfiltered news - for example, the DM was reporting the problems caused by the migrants in Germany long before the BBC dared broach that subject. It publishes both sides in the statin/cholesterol/saturated fat debate, a variety of reasoned arguments on the climate issue, and yes, if you want, it will tell you the latest exciting news about Kim Kardashian's bottom!

David
 
Fraud is a problem in science, as it is in all industries, but to focus on fraud as the greatest problem facing science is to ignore the much, much bigger problem that comes from methodological biases.
I basically agree with you, except that you still seem to think that most of the problems in science are statistical. As I acknowledged above, they are in the case of statins, but let's take that example and explore it a bit further.

If researchers ignore evidence that high cholesterol levels are associated with longer life, it isn't about the statistics as such; it is just very blinkered reasoning. Then there is the question of just how effective a drug has to be in order to be prescribed. An effect may be statistically valid, but too small to be of any real use - see my comments above. That isn't a question of statistics so much as common sense.

Many of the problems are quite awful, such as cancer research done on contaminated cell lines. The point about such contamination is that the contaminant cells may grow faster than the intended cells and overtake the culture completely - so a whole set of research papers ends up based on totally wrong material.

David
 