Scientists unknowingly tweak experiments

Saiko

Member
http://phys.org/news/2015-03-scientists-unknowingly-tweak.html?utm_source=menu

A new study has found some scientists are unknowingly tweaking experiments and analysis methods to increase their chances of getting results that are easily published.


The study conducted by ANU scientists is the most comprehensive investigation into a type of publication bias called p-hacking.

P-hacking happens when researchers either consciously or unconsciously analyse their data multiple times or in multiple ways until they get a desired result. If p-hacking is common, the exaggerated results could lead to misleading conclusions, even when evidence comes from multiple studies.
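As a rough illustration (my own sketch, not from the study), the following Python simulation shows how analysing data in several different ways and keeping the best-looking result inflates the false-positive rate. For simplicity each "analysis" draws fresh null data, so the five tests are independent; re-analyses of a single real dataset are correlated and inflate somewhat less, but the direction is the same.

import numpy as np
from scipy import stats

# Simulate many experiments where there is NO real effect, with the
# analyst trying several analyses and reporting whichever one "works".
rng = np.random.default_rng(0)
n_experiments = 5000
n_per_group = 30
n_analyses = 5  # e.g. different outcomes, subgroups or covariate choices

false_positives = 0
for _ in range(n_experiments):
    p_values = []
    for _ in range(n_analyses):
        a = rng.normal(size=n_per_group)  # null data: groups are identical
        b = rng.normal(size=n_per_group)
        p_values.append(stats.ttest_ind(a, b).pvalue)
    if min(p_values) < 0.05:  # keep the best-looking analysis
        false_positives += 1

print(f"False-positive rate: {false_positives / n_experiments:.1%}")
# prints roughly 23%, versus the nominal 5% for a single pre-specified test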

"We found evidence that p-hacking is happening throughout the life sciences," said lead author Dr Megan Head from the ANU Research School of Biology.

The study used text mining to extract p-values - the probability of obtaining a result at least as extreme as the one observed if chance alone were at work - from more than 100,000 research papers published around the world, spanning many scientific disciplines, including medicine, biology and psychology.
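The paper's actual text-mining pipeline is considerably more involved, but the core extraction step can be sketched with a simple regular expression (the pattern and the example sentence below are my own illustration):

import re

# Match reported p-values such as "p = 0.048", "P < .001" or "p > 0.2"
P_VALUE = re.compile(r"p\s*([<>=≤])\s*(0?\.\d+)", re.IGNORECASE)

text = ("The treatment group improved significantly (P = 0.048), "
        "while the interaction was not significant (p > 0.2) and "
        "the main effect of age was strong (p < .001).")

for match in P_VALUE.finditer(text):
    operator, value = match.groups()
    print(operator, float(value))  # = 0.048, then > 0.2, then < 0.001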

"Many researchers are not aware that certain methods could make some results seem more important than they are. They are just genuinely excited about finding something new and interesting," Dr Head said.

"I think that pressure to publish is one factor driving this bias. As scientists we are judged by how many publications we have and the quality of the scientific journals they go in.

"Journals, especially the top journals, are more likely to publish experiments with new, interesting results, creating incentive to produce results on demand."

Dr Head said the study found a high number of p-values that were only just over the traditional threshold that most scientists call statistically significant.

"This suggests that some scientists adjust their experimental design, datasets or statistical methods until they get a result that crosses the significance threshold," she said.

"They might look at their results before an experiment is finished, or explore their data with lots of different statistical methods, without realising that this can lead to bias."

The concern with p-hacking is that it could get in the way of forming accurate scientific conclusions, even when scientists review the evidence by combining results from multiple studies.

For example, if some studies show a particular drug is effective in treating hypertension, but other studies find it is not effective, scientists would analyse all the data to reach an overall conclusion. But if enough results have been p-hacked, the drug would look more effective than it is.
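A crude simulation makes the concern concrete (my own sketch; it assumes p-hacking takes the form of reporting the best-looking of five measured outcomes):

import numpy as np

rng = np.random.default_rng(2)
n_studies, n_per_group, n_outcomes = 40, 50, 5

effects = []
for i in range(n_studies):
    if i < n_studies // 2:
        # honest study: a single pre-specified outcome
        candidates = [rng.normal(size=(2, n_per_group))]
    else:
        # hacked study: several outcomes measured, the "best" one reported
        candidates = [rng.normal(size=(2, n_per_group))
                      for _ in range(n_outcomes)]
    best = max(candidates, key=lambda d: d[0].mean() - d[1].mean())
    effects.append(best[0].mean() - best[1].mean())

# naive pooled estimate (equal weights, since all studies are the same size)
print(f"Pooled effect estimate: {np.mean(effects):.3f} (true effect: 0.0)")
# comes out clearly positive even though the drug does nothing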

"We looked at the likelihood of this bias occurring in our own specialty, evolutionary biology, and although p-hacking was happening it wasn't common enough to drastically alter general conclusions that could be made from the research," she said.

"But greater awareness of p-hacking and its dangers is important because the implications of p-hacking may be different depending on the question you are asking."

The research is published in PLOS Biology.
 
But how do we know if the study by ANU scientists is not itself subject to unconscious p-hacking? :eek:
That's a mind bender! ;)
Exactly the point! There is no standard way to know to what extent the data and conclusions in any study are influenced by the researchers' beliefs.
 
Exactly the point! There is no standard way to know to what extent the data and conclusions in any study are influenced by the researchers' beliefs.
Sure there is - use validated outcomes, pre-register the details of the study, perform confirmatory experiments, make the data freely available, use blinded analyses, etc.

Linda
 
Sure there is - use validated outcomes, pre-register the details of the study, perform confirmatory experiments, make the data freely available, use blinded analyses, etc.

Linda
Your response is more evidence that there isn't. It's quite in keeping with the opinions you express regularly.

And it's a fail. :) Validated outcomes, other experiments, etc. are just as susceptible to underlying bias. The belief that "there are ways to avoid or mitigate beliefs" is strong, prevalent and erroneous.
 
Your response is more evidence that there isn't. It's quite in keeping with the opinions you express regularly.

And it's a fail. :) Validated outcomes, other experiments, etc. are just as susceptible to underlying bias. The belief that "there are ways to avoid or mitigate beliefs" is strong, prevalent and erroneous.
I'm curious as to your position... Are you really saying that we shouldn't even try to limit the effects of bias?
 
Validated outcomes, other experiments, etc. are just as susceptible to underlying bias.

No they aren't. We can look at the specific issues mentioned in the article and see that the measures I mentioned would mitigate those problems.

"This suggests that some scientists adjust their experimental design, datasets or statistical methods until they get a result that crosses the significance threshold."

Pre-registering the specifics of a study means that there is little to no flexibility in terms of experimental design or statistical methods to "adjust". Having the data stored with a third-party and publicly available means that the trick of using partial data sets will be exposed. The use of validated outcomes means that there won't be an opportunity to substitute a different, non-valid outcome which happens to be "statistically significant" as a p-hack.
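As a hypothetical sketch of why pre-registration removes that flexibility: the analysis plan is fixed, and could be lodged with a registry, before any data exist, so swapping outcomes, tests or sample sizes afterwards is detectable (the field names below are invented for illustration):

import hashlib
import json

# A pre-registered protocol pins down the analysis before data collection.
protocol = {
    "primary_outcome": "all-cause mortality at 12 months",  # validated outcome
    "comparison": "drug vs placebo, intention-to-treat",
    "test": "two-sided log-rank test",
    "alpha": 0.05,
    "sample_size": 400,  # fixed in advance: no optional stopping
}

# A registry would timestamp a fingerprint like this at submission; the
# published analysis can later be checked against it.
fingerprint = hashlib.sha256(
    json.dumps(protocol, sort_keys=True).encode()
).hexdigest()
print(fingerprint[:16])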

Linda
 
I'm curious as to your position... Are you really saying that we shouldn't even try to limit the effects of bias?
That wasn't on my mind in any of my posts in this thread but hmmmmm . . .

I might advocate such, but it would mean a wholesale shift in the approach taken by researchers. That likely won't happen very soon. I see bias as part and parcel of what we are. I do not see that there is anything we perceive that is free of influence from underlying beliefs. And the ways people go about attempting to limit it are themselves influenced by beliefs.

That said, as things stand in the main, I do support attempts to limit the effects of bias. However, I do hold that we don't know the outcome of such attempts.
 
No they aren't.
Yes they are. That you believe they aren't won't change that. Much as you may cling to the idea, there is no way any person, or method developed by people, will eliminate or severely limit the influence of beliefs.

I won't address your specific example because no matter what you raise, I can point out many ways that underlying beliefs will still exert influence.
 
That wasn't on my mind in any of my posts in this thread but hmmmmm . . .

I might advocate such, but it would mean a wholesale shift in the approach taken by researchers. That likely won't happen very soon. I see bias as part and parcel of what we are. I do not see that there is anything we perceive that is free of influence from underlying beliefs. And the ways people go about attempting to limit it are themselves influenced by beliefs.

That said, as things stand in the main, I do support attempts to limit the effects of bias. However, I do hold that we don't know the outcome of such attempts.

There has been much research looking at the outcome of attempting to limit the effects of bias, so we do actually have some idea of the outcome of our efforts.

This chapter outlines some of the research on limiting the effects of bias (starting on page 8.27). One example of this kind of research is Pildal 2007 (from this chapter), which shows that limiting the effects of allocation bias leads to an 18% difference in estimated effect size.

http://hiv.cochrane.org/sites/hiv.cochrane.org/files/uploads/Ch08_Bias.pdf
http://www.ncbi.nlm.nih.gov/pubmed/17517809

Linda
 
I won't address your specific example because no matter what you raise, I can point out many ways that underlying beliefs will still exert influence.

That's the sort of thing which would be useful and interesting. Can you give me a specific example of a way in which an underlying belief could overcome the measures I specified? For example, if you specified that you are measuring mortality and you are blind as to which person is taking a drug and which is not, how does your belief that the drug is helpful change who dies and who does not die in a way that gets you a "positive" result?

Linda
 
There has been much research looking at the outcome of attempting to limit the effects of bias, so we do actually have some idea of the outcome of our efforts.
:) You're really not getting the point here, are you?
 
For example, if you specified that you are measuring mortality and you are blind as to which person is taking a drug and which is not, how does your belief that the drug is helpful change who dies and who does not die in a way that gets you a "positive" result?

I find it more interesting to point out the clear influences of beliefs in your own post.
- You believe it is possible to be "blind"; in other words, if a person doesn't know via intellectual means, then they don't know.
- You assume that "underlying beliefs" refers to opinions.
- You assume that it is possible to objectively know all or most of the "underlying beliefs" that may come into play.

Beyond that you have shifted the point - the point is not who dies or doesn't. It's all the other why/how/what data gleaned from that, and the conclusions derived from the data.
 
:) You're really not getting the point here, are you?
I'm addressing what was written in the article you quoted. What do you think the point is?

Did you read the research article from the OP? They say much the same thing. Did they also miss the point of the research they performed?

Linda
 
I find it more interesting to point out the clear influences of beliefs in your own post.

Excellent. That's helpful.

- You believe it is possible to be "blind"; in other words, if a person doesn't know via intellectual means, then they don't know.

This isn't a "belief" (or hope or wish or guess). It's what we find when we perform experiments. We can give people full knowledge. And we can take measures which seem to limit knowledge. And we can see what effect that has on the result. The paper I linked earlier was an example of this kind of experiment.

- You assume that "underlying beliefs" refers to opinions.

Not sure what you are getting at there. Realistically, "underlying beliefs" is just a catch-all for anything which gives someone an interest in the results - hopes and desires, greed and avarice, fear and loathing.

- You assume that it is possible to objectively know all or most of the "underlying beliefs" that may come into play.

Not in the least. Pretty much the opposite. Because there is no hope of objectively knowing all or most of the "underlying beliefs" (or what to do about it if you did), the more useful approach is to limit the ways in which any beliefs can affect the results. And direct comparisons of the results in the presence and absence of those limits tells us whether they have an effect.

Beyond that you have shifted the point - the point is not who dies or doesn't. It's all the other why/how/what data gleaned from that, and the conclusions derived from the data.

Right, conclusions are drawn from who dies or doesn't. It's a given that the conclusions drawn will depend upon your beliefs, so that is beside the point. The point is to perform research where the conclusions are constrained - the design, implementation and analysis only allow for a single conclusion. And that the same conclusion would be drawn by your enemies as by your supporters.

Linda
 