More Peer-Review Failing

Iyace

Member
http://www.scilogs.com/next_regeneration/to-err-is-human-to-study-errors-is-science/

Why didn't the peer reviewers who evaluated Abramson's article catch the error prior to its publication? We can only speculate as to why such a major error was not identified by the peer reviewers. One has to bear in mind that "peer review" for academic research journals is just that - a review. In most cases, peer reviewers do not have access to the original data and cannot check the veracity or replicability of analyses and experiments. For most journals, peer review is conducted on a voluntary (unpaid) basis by two to four expert reviewers who routinely spend multiple hours analyzing the appropriateness of the experimental design, methods, presentation of results and conclusions of a submitted manuscript. The reviewers operate under the assumption that the authors of the manuscript are professional and honest in terms of how they present the data and describe their scientific methodology.
 
Traditional peer review has had its day, in my view. In the Internet age, there's the possibility of open-access, free publishing with review by any interested party. Why have a system where editors and peer reviewers can act as (possibly biased) gatekeepers who determine whether or not something gets published? Why not publish everything (as long as it includes data and detailed methodology) and let open analysis and critique sort the wheat from the chaff?

There'll always have to be some kind of filtering process, but until recently it's been at the front end, creating the opportunity for bias to exclude promising new lines of research, or to include unpromising existing ones. The positive aspect is that there's less in the journals for academics to wade through, I suppose. One can at least imagine a tiered system where papers make it through to higher tiers if deemed interesting/promising/rigorous enough. At the topmost tier, papers would be considered the best work.

Anonymity is another key issue: if nobody ever knew who the author was until a certain tier had been reached, there would be a better chance of review not being based on existing bias for or against that author.
 
Why have a system where editors and peer reviewers can act as (possibly biased) gatekeepers who determine whether or not something gets published?

Because the Singularity has not happened yet, and any human-based system is going to be subject to human faults, conscious or unconscious.

The positive aspect is that there's less in the journals for academics to wade through, I suppose.

I think this is the reason peer review exists to begin with: so academics don't have to wade through thousands of pages of junk studies.

One can at least imagine a tiered system where papers make it through to higher tiers if deemed interesting/promising/rigorous enough.

This sounds like replacing the peer review process with making everything go through /r/science instead. Group rating systems are no more effective at rooting out crap than a peer review system; they're probably worse, because you could organize a skeptic or proponent bloc vote to spam articles. At least with peer review, the peers have degrees and some relevance to the field at hand. Remember that one of said science subreddits formally banned all discussion of climate change because they considered it a closed issue and treated anyone who disagreed as automatically a troll.

Forcing people to meet some kind of academic criteria (to prove they have at least worked in some relevant field of study) is not a way to duct-tape this idea back to health. The more strainers you put on your /r/peer-review project, the more it becomes the same as what exists now.

Does someone have to be qualified to post their results? If someone doesn't have to be qualified, most of the control systems you could put in place are going to be subverted.

Base it on a rating system? Those are horribly flawed and can be easily manipulated by whoever has the bigger PR budget.

Base it on replications? Now that is closer to a workable idea: something with tagged replications could have a bigger tag-cloud presence, and that would push those studies up even further. That spawns another problematic question, though: how do we make sure people don't just spam fake replications?

At the topmost tier, papers would be considered the best work.

This sounds like an analogue of journal shopping. Academics already try the "most prestigious" journal first and step downward until somebody accepts their paper. In this proposed system, you're just replacing shopping for a journal with rallying people to "upvote" your paper. "That's not a real journal" would be replaced with "that isn't a gold-ranked paper."

Anonymity is another key issue: if nobody ever knew who the author was until a certain tier had been reached, there would be a better chance of review not being based on existing bias for or against that author.

Is there compelling evidence that peer reviewers are actually rejecting articles based specifically on the submitter's name? I am familiar with discussion of peer reviewers acting on bias via the contents of the paper, but not of reviewers rejecting someone based purely on their name.
 
Is there compelling evidence that peer reviewers are actually rejecting articles based specifically on the submitter's name? I am familiar with discussion of peer reviewers acting on bias via the contents of the paper, but not of reviewers rejecting someone based purely on their name.

If you are known to be critical of orthodoxy, the moment your paper lands on an editor's desk you are going to be in for a hard time. It's likely to be given special scrutiny and farmed out for review to people who will do their damnedest to obstruct or delay its publication. If, on the other hand, you're in with the orthodoxy, your paper stands a fair chance of receiving minimal due diligence. The Climategate emails illustrate this with too many examples to quote.

I'm unclear: Do you think there's nothing better than current peer-review practices?
 