Parapsychology: Science or Pseudoscience?

I'm raising issues put forth by parapsychologist J.E. Kennedy (an expert in the field) and published in the Journal of Parapsychology. Susan Blackmore, a former parapsychologist and also an expert in the field, has raised some of the same issues to argue that parapsychology is not a science.

By the way, I believe in the reality of psi phenomena. But I'm questioning the legitimacy of parapsychology as a science.
As far as I know, Kennedy is not raising questions about the legitimacy of the existing studies; he is only bringing up the well-known point that psi is not particularly easy to tame. He has merely restated a known feature of psychic ability.

How you have somehow spun this into a critique of parapsychology as a whole, I have no idea.

Blackmore is not a legitimate parapsychologist. As a student she did a few poorly conducted studies over about two years; she got positive results from them, but this was only discovered many years later. Like Wiseman, she screwed up the statistics. She's basically a CSI media attention seeker who has nothing of interest to say.
 
And in the Ganzfeld for example, this is done. So again, how is it a criticism?

It would be nice to see something quantitative from Blackmore rather than some vague criticism.
Yeah, Blackmore is rather behind the times. Mere double-blind experimental design doesn't control for these effects.

http://hiv.cochrane.org/sites/hiv.cochrane.org/files/uploads/Ch08_Bias.pdf
http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124
http://pss.sagepub.com/content/22/11/1359.short?rss=1&ssource=mfr
http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf
http://download.springer.com/static...30327af5766049a77497d629be77c54aafc9893d71a58

Linda
 
Regarding the last paper linked, publication bias is not even a possible explanation of the Ganzfeld database, let alone a plausible one.
I'm not sure what you mean. Not all of the ganzfeld trials performed get reported, get reported in their original form, or get included in the ganzfeld meta-analyses, and this has introduced a bias into the database (this is old news).
http://www.skeptiko-forum.com/threa...-drawer-effect-comments-here.2304/#post-69203).

I don't know what that means with respect to "explaining the Ganzfeld database". That the results can be discounted out of hand as merely a measure of this particular bias? I doubt that.

Linda
 
I'm not sure what you mean. Not all of the ganzfeld trials performed get reported, get reported in their original form, or get included in the ganzfeld meta-analyses, and this has introduced a bias into the database (this is old news).
http://www.skeptiko-forum.com/threa...-drawer-effect-comments-here.2304/#post-69203).

I don't know what that means with respect to "explaining the Ganzfeld database". That the results can be discounted out of hand as merely a measure of this particular bias? I doubt that.

Linda

Some small number of unpublished studies would not explain the effect seen in the database; an impossibly large number of unpublished reports would be needed. This is not a plausible explanation.
 
Some small number of unpublished studies would not explain the effect seen in the database; an impossibly large number of unpublished reports would be needed. This is not a plausible explanation.
"Impossible" seems a bit much (theoretically, you need at most the same number of unpublished reports as published ones, or fewer).

But like I said, I doubt that this bias is the only effect in the ganzfeld database.

Linda
 
"Impossible" seems a bit much (theoretically, you need at most the same number of unpublished reports as published ones, or fewer).

But like I said, I doubt that this bias is the only effect in the ganzfeld database.

Linda

The number was 15:1, not 1:1. The convention is that anything greater than 5:1 means publication bias is not a reasonable conclusion. Even Ray Hyman said the file-drawer effect cannot explain the effect seen.

It is impossible because, if you look at how long ganzfeld experiments take, there literally wasn't enough time for that many unpublished studies to have been run. That's why it is impossible. And that's not even mentioning the fact that parapsychology doesn't have anywhere close to the budget for this.
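For reference, the file-drawer arithmetic behind ratios like 15:1 is usually Rosenthal's fail-safe N: how many unreported null-result studies would be needed to drag a significant combined result below the criterion. A minimal stdlib-only Python sketch, assuming Stouffer-style z combination; the study count and z-scores below are made-up illustrations, not the actual ganzfeld figures:

```python
import math
from statistics import NormalDist

def failsafe_n(z_values, alpha=0.05):
    """Rosenthal's fail-safe N: the number of unpublished null (z = 0)
    studies needed to pull the Stouffer combined z below the one-tailed
    criterion for alpha."""
    k = len(z_values)
    z_crit = NormalDist().inv_cdf(1 - alpha)   # ~1.645 one-tailed
    z_sum = sum(z_values)
    n = (z_sum ** 2) / (z_crit ** 2) - k
    return max(0, math.floor(n))

# Illustrative numbers only (NOT the actual ganzfeld database):
k = 28
zs = [1.0] * k                 # hypothetical mean z of 1.0 per study
n_needed = failsafe_n(zs)      # unpublished nulls needed to kill the result
tolerance = 5 * k + 10         # Rosenthal's rule-of-thumb threshold
print(n_needed, n_needed / k, n_needed > tolerance)
```

Rosenthal's own rule of thumb treats the file drawer as an implausible explanation once the required N exceeds 5k + 10; the 5:1 and 15:1 ratios above are looser versions of the same idea.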
 
Regarding the last paper linked, publication bias is not even a possible explanation of the Ganzfeld database, let alone a plausible one.

Hopefully, you are saying that only because you have not read the paper. If you read Francis's description of what he considers "publication bias," then I think you would have to concede that publication bias is a plausible explanation for the non-null findings in ganzfeld experiments. Indeed, I think it is the most plausible explanation.
 
The number was 15:1, not 1:1. The convention is that anything greater than 5:1 means publication bias is not a reasonable conclusion. Even Ray Hyman said the file-drawer effect cannot explain the effect seen.

It is impossible because, if you look at how long ganzfeld experiments take, there literally wasn't enough time for that many unpublished studies to have been run. That's why it is impossible. And that's not even mentioning the fact that parapsychology doesn't have anywhere close to the budget for this.
What number was 15:1? You're not talking about the "fail-safe N," are you? It assumes that the studies which aren't reported are unselected, which turns out to be wrong. Once you take selection models into account, "apparently significant, but actually spurious, results can arise from publication bias, with only a modest number of unpublished studies".

http://www3.nd.edu/~ghaeffel/Theories_Ferguson.pdf

Linda
 
Hopefully, you are saying that only because you have not read the paper. If you read Francis's description of what he considers "publication bias," then I think you would have to concede that publication bias is a plausible explanation for the non-null findings in ganzfeld experiments. Indeed, I think it is the most plausible explanation.

I have a headache, so I may be missing something obvious (apart from the fact that I shouldn't be looking at a monitor if I have a headache), but Francis says, "Evidence for publication bias in a set of experiments can be found when the observed number of rejections of the null hypothesis exceeds the expected number of rejections," and he uses Bem's precognitive habituation work as an example: 9 replications out of 10, when the effect is described as small, suggests that not all the data are being presented.

But the Ganzfeld has nothing like that kind of hit rate. Depending on where you look, the percentage of ganzfeld experiments that reject the null (at p < 0.05) is between 22% and 27%.

Just to clarify, 22% comes from my database (38 replications out of 169), while 27% comes from Storm, Tressoldi et al.'s database (30 replications out of 109 experiments).
 