Why is paranormal research ignored?

I agree, but I also suspect there's a chilling effect: the fear of being seen as suspect by their peers if they indulge in that "woo stuff," or, worse yet, by their superiors and funders.

Pat

I agree. Peers and funders would have more of an effect.

Linda
 
I find that very hard to believe. The last time I looked, scientists were human beings... nobody wants to have their professional credibility questioned and slanderous comments made about them to the point where they can no longer get a job.

I agree. I'm just pointing out that with respect to scientists getting jobs and professional credibility, science as entertainment (e.g. TED) and non-academic information sources (e.g. Wikipedia) are basically ignored in favour of the scientific arena. The stuff which goes on at Wikipedia or TED is driven by the scientific arena, not the other way around. If you want to get rid of Wikipedia and TED controversies, then get other scientists interested by presenting strong evidence, and the rest will eventually melt away.

I get the impression that parapsychology research has been gaining a bit of traction over the last decade or so in terms of credibility. I don't know if my impression is false, but if it's not, I would propose that this is because some of the methods and implementation have improved slightly over that time period. There is more standardization of the ganzfeld and the mediumship studies (although still much opportunity in both for flexibility in outcome reporting*), for example. The prospective cohort study is described as the "gold standard" for NDE research, and it seems to be developing into the norm for people undertaking research in that area, as another example.

Linda

*Flexibility in outcome reporting means that any one of several different measures can be reported as the main outcome (including multiple ways of performing analyses). For example, in the ganzfeld studies, matches can be made on the basis of how many elements from the mentation record correspond to the targets (it's a "hit" if the target has the most corresponding elements) or on the basis of overall impression, and these matches can be made by independent judges, by the recipient, or by the experimenters. Any one of those could be chosen to represent the "hit rate".
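To make that footnote concrete, here's a quick toy simulation (it has nothing to do with any actual ganzfeld database; the three "judging methods" are purely hypothetical and treated as independent, which is a simplification) showing how reporting whichever of several outcome measures looks best inflates the false-positive rate even when only chance is at work:

```python
import random
from math import comb

def binom_p_value(hits, n, p=0.25):
    # One-sided exact binomial p-value: chance of getting >= hits successes.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(hits, n + 1))

def simulate_null_study(n_trials=40, n_outcomes=3, p_chance=0.25):
    # Score one chance-only study under several hypothetical judging methods.
    # Treating the methods as independent overstates things a little; real
    # judging methods are correlated, which shrinks but doesn't remove the bias.
    p_values = []
    for _ in range(n_outcomes):
        hits = sum(random.random() < p_chance for _ in range(n_trials))
        p_values.append(binom_p_value(hits, n_trials, p_chance))
    return p_values

random.seed(1)
n_sims = 5000
fixed = sum(simulate_null_study()[0] < 0.05 for _ in range(n_sims)) / n_sims
flexible = sum(min(simulate_null_study()) < 0.05 for _ in range(n_sims)) / n_sims
print(f"false positives, pre-specified outcome:  {fixed:.3f}")
print(f"false positives, best of three outcomes: {flexible:.3f}")
```

The first rate stays at (or below) the nominal 5%, while the second climbs to roughly two or three times that, which is the whole worry about flexible outcome reporting.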
 
I think it's important to highlight that the deviation from chance does exist; it's just not very large.

I think the evolution of the Ganzfeld is the EEG studies. Both are slow-burning and costly experiments to set up, but they seem to be producing consistent effects.

What do you mean by the "EEG studies"?

Linda
 
If you want to get rid of Wikipedia and TED controversies

Well, no. Wikipedia controversies happen even in topics where the facts are well known; there are political activists and intelligence agencies tasked with disrupting it despite that.

get other scientists interested by presenting strong evidence, and the rest will eventually melt away.

Yup. I make a point of highlighting this when it's relevant.

I get the impression that parapsychology research has been gaining a bit of traction over the last decade or so in terms of credibility. I don't know if my impression is false, but if it's not, I would propose that this is because some of the methods and implementation have improved slightly over that time period.

I don't think that impression is wrong. We went from having nothing but ad-hoc investigators and loose-leaf debunkers to having the SPR fire up a journal and start setting standards, then we started getting branches off like the ASPR (though I think they are mothballed), and there are probably six or more journals for parapsychologists to report research findings in now. A tiny handful of experiments are even beginning to produce data that could help in identifying mechanisms, which is pretty much the checkmate card for vindicating the entire field.

Those who want to fund it (such as through their wills) have started doing so through either alternative medicine groups, or by directly giving the money to an agency such as the SPR. Playfair wrote an article published by about.com discussing how, for a good number of years, the bequests to universities which were meant to pay for parapsychology experiments were being embezzled (technically illegal, but who's pressing charges?).

Flexibility in outcome reporting means that any one of several different measures can be reported as the main outcome (including multiple ways of performing analyses). For example, in the ganzfeld studies, matches can be made on the basis of how many elements from the mentation record correspond to the targets (it's a "hit" if the target has the most corresponding elements) or on the basis of overall impression, and these matches can be made by independent judges, by the recipient, or by the experimenters. Any one of those could be chosen to represent the "hit rate".

This is just going to happen, as I understand academia. As long as meta-analyses take into account the scoring system as part of their tests for heterogeneity, I don't see the problem.

I think the favored means for the Ganzfeld now is just whether the receiver picks the correct target, because it's the least subjective. I have no idea if my thought is correct, because I don't know of any recent studies on this which were done after the last meta-analysis correction.
 
Yeah, I'm thinking that the ganzfeld studies would be the easiest to focus on to produce decent evidence. Use selected subjects, re-work the target presentation and analysis to address the statistical/stacking issues, standardize and register the outcome measure, and use well-powered sample sizes (pre-registered). If you can reliably produce an effect under those conditions, and this can be reproduced (i.e. non-parapsychologists are able to get similar results), the idea will gain validity and will spread.

I'm not sure about the presentiment studies, although I agree that there is potential. There is substantially more selective/flexible outcome reporting among this group of studies, so there is a much greater risk that the purported 'effect' simply reflects a statistical bias. Plus, a way to deal with expectation bias hasn't been identified. Again, it wouldn't hurt to clean up the flexibility in design and implementation, in order to see if the results are reproducible under stronger conditions. I'm just more doubtful that anything will remain.

Linda
 
What do you mean by the "EEG studies"?

Moulton and Kosslyn (2008) used fMRI on 16 subjects trying different psi tasks; their only hit coincided with anomalous activity in the resulting fMRI scan, which was written off because it was only a one-in-sixteen success. However, other similar studies which used fMRI and EEG sensors also collected a number of hits that came with effects equivalent to what Moulton and Kosslyn saw on their one success.

SPR has a link to Charman's discussions of these other events (published in various places, but the PDFs are available at): http://www.spr.ac.uk/main/news/esp-and-neural-activity-eeg-and-mri-studies

Krippner's "Mysterious Minds" also covers the Moulton and Kosslyn event, citing Braud and Braud (1973), Radin (2008), Green and Thorpe (1993), Neppe (1982b), Logethetis (2003), Bierman (2000), Radin (2006), Bierman and Scholte (2002), Richards Kozak et all (2005), Standish et all (2003), and Achterberg et all (2005) as related cases.

Persinger has also performed some successful telepathy experiments using brain scans in recent years.

I am in the (slow) process of tracking down these papers to read and see how closely related they are.
 
I don't think that impression is wrong. We went from having nothing but ad-hoc investigators and loose-leaf debunkers to having the SPR fire up a journal and start setting standards, then we started getting branches off like the ASPR (though I think they are mothballed), and there are probably six or more journals for parapsychologists to report research findings in now. A tiny handful of experiments are even beginning to produce data that could help in identifying mechanisms, which is pretty much the checkmate card for vindicating the entire field.

Those who want to fund it (such as through their wills) have started doing so through either alternative medicine groups, or by directly giving the money to an agency such as the SPR. Playfair wrote an article published by about.com discussing how, for a good number of years, the bequests to universities which were meant to pay for parapsychology experiments were being embezzled (technically illegal, but who's pressing charges?).

Thanks for this. I was aware that sometimes the funds get hijacked, and I have spoken out against this.

This is just going to happen, as I understand academia. As long as meta-analyses take into account the scoring system as part of their tests for heterogeneity, I don't see the problem.

This isn't really good enough. Tests for heterogeneity are too insensitive to address this. A better method is to address it up-front (pick one outcome and pre-register the study).

I think the favored means for the Ganzfeld now is just whether the receiver picks the correct target, because it's the least subjective. I have no idea if my thought is correct, because I don't know of any recent studies on this which were done after the last meta-analysis correction.

I agree, although I think 'overall impression' from the receiver has the most validity, and it is that (rather than "least subjective") which makes it ideal. Also, work by Wackermann and others suggests that verbalizing the ganzfeld mentation may be unrelated to the target choice (and may actually interfere with it), which gives another reason to do away with bringing in elements from the recordings of ganzfeld sessions.

Linda
 
Moulton and Kosslyn (2008) used fMRI on 16 subjects trying different psi tasks; their only hit coincided with anomalous activity in the resulting fMRI scan, which was written off because it was only a one-in-sixteen success. However, other similar studies which used fMRI and EEG sensors also collected a number of hits that came with effects equivalent to what Moulton and Kosslyn saw on their one success.

SPR has a link to Charman's discussions of these other events (published in various places, but the PDFs are available at): http://www.spr.ac.uk/main/news/esp-and-neural-activity-eeg-and-mri-studies

Krippner's "Mysterious Minds" also covers the Moulton and Kosslyn event, citing Braud and Braud (1973), Radin (2008), Green and Thorpe (1993), Neppe (1982b), Logethetis (2003), Bierman (2000), Radin (2006), Bierman and Scholte (2002), Richards Kozak et all (2005), Standish et all (2003), and Achterberg et all (2005) as related cases.

Persinger has also performed some successful telepathy experiments using brain scans in recent years.

I am in the (slow) process of tracking down these papers to read and see how closely related they are.

Thank you.

Linda
 
This isn't really good enough. Tests for heterogeneity are too insensitive to address this. A better method is to address it up-front (pick one outcome and pre-register the study).

By testing for heterogeneity, I meant picking which of the outcomes was going to be meta-analyzed and including only studies which use the same mechanism. Mixing up what counts as a hit sounds nebulous, and I know almost nothing about experiment design.

I agree, although I think 'overall impression' from the receiver has the most validity, and it is that (rather than "least subjective") which makes it ideal. Also, work by Wackermann and others suggests that verbalizing the ganzfeld mentation may be unrelated to the target choice (and may actually interfere with it), which gives another reason to do away with bringing in elements from the recordings of ganzfeld sessions.

I think the hope with transmitting the mentation is that the sender can "adjust" how they are transmitting the message based on this result. I get how that could work, but then using people off the street who wouldn't know what they were supposed to adjust defeats the point. Maybe if nothing else, it makes them feel like they are contributing so they stay enthused? Boredom and disbelief "appear" to sour the results across the papers I've seen.
 
In my opinion, the sad situation that parapsychology is now in has come about because parapsychologists have almost totally ignored spontaneous phenomena and poltergeists. There the effects have often been anything but weak. Parapsychology has lost the interest of the general public because the usually minimal effects in the laboratory are too far from everyday life. The scientific world ought to be interested in the laboratory results, but that has not happened.

You are right. Remarkably enough, on the 26th of October I listened to a lecture by a well-known Dutch philosopher of science who is also a researcher of the paranormal. His main message: it is senseless to go on as things are now. One should do more to study the real experiences, and stay away from the perpetual "it is! - no it is not!" discussions that have been spoiling the debates between researchers of the paranormal on the one hand and the skeptics on the other. Those debates have led to nothing.
 
The simple answer is that it is too weak to really capture the interest of most researchers. I think parapsychologists could improve the situation (in light of limited resources) by focussing on strengthening the research in one or a few areas. With interest comes more resources.

Linda
Do you really mean that the Ganzfeld experiment, which averages over 30% hits when it should average 25%, is too weak to be of interest?

The real problem is perfectly obvious. If a researcher decides to run a ψ experiment, she knows there are (roughly speaking) two possible outcomes. If she gets a null result, she can publish it, but it won't really do her much good. If, on the other hand, she gets a positive result, she can publish it at her peril. Everyone knows the way that a man like Rupert Sheldrake is scorned for his work. Very few people want to take their career down that road.

Many experiments would be worth exploring even from a purely materialist standpoint, because if, for instance, Dean Radin's presentiment effect is the result of some form of error, that error probably contaminates other, more conventional work that uses a similar setup. The problem is: what if Dean Radin is right? Hardly anyone wants to confirm his result!

David
 
JCearley,

Picking one outcome to meta-analyze based on which studies report that outcome doesn't work to avoid statistical biases. These studies will be a self-selected sample on the basis of that outcome, rather than any sort of representative sample. Instead, if you don't give the researchers any choice in what outcome to report, you get all-comers (not a selected sample of those where that happened to be the outcome which returned a 'statistically significant' result).
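If it helps, here is a toy sketch of what I mean (all the numbers are made up purely for illustration: 200 chance-only studies, each with three candidate outcome measures). Letting each study report its best-looking outcome biases the pooled hit rate upward even though nothing but chance is present, whereas a single pre-registered outcome does not:

```python
import random

def null_study_hit_rates(n_trials=40, n_outcomes=3, p_chance=0.25):
    # Hit rate under each of several candidate outcome measures, chance-only data.
    return [sum(random.random() < p_chance for _ in range(n_trials)) / n_trials
            for _ in range(n_outcomes)]

random.seed(2)
studies = [null_study_hit_rates() for _ in range(200)]

# Self-selected reporting: each study reports whichever outcome looked best.
best_of = [max(rates) for rates in studies]
# All-comers with one pre-registered outcome: every study reports outcome #1.
prespecified = [rates[0] for rates in studies]

print(f"pooled hit rate, best-of-three reporting: {sum(best_of) / len(best_of):.3f}")
print(f"pooled hit rate, pre-registered outcome:  {sum(prespecified) / len(prespecified):.3f}")
```

The first pooled rate comes out noticeably above the 25% chance level and the second sits close to it, which is why a heterogeneity test applied after the fact doesn't rescue you.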

Linda
 
Do you really mean that the Ganzfeld experiment, which averages over 30% hits when it should average 25%, is too weak to be of interest?

As I mused, if it takes thirty minutes of mentation to score only 5% better than if I threw a dart at the paper, I would just throw the dart at the paper.

If, on the other hand, one focused on the test groups which scored significantly higher than the baseline Ganzfelds, I would become interested again. Spending 30 minutes of mentation to get 10-20% better odds on something begins to sound worth it, and there was one report I recall that actually obtained a 40% success rate on a large sample by selecting creative subjects.
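For what it's worth, here's a rough back-of-the-envelope sketch of why the size of the effect matters so much in practice (a standard one-proportion sample-size approximation; the alpha, power, and hit rates are just assumptions I picked, not figures from any ganzfeld database):

```python
from math import sqrt, ceil

def trials_needed(p1, p0=0.25, z_alpha=1.645, z_beta=0.84):
    # Approximate trials needed to detect hit rate p1 against chance p0
    # (one-sided alpha = 0.05, 80% power, normal approximation).
    numerator = z_alpha * sqrt(p0 * (1 - p0)) + z_beta * sqrt(p1 * (1 - p1))
    return ceil((numerator / (p1 - p0)) ** 2)

for hit_rate in (0.30, 0.35, 0.40):
    print(f"hit rate {hit_rate:.0%}: roughly {trials_needed(hit_rate)} ganzfeld sessions")
```

With these assumptions, a 30% hit rate takes several hundred half-hour sessions to establish, while a 40% rate from selected subjects takes well under a hundred, which is exactly why the selected-subject reports sound so much more worthwhile.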

If, on the other hand, she gets a positive result, she can publish it at her peril. Everyone knows the way that a man like Rupert Sheldrake is scorned for his work.

Prisoner's dilemma. If one academic talks, they can be ousted; if twenty academics talk, then someone realizes they can't really throw out half the specialists and has to begrudgingly tolerate it. Unfortunately this will only happen if the "prisoners" decide they value truth more than vanity.
 
Do you really mean that the Ganzfeld experiment, which averages over 30% hits when it should average 25%, is too weak to be of interest?

Yes, because as it stands, the risk that bias accounts for those results is too large to have much (if any) confidence in an effect. Much, much larger effect sizes than the one you mention have been discovered to be due to bias, once the research has been tightened up, so scientists tend to wait to see what experiments with good validity and reliability show before they get interested.

Please note that by "bias" I mean "methodological, implementation, and analytic issues which may produce false results", and not "prejudice". I wrote a detailed post on the other forum outlining which of those issues were relevant to the ganzfeld experiments. I saved a copy if you are interested in those details.

Linda
 
I'm starting to think we need a The Ganzfelds: Redux thread to prevent this one from getting crushed under discussion of that experiment.
 
I'm starting to think we need a The Ganzfelds: Redux thread to prevent this one from getting crushed under discussion of that experiment.

I dunno. If you take the position that it's already perfect and that the only thing holding parapsychology back is a psi-taboo, then a discussion about how to strengthen the methods/implementation/analysis can just be ignored, can't it?

Linda
 
I'm starting to think we need a The Ganzfelds: Redux thread to prevent this one from getting crushed under discussion of that experiment.

This is one of the things I really wanted to focus on with the research methodology thread: to really look closely at the methodology of all these experiments to see whether they should be considered at high risk of bias or not. If it turns out that the risk of bias is high, then wouldn't scientists be justified in not relying all that much on what's been produced to date? I don't think we can answer the question in the OP without taking a very close look at these methodological issues.
 
I don't think we can answer the question in the OP without taking a very close look at these methodological issues.

But the majority of people who ignore this work don't take a close look at the methodological issues, so the question in the OP is quite unrelated to the ganzfeld specifically. Why is there this taboo against psi research?
 