The Ganzfeld Methodology: A Joint Effort in Understanding

I pretty much agree with Kennedy: we need a properly powered confirmation study. What bothers me about our current set of researchers is that the studies are well designed but not run long enough to provide a solid statistical basis, even though the case for them is argued on statistical grounds. When pressed on this, they usually say that there isn't enough funding (though there is apparently enough funding to keep producing dozens of other underpowered pilot studies :/).
 
I pretty much agree with Kennedy: we need a properly powered confirmation study. What bothers me about our current set of researchers is that the studies are well designed but not run long enough to provide a solid statistical basis, even though the case for them is argued on statistical grounds. When pressed on this, they usually say that there isn't enough funding (though there is apparently enough funding to keep producing dozens of other underpowered pilot studies :/).
Underpowered pilot studies are much cheaper to run. It's easy to grab a few people, stick them in a room for a day or three, and do some science; it's much harder to keep cohesion in a much larger setting. The question, to which I believe you feel the answer is no, is this: does a compilation of underpowered studies make up for one well-powered study? Bear in mind that quality weights are applied in the newest meta-analyses (MAs).
 
The question, to which I believe you feel the answer is no, is this: does a compilation of underpowered studies make up for one well-powered study?
That depends; is there a precedent in other areas of academia for cobbling together under-powered studies and accepting them as equivalent to a single properly powered study?

If Schlitz et al. are taking the experimenter effect seriously and are testing for it, then any positive result from those tests invalidates the Ganzfeld studies. Since the Ganzfeld record is built on dozens of different experimenters, identifying an experimenter effect would show that those studies were not controlling for it.
 
That depends; is there a precedent in other areas of academia for cobbling together under-powered studies and accepting them as equivalent to a single properly powered study?

If Schlitz et al. are taking the experimenter effect seriously and are testing for it, then any positive result from those tests invalidates the Ganzfeld studies. Since the Ganzfeld record is built on dozens of different experimenters, identifying an experimenter effect would show that those studies were not controlling for it.
But that would also imply that you're attempting to control for psi, and if psi does indeed exist, it is nearly impossible to control for.

And yes, meta-analyses are used in multiple fields to evaluate evidence, especially when quality weights are applied. Also, most Ganzfeld studies are not as heterogeneous as they are made out to be. Look at Kathy Dalton's study; it is by far one of the largest, and also one of the most successful.

The statistical analysis is the same as in nearly all fields of study. It doesn't make sense that we readily accept conclusions drawn from this kind of evidence in medicine (one of the bigger fields using MAs) but not in parapsychology. Either the method is effective or it is not, ya know? If we criticize parapsychology for using MAs, we need to find the fault in the MA itself.
 
And yes, meta-analyses are used in multiple fields to evaluate evidence, especially when quality weights are applied.
I didn't ask whether there was a precedent for meta-analyses; I asked for a precedent for using a meta-analysis of individually under-powered studies (i.e., using them in this specific way). As I understand it, you are required to meet certain effect sizes to receive government approval for a drug, and I'm not sure you're allowed to submit a meta-analysis of trials that are individually insufficient. I'm aware that meta-analysis as a concept is mainstream.
 
Meta-analyses are most useful for individually under-powered studies, though. The goal is to obtain a more accurate effect-size estimate and higher statistical power to compensate for the small sample sizes. Why would one do this when the individual studies are properly powered? If every study were properly powered, it's conceivable that one could simply rely on the replication success rate to judge the merits of the effect.
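To make that concrete, here's a minimal sketch of inverse-variance (fixed-effect) pooling; the hit counts below are invented purely for illustration, and the point is only that the pooled standard error ends up smaller than any single study's:

```python
import numpy as np

# Hypothetical (hits, sessions) for five small studies; invented for illustration.
studies = [(11, 30), (8, 25), (14, 40), (10, 32), (12, 36)]

rates = np.array([h / n for h, n in studies])
ns = np.array([n for _, n in studies])
variances = rates * (1 - rates) / ns          # variance of each study's hit rate

weights = 1.0 / variances                     # inverse-variance (fixed-effect) weights
pooled = np.sum(weights * rates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print("per-study standard errors:", np.round(np.sqrt(variances), 3))
print(f"pooled hit rate: {pooled:.3f}, standard error: {pooled_se:.3f}")
# The pooled standard error is far smaller than any single study's, which is
# the statistical case for pooling; it does nothing about selective reporting
# or between-study quality differences.
```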
 
In a paper that Arouet linked to at the end of page one, Kennedy mentions a paper called “A Vast Graveyard of Undead Theories” (nice name) by Ferguson and Heene, which discusses a lot of the issues around meta-analysis.

“It is thus not surprising that we have seldom seen a metaanalysis resolve a controversial debate in a field. Typically, the antagonists simply decry the meta-analysis as fundamentally flawed or produce a competing meta-analysis of their own (see Twenge, Konrath, Foster, Campbell, & Bushman, 2008, vs. Trzesniewski, Donnellan, & Robins, 2008; or Gerber & Wheeler, 2009, vs. Baumeister, DeWall, & Vohs, 2009; or perhaps most famously Rind, Tromovitch, & Bauserman, 1998, vs. Dallam et al., 2001, vs. Lilienfeld, 2002). As Greenwald (2012) noted, empirical results such as meta-analyses seen as debate-ending by one side of a controversy are typically viewed as fundamentally flawed by the other. It is not our intent to take a position on any of the debates cited above— rather, we observe that the notion that meta-analyses are arbiters of data-driven debates does not appear to hold true.”

The whole paper is online here.

http://pps.sagepub.com/content/7/6/555.full.pdf+html

Scientific controversies (not just in parapsychology) are usually perpetuated by the assumption that contrary evidence must be incompetently derived. With that in mind, meta-analyses just add another layer of methodological choices to dispute or deride.

Certainly, greater statistical power for each study would go a long way toward solving the problem. Perhaps the scoring method needs to be replaced by something more sensitive? The four-choice direct-hit model has persisted almost by accident: it was simply the most common method pre-1982 (iirc 24 out of 42 studies, so just over half, used it).
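For a rough sense of what "properly powered" means under four-choice direct-hit scoring (25% chance hit rate), here's a sketch using the standard normal-approximation sample-size formula for a one-sample proportion test; the assumed true hit rates, alpha, and power target are my own choices, not figures from this thread:

```python
from scipy.stats import norm

def sessions_needed(p_true, p_chance=0.25, alpha=0.05, power=0.80):
    """Approximate sessions for a one-sided, one-sample test of a hit rate."""
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    numerator = (z_a * (p_chance * (1 - p_chance)) ** 0.5
                 + z_b * (p_true * (1 - p_true)) ** 0.5)
    return (numerator / (p_true - p_chance)) ** 2

# Assumed "true" hit rates; the closer to chance, the more sessions needed.
for p in (0.30, 0.33, 0.36):
    print(f"true hit rate {p:.2f}: ~{sessions_needed(p):.0f} sessions")
```

Effects only a few points above chance push the required session count into the hundreds, which is exactly why 30-to-50-session pilot studies stay under-powered.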
 
I pretty much agree with Kennedy: we need a properly powered confirmation study. What bothers me about our current set of researchers is that the studies are well designed but not run long enough to provide a solid statistical basis, even though the case for them is argued on statistical grounds. When pressed on this, they usually say that there isn't enough funding (though there is apparently enough funding to keep producing dozens of other underpowered pilot studies :/).
It's frustrating to hear that so many times, but I'm not surprised. How many people out in the wide world have even heard of something like the Ganzfeld, or know anything about psychical research at all? It's hard to gain support and raise money when no one knows what you're doing.

Does anyone here have a ballpark figure on how much a properly powered confirmation study might cost?
 
I think Dean Radin has a blog post where he works out that it's not worth it to win the Randi challenge, even if you do manage to pry the million dollars out of his cold, dead hands.
 
I think Dean Radin has a blog post where he works out that it's not worth it to win the Randi challenge, even if you do manage to pry the million dollars out of his cold, dead hands.
I'd agree, but that's not quite what I meant. I meant, how much would it cost to carry out an experiment beyond the pilot stage?
 
I think Dean Radin has a blog post where he works out that it's not worth it to win the Randi challenge, even if you do manage to pry the million dollars out of his cold, dead hands.

Randi's prize is a red herring; the confirmation study is for academia (who matters), not the JREF (who doesn't).

I'd agree, but that's not quite what I meant. I meant, how much would it cost to carry out an experiment beyond the pilot stage?

I emailed a parapsychologist about that once; the best answer I got was that it costs about a million dollars a year to run a parapsychology unit full time. Maybe I'll try to spreadsheet out the cost of just a confirmatory study some time.
 
Going by the average salary of an experimental psychologist (I had to guess which profession would be closest to this), the approximately 380 sessions needed for a confirmatory study, a single Ganzfeld session taking about thirty minutes to run (I modeled one session per hour, to allow time for notes and changing subjects), and a 40-hour work week: at that rate, it takes 9.5 weeks to conduct all of the sessions.

So a minimum of about $15k for the researcher alone, and that doesn't include facility rental or the equipment to actually run it. It also assumes a really motivated and efficient researcher (I'm guessing a real-world scenario isn't going to crank out that many sessions per week on an exact timetable). Add more if you have research assistants (you would probably need them to keep from burning out the one person). If it's important enough, I could ask an engineer contact what the rental would be on the Faraday cages.
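For what it's worth, the arithmetic above works out roughly like this; the annual salary figure is a placeholder assumption, so substitute whatever the profile linked below actually lists:

```python
# Back-of-the-envelope cost of researcher time for a confirmatory study.
sessions = 380            # sessions needed, per the estimate above
hours_per_session = 1.0   # ~30 min run plus notes/changeover, modeled as one hour
hours_per_week = 40

weeks = sessions * hours_per_session / hours_per_week   # 9.5 weeks
annual_salary = 80_000                                   # placeholder assumption
researcher_cost = annual_salary * weeks / 52             # roughly $14,600

print(f"{weeks:.1f} weeks of sessions, about ${researcher_cost:,.0f} in researcher time")
# Facility rental, equipment (e.g. Faraday cages), and assistants are extra.
```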

References

http://psychology.about.com/od/psychologycareerprofiles/p/experimental-psychologist.htm
 
I don't have access to the full paper, but here's one that seems to recommend removing underpowered studies from meta-analyses:

http://psycnet.apa.org/journals/met/3/1/23/

The authors propose that meta-analysts explicitly specify their research question and their standards for adequate studies to be included, using whatever standards they would have applied had they been asked to peer-review the individual studies for funding. Such a proposal corresponds to previous ones with regard to considerations of sampling, measurement, design, and analysis adequacy, but the authors of this study extend the proposal to the inclusion of the definition of adequate power. They show that if adequate power is defined and then used in reviewing studies for inclusion in a meta-analysis, excluding those that are by the meta-analysts' own criterion "underpowered," this strategy would go far toward removing bias due to the "file-drawer problem" and resulting misleading research conclusions.

Here's another paper looking directly at this issue: http://www.ncbi.nlm.nih.gov/pubmed/23544056

When at least two adequately powered studies are available in meta-analyses reported by Cochrane reviews, underpowered studies often contribute little information, and could be left out if a rapid review of the evidence is required. However, underpowered studies made up the entirety of the evidence in most Cochrane reviews.

If I'm reading it correctly, having two adequately powered studies in a meta-analysis trumped ALL of the underpowered studies.
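A toy illustration of why that can happen (the study sizes and common hit rate below are invented): under inverse-variance weighting, a study's weight scales roughly with its sample size, so a couple of large trials can dominate the pooled estimate.

```python
import numpy as np

p = 0.30                         # assumed common hit rate (invented)
small_ns = np.full(15, 30)       # fifteen under-powered studies of 30 sessions each
large_ns = np.array([400, 450])  # two adequately powered studies

def weight(n):
    return n / (p * (1 - p))     # inverse of var(p_hat) = p * (1 - p) / n

share = weight(large_ns).sum() / (weight(small_ns).sum() + weight(large_ns).sum())
print(f"the two large studies carry {share:.0%} of the total meta-analytic weight")
```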
 