Discussion between myself and Scott Alexander

Johann

Three days ago I wrote a long commentary on this very interesting blog post, written by Scott Alexander, pertaining to Bem's recent meta-analysis. Scott's opinion typifies the reaction I have seen, in the social science fields, to Bem's results—especially to the fact that his original experiments have now been substantially replicated. The discerning reader will notice that back when Ritchie, Wiseman & French (2012) conducted their small null meta-analysis, the talk was all about "the curse of the failed replications". Steven Novella wrote that one important solution to the "researcher degrees of freedom" problem was replication. Why? I'm guessing it's because replication seemed, at the time, to be the doom of Bem.

Now, however, Bem's positive results have cast that tenet of good science out the window—or, rather, the skeptics seem to have done that. Scott uses the principle of conservation of expected evidence to argue that, since science is so broken now, Bem could just as easily produce another meta-analysis with this degree of evidence, given enough time. Notice the extraordinary difference between this attitude and Novella's: we can simply assume Bem's research will be replicated! It is no longer even surprising that psi experiments replicate, because, according to these skeptics, parapsychology is the control group of science; its robust results serve, overall, only to prove the ubiquity of methodological and statistical flaws in science.

Anyway, I made a number of observations on his piece and he graciously responded, so I thought it might interest members of this board to read the exchange. There's much to commend in his approach; in particular, I applaud his honesty in attributing to parapsychological research a high standard of quality, lamenting the fact that psi research seems to be ahead of his own social field in terms of its safeguards against biases and its policies for the publication of null results. His drive to improve science is also laudable, IMO, and a great positive consequence—even if not for all the right reasons—of Bem's studies on academia. But I question the very considerable assumption behind his approach, the elephant in the room, and postulate that perhaps, in addition to questioning the scientific method, we should take the psi hypothesis seriously.

I would be interested to see if there is any skeptic on this board who disagrees with the main thrust of my conclusions, as represented on Scott's blog.
 
Without commenting on the statistical technicalities involved, I found those two "control group" articles frustrating. To me, they both seemed to ultimately reduce to:

"We know psi doesn't exist."
"Well, what about this evidence?"
"That's not evidence for psi because we know it doesn't exist."

P.S. Not entirely correct, I know, as one did indicate they'd believe it given X, but their commentary left me with the above impression despite that.
 
That's a very interesting discussion, Johann. Scott's article shows that he doesn't believe in psi, but I note that Scott thinks there'd be an evolutionary advantage to a form of psi that can "see" into the near future (his comment posted May 1, 2014 at 1:07 am), which is just what Bem's meta-analysis and Radin's presentiment work seem to show.

btw, I disagree with Scott's choice of putting Meta-analyses on top of a Scientific Pyramid. In all my readings on the subject, the findings of an m-a are always trumped by a large scale replication.
 
I'm not familiar with Scott Alexander (I realize that's a pseudonym). I read a few of his blog posts. He makes some mistakes that suggest he is coming at this from a social science field, like psychology? He doesn't seem to be all that familiar with evidence-based practices.

The top of the evidence pyramid would be systematic reviews of only high-quality (including size) confirmatory studies, not meta-analyses of this type. I do agree that it should not be at all difficult to continue to put together MA's of this type for parapsychology, or any field you care to mention. One of my favourite satirical articles was an offer from two of the founding fathers of evidence-based medicine to prostitute themselves in order to put together a package of positive results for any interested parties.

I agree that it is amusing to watch people squirm when Bem offers up results which satisfy the conditions which they claimed would serve as proof. :)

Linda
 
Once again we find the skeptics retreating to the safety of Bayesian analysis so that they can set their priors to "Kill" and chase away the scary psi.

The assumptions they use to justify this are pure opinion and no statistical analysis should begin and end with assumptions based solely on opinion. Ideally, you would use the real world as your guide to establish the probability of psi, not some narrow academic opinion. And in the real world, people relate their psi experiences all the time and there are whole cultures and sub cultures built up around it. Therefore the most likely outcome will be that billions of people are not deluded and that psi exists. And the priors should reflect this.
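
To make the point concrete, here is a toy Python sketch of how a posterior follows from prior odds and a Bayes factor; every number in it is invented purely for illustration:

```python
# Toy illustration (invented numbers): with a sufficiently extreme prior,
# no realistic amount of experimental evidence moves the posterior.
def posterior_prob(prior_prob: float, bayes_factor: float) -> float:
    """Posterior probability from a prior probability and a Bayes factor
    (the likelihood ratio favouring the hypothesis)."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

# A skeptic's prior of 1 in a trillion vs an even prior, given the same evidence:
print(posterior_prob(1e-12, 1e6))  # ~1e-6: psi still effectively impossible
print(posterior_prob(0.5, 1e6))    # ~0.999999: psi a near-certainty
```

The same Bayes factor of a million yields opposite verdicts, which is the sense in which such an analysis begins and ends with the prior.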
 
btw, I disagree with Scott's choice of putting Meta-analyses on top of a Scientific Pyramid. In all my readings on the subject, the findings of an m-a are always trumped by a large scale replication.

How do you determine "trumped" though?

In this excellent primer on how meta-analysts, and meta-analysis in general, have responded to criticisms (which I recommend to people here, as a sort of counterbalance to the critiques frequently leveled at MAs), it is noted that large-scale randomized trials obtain results differing from each other with the same frequency that meta-analyses do: a third of the time. Personally, I think well-conducted meta-analyses and well-conducted large-scale replications occupy a similar position on the evidential ladder, with different strengths.

For example, I would contend that a highly significant meta-analysis and a highly significant large-scale replication fare about as well (and the empirical evidence supports this) at informing us (1) whether an effect exists. For large trials, however, it is easier to see whether the methodology implemented was stringent and convincing, and so this logically implies that the summary measure of such experiments provides better a priori evidence for (2) the veracity of an effect than the summary measure of meta-analyses. But meta-analysis is immeasurably superior when it comes to identifying (3) the nuances of an effect; for example, under what conditions it replicates, moderating variables, heterogeneity, how quality is related to outcomes, whether typical predictions are borne out, etc.—all of which help us form conclusions about (2) the veracity of an effect as well as or better than the summary measure alone.

I would put meta-analyses and narrative reviews of the highest-powered studies at the top of the evidence ladder. Ideally, IMO, every MA should include a narrative review of its highest-powered studies, but this doesn't seem to be standard practice.

EDIT: Interesting to note: Borenstein and Rothstein, in the linked text, call meta-analysis a kind of systematic review. I agree.
 
A systematic review refers to gathering together all the relevant studies in a systematic manner. The results can then be summarized in a narrative fashion or a numerical fashion (i.e. meta-analysis). Studies can be summarized as a meta-analysis without having been collected through systematic review (this seems to apply to many of the ganzfeld meta-analyses). The critical step in the process (in terms of risk of bias and validity of the summary) is the systematic review part.

Linda
 
The whole experimenter effect seems pretty interesting, with the bolded part making me laugh (the "nominative determinism" line was funny too):

Schlitz is a psi believer whose staring experiments had consistently supported the presence of a psychic phenomenon. Wiseman, in accordance with nominative determinism, is a psi skeptic whose staring experiments keep showing nothing and disproving psi. Since they were apparently the only two people in all of parapsychology with a smidgen of curiosity or rationalist virtue, they decided to team up and figure out why they kept getting such different results...

...The results? Schlitz's trials found strong evidence of psychic powers, Wiseman's trials found no evidence whatsoever.

Take a second to reflect on how this makes no sense. Two experimenters in the same laboratory, using the same apparatus, having no contact with the subjects except to introduce themselves and flip a few switches – and whether one or the other was there that day completely altered the result. For a good time, watch the gymnastics they have to do in the paper to make this sound sufficiently sensical to even get published. This is the only journal article I've ever read where, in the part of the Discussion section where you're supposed to propose possible reasons for your findings, both authors suggest maybe their co-author hacked into the computer and altered the results.
 
A systematic review refers to gathering together all the relevant studies in a systematic manner. The results can then be summarized in a narrative fashion or a numerical fashion (i.e. meta-analysis). Studies can be summarized as a meta-analysis without having been collected through systematic review (this seems to apply to many of the ganzfeld meta-analyses). The critical step in the process (in terms of risk of bias and validity of the summary) is the systematic review part.

Linda

Here I will respectfully diverge from you. Meta-analyses in parapsychology involve a thorough search of the greater study population according to pre-specified inclusion criteria, these days with an emphasis on the grey literature, which means they generally meet (and frequently exceed) the standard requirements for a systematic collation of studies. Moreover, where the file-drawer is concerned, ganzfeld studies in particular draw from regions of low bias—small, concerted research groups where everyone knows everyone else, and the probability of gathering the relevant studies is very high (a fact borne out by conservative file-drawer calculations such as Darlington-Hayes and Orwin's fail-safe, funnel plots, and empirical searches of the file-drawer). I would quickly note, though, that this isn't the case for psychokinesis; mind-matter studies are much easier to produce and file away, so the checks against selective reporting need to be much more intense. When I have time, I intend to go through Bosch et al. (2004) to see how well it guarded against bias, as well as independent reviews of the MA.
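
For anyone curious what Orwin's fail-safe actually computes, here is a minimal Python sketch; the study count, mean effect, and criterion below are hypothetical placeholders, not figures from any actual ganzfeld meta-analysis:

```python
# Orwin's fail-safe N: the number of unpublished null studies needed to
# drag the observed mean effect down to a "trivial" criterion level.
# All numbers below are hypothetical.
def orwin_failsafe_n(k: int, mean_effect: float, trivial_effect: float) -> float:
    """k observed studies with a given mean effect size; returns how many
    hidden zero-effect studies would reduce the pooled mean to the
    trivial criterion."""
    return k * (mean_effect - trivial_effect) / trivial_effect

# e.g. 30 studies averaging an effect size of 0.15, criterion 0.05:
print(orwin_failsafe_n(30, 0.15, 0.05))  # -> 60.0 hidden null studies
```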

This isn't to say there aren't lax meta-analyses in parapsychology that manage to fail one or more of these requirements (Radin's Entangled Minds MA comes to mind), but for the most part a meta-analysis like Storm (2010) is a systematic review. Was Storm (2010) a high-quality systematic review? I don't think so (for me "high quality" means exceptional); Storm et al.'s MA, for example, did not make use of the best statistical techniques, such as random effects models and mixed effects models (it instead used a standard Stouffer Z test—although the more accurate exact binomial was also provided as a supplement); nor did it include a narrative review of the most significant, highly powered studies; nor did it perform calculations on a subset of highly powered studies only (a technique that has been shown to significantly reduce the effects of the file-drawer); nor did it thoroughly explore moderator variables, etc. But it's a pretty nice systematic review, all things considered—it just could have been better. Honestly, on a subject like parapsychology, I would like to see a meta-analysis that is obviously overdone—pre-specified analyses of important relationships, post-hoc analyses of important relationships, etc.—the kind that Maaneli, Tressoldi, and Storm will be performing in the near future. However, together with Maaneli, I have done many of the kinds of analyses I would have wanted to see in Storm et al. (2010); for example, study sample size vs. outcomes (positive) and study quality vs. outcomes (positive) in the most recent subset of studies.
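
For reference, here is a minimal sketch of the unweighted Stouffer Z combination mentioned above, with invented study-level z-scores (not data from Storm et al. 2010):

```python
# Unweighted Stouffer Z: sum the study z-scores and divide by sqrt(k).
# The z-scores here are made up purely for illustration.
from math import sqrt
from scipy.stats import norm

z_scores = [1.2, 0.4, 2.1, -0.3, 1.8]  # hypothetical per-study z-scores

stouffer_z = sum(z_scores) / sqrt(len(z_scores))
p_one_tailed = norm.sf(stouffer_z)  # survival function: P(Z > stouffer_z)
print(f"Stouffer Z = {stouffer_z:.3f}, one-tailed p = {p_one_tailed:.4f}")
```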

I'm willing to bet money that if you gather only the most highly powered studies (and there are a number of them) in the ganzfeld and perform a systematic review of just those, you will find an enhanced effect. My prediction is based upon the analyses we have done for our paper. If anyone would like me to elaborate on why many Gz studies are highly powered, I will do so.
 
So I was curious. I gathered together the six studies in the most recent Gz database of Storm (2010) that had 100 trials or more (Goulding, Westerlund, Parker & Wackermann, 2004; Lau, 2004; Putz, Glasser & Wackermann, 2007; Smith & Savva, 2008; Parra & Villanueva, 2006; and Dalton, 1997) and found a hit rate of 34.89%, with a binomial p = 1.0589*10^-9. This is higher than the 32.2% HR reported as the proportion of hits for all 30 studies, and since it is exceedingly unlikely any experiments were missed with an n of 100 or higher from 1997 to 2008, this is evidence against a file-drawer explanation for that period.
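
For readers who want to reproduce this kind of pooled check, the Python sketch below shows the mechanics. Since I haven't listed the individual hits and trials here, the per-study counts are placeholders; 25% is the chance hit rate of the standard four-choice ganzfeld design:

```python
# Pool (hits, trials) across the large studies and run an exact binomial
# test against the 25% chance rate. The counts below are placeholders.
from scipy.stats import binomtest

studies = [(45, 128), (38, 100), (40, 120), (35, 100), (42, 110), (48, 128)]

hits = sum(h for h, _ in studies)
trials = sum(n for _, n in studies)
result = binomtest(hits, trials, p=0.25, alternative="greater")
print(f"hit rate = {hits / trials:.2%}, exact binomial p = {result.pvalue:.3g}")
```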

Although I suspect you disagree with many of my conclusions, Linda (most notably: psi exists!), I also think you would agree with me that ganzfeld studies provide intriguing evidence deserving of replication on a large scale, with improved protocols. In answer to a question you posed to me in a previous thread, BTW, yes, Maaneli and I discuss ways to guard against bias in our paper.
 
How do you determine "trumped" though?

It's simply the impression that I have got after reading several books and articles on statistics: that, if you give an expert in a particular topic the results of a meta-analysis and the results of a large-scale replication, the expert will give more evidential weight to the replication. (In theory, anyway. In practice, the expert will probably favour whichever result he agrees with.)
 
For those it may interest, I thought I might provide my basic, rough-and-ready personal perspective on skeptics, parapsychology, and psi, so you may know where I come from.

For those of you who have seen me post for a while, it is probably apparent that my opinion has greatly mellowed over the years. I came into parapsychology believing the evidence was solid, unshakable, and immediately convincing, and that skeptics were ideologically driven and ignorant. Nowadays, after having participated in some of the research itself, corresponded with parapsychologists, and endlessly debated the data with people, I have come to see some of that irrational, emotional, and ignorant personality in all of us—but also a much higher level of contribution from both sides than I expected. At the same time, having obtained what I believe to be a more realistic picture of parapsychology, I have many times been taken off my guard by the quality of the evidence there actually is for some forms of psi—some of it quite difficult to find—making me see great promise in a variety of novel approaches, the most important of these, from a social perspective, being open-access.

The major problem with parapsychology today, IMO, is that it is for the most part a community of insiders. In some respects, it is understandable that it hasn't been accepted into the mainstream, since paywalls and subscription journals continue to dominate the field (as they have dominated other fields), keeping scientists in other areas a step away from the raw data. Parapsychology's major institutions, moreover, seem to be exclusively concerned with research, making few efforts to educate the public about their results. While this may have been understandable in the past, today, in the era of Big Data, people cannot afford to comb obscure archives for poor-quality PDFs (no matter how interesting their undiscovered results may be) in order to intensely scrutinize a field they're not even sure will be worth their time to examine. I believe we need innovative open-source databases, visual presentations, and tools that allow just about anyone to explore the data with their own eyes, as one condition without which integration with the mainstream will be difficult.

On skepticism: I hold a moderate opinion. Here at Skeptiko, generally, I believe the regular skeptics have arrived at their positions through a reasonable process, and now hold opinions on the experimental evidence of a respectable, defensible character, given their experiences and efforts (though naturally I and others disagree with many of them, and there may be an exception or two).

Where we agree, though, is fundamental, and often overlooked: (1) parapsychology is not a pseudoscience, (2) the results from parapsychology are intriguing, and (3) parapsychology is deserving of, and would benefit from, more serious mainstream attention (e.g. scrutiny, replication, funding, etc)—even if we disagree on precisely why that hasn't happened yet (although, again, the points of agreement even here are pretty surprising). We recognize a common form of skeptical conduct, as well, that is unproductive and ill-advised (e.g. derision, scorn, ignorant pronouncements without evidence, etc), agreeing that this is fairly common in the wider skeptical community in places like the JREF, where the underexposed are concerned (though, perhaps, typical to some degree in everyone).

Indeed, with respect to what I consider to be the most important position in this debate, we find ourselves not at cross-purposes but hand-in-hand—for whatever a proponent may think about the individual conclusions of any of the main skeptics on this board, there is one position where I would trust almost all of them to stand with me, as inexorably as most any other proponent on this board would: supporting the advancement of parapsychology as a field (i.e. improving its methodology, human resources, financial resources, open-access of data, etc). This is what counts, IMO, and what allows a productive discourse to be possible in the best tradition of science and philosophy (allowing even for the generation of ideas that may actually have a positive impact on the field itself!).

Of course, insofar as other positions are concerned (e.g. NDEs, materialism, case reports, etc), I realize my generalizations may not apply; it is for those knowledgeable in such areas to decide for themselves. I claim only that, with regard to parapsychology, the ground for productive exchange exists, is alive, and is making progress. If you look at the very first posts from Arouet, Paul, and even Linda back on the JREF, and compare them to their posts now, I believe you will see a marked difference in the quality of their conclusions. The same is true of several proponent posters.

Finally, I will address my opinion of this community generally. There has indeed been a lot of ugliness here in the past, and I do believe many among us could and should have acted differently from the way we did. However, while I am not afraid to say there has been disingenuousness, intellectual dishonesty, and unwarranted provocation in many previous conversations, I will admit—and this is probably the closest any offended parties will ever get to an apology, so hear this well—that I am guilty of all three charges, at one time or another. Sorry! Those days are behind us, and I value the expertise, well-rounded opinions, and intelligent commentary of this community's contributors.

There you have it: my views. I hope some among you will find them agreeable—but if you don't, I want you to know that there is a non-negligible possibility we can still get along. :-)
 
Johann,

Parapsychology has never had as much interest for me as spirituality; to me, it doesn't matter much whether or not psi exists. For one thing, my spiritual views don't rely on it doing so, though if it does, they accommodate that. I'm about as psychic as a house brick myself; also, my understanding of statistics is somewhat limited, and your contributions, erudite as I sense they are--and as sure as I feel of your principled sincerity--go a little over my head.

Whatever the case, I don't feel that it's primarily a question of evidence. It's more one of worldviews, which one challenges at one's peril if they happen to be considered orthodox. It takes very strong evidence indeed to overturn a worldview. I'm not saying that never happens, even sometimes in the case of dedicated sceptics, but it's not common.

It's the arcane nature, for many laypeople, of some of the statistics involved, that presents a barrier to effective communication of the evidence. We're aware of how statistics and statistical methods can be abused, and how confirmation bias is rampant in a number of controversial areas. Many of us distrust statistics, and it's not so much that we accept or reject them, as that we ignore them. Anecdotal evidence is something that people can understand, and for all the cautions against relying on anecdote, it may often be the most persuasive.

If psi exists, plainly it's a marginal phenomenon which a lot of people, me included, have no certain personal experience of. I appreciate that many people accept on trust the existence of the Higgs Boson, which was determined on statistical grounds, but then again, the existence or non-existence of that isn't of much import in people's lives. It's only when it is of import that the arguments, often acrimonious, begin.

I don't think that statistical studies are ever going to prove the decisive factor in the acceptance of psi, however confident researchers are of their analyses. Often, P is deemed equal to 1 in such things, either through individual experience or through unshakeable conviction in the absence of that experience. Personally, lacking either the experience or the unshakeable conviction, I'm an agnostic, though am genuinely open to, and probably lean towards, the existence of psi: and that is based more on persuasive anecdotal evidence than statistics.

We've discussed here before the fact that in a strict sense, "anecdotal" simply means unpublished rather than evidentially unreliable. In fact, it seems that "anecdotes" have often been published, even in peer-reviewed literature. And in some areas of scientific interest (Darwinian evolution comes to mind), quite often things that are published in journals don't even reach the level of evidential unreliability: they're pure speculation for which, if anything, the actual evidence is against them. Nonetheless, they may be deemed written on tablets of stone because they fit in with a certain worldview.

So: what would need to change would be the materialistic worldview of many scientists. Changes in worldview can happen very quickly in society at large: I mean, having been born in 1950, I've seen enormous changes in my lifetime. However, in science, if anything the trend has been in the opposite direction: towards entrenchment of views. At one time, it was more respectable to consider the possible existence of psi, and in general, to consider possibilities that ran against consensus. There might have been opposition to unconventional theories, etc., but scientists have never had to be more careful about openly bucking convention. Why is that, I wonder?

It's a combination of things. First, it's the way the system is organised and who controls it: and as usual, one has to follow the money. Big Science is all about funding, and the biggest sin is doing or saying anything that will threaten that. Second, it may be a reaction to the very speed of change in society at large. There have always been large numbers of people who believe in psi, but never has it been so out-in-the-open and so widely discussed, so that it can't be ignored and dismissed out of hand.

Materialists may see that as the danger of superstition prevailing, and to be sure, there's a lot of woo out there, but equally, there are a lot of educated laypeople with discriminating minds who don't accept the woo and yet still find there is a residue that can't be easily dismissed. Which relates to a third point: the fact that people are more educated than ever and now have information sources, especially the Internet, which are wide-ranging and accessible with lightning speed.

It's creating the possibility of global dissent from various orthodoxies, together with awareness of who the most qualified spokesmen on all sides are (many of whom have their own Web sites and/or blogs). Anyone with an open mind coming fresh to a particular area of interest can rapidly identify the major players and ideas. Of course, anyone with a prior predilection can find like-minded support that confirms their opinions. One frequently finds groups that are at each other's throats, but indirectly, firing broadsides from the safety of their own dugouts.

Skeptiko is a place where, in theory at least, both sides of the psi debate can engage in constructive discussion, though I find it pointless conversing with some people (on both sides) because, as Alex pithily puts it, they are "stuck on stupid" and constantly regurgitate the same points: so I end up putting them on ignore to stop them wasting my time (and theirs, did they but know it).

I didn't know where my comment was going when I started, but for whatever it's worth, this is what I find I have said. Whether it's any use is a different matter! :)
 
While this may have been understandable in the past, today, in the era of Big Data, people cannot afford to comb obscure archives for poor-quality PDFs (no matter how interesting their undiscovered results may be) in order to intensely scrutinize a field they're not even sure will be worth their time to examine. I believe we need innovative open-source databases, visual presentations, and tools that allow just about anyone to explore the data with their own eyes, as one condition without which integration with the mainstream will be difficult.

That kind of database isn't particularly difficult to build, though it would require people with a lot of free time to contribute to the bibliography.
 
With all due respect to all this attention to detail, I can't see the need for any more replication-style Ganzfeld experiments. Telepathy has already been proven by any sane scientific standard and there is no need for additional proof. It's a much better idea to spend the money and effort on more interesting research.

At this point, with replicated studies in so many areas of psi research and with general acceptance of psi around the world except in some narrow, hierarchical, white male dominated areas (a.k.a. academia and mainstream press) you have to work hard not to accept the reality of psi.

When academics bizarrely refer to parapsychology as a "control" science which supposedly demonstrates a flaw in statistics, they have reached batshit crazy land. No sane person will ever take this seriously. It is the ultimate example of people explaining away evidence that they don't like and a reflection of emotional immaturity. No amount of evidence can ever convince people who would rather throw vast tracts of science under the bus than accept the reality of psi.
 
...except in some narrow, hierarchical, white male dominated areas (a.k.a. academia and mainstream press) you have to work hard not to accept the reality of psi.
While I might sympathise with the overall force of your point, Craig, I do rather object to the implied racism and sexism in this comment. If academia were dominated by black or brown women, would it be any different? You don't know, do you? I'm tired of people taking pot shots at white males because it's PC to do so. I'm one of them, and am just as offended by what you said as people of colour are when they are stereotyped.
 
I have many times been taken off my guard by the quality of the evidence there actually is for some forms of psi—some of it quite difficult to find—making me see great promise in a variety of novel approaches, the most important of these, from a social perspective, being open-access.
Can you give an example of the kind of evidence/phenomena that you refer to--and possibly a hint at the kind of novel approach to studying it that this triggered?

Cheers,
Bill
 
While I might sympathise with the overall force of your point, Craig, I do rather object to the implied racism and sexism in this comment. If academia were dominated by black or brown women, would it be any different? You don't know, do you? I'm tired of people taking pot shots at white males because it's PC to do so. I'm one of them, and am just as offended by what you said as people of colour are when they are stereotyped.

I'm an older white male, so I fit in the category I'm criticizing, by the way. It's not the skin color that matters, of course, but rather the cultural baggage that goes along with it, and the gender. The fact that skeptics tend to be well-educated white males is a result of the effects of their lacking a well-rounded education and a wide variety of interests and experiences (because they're devoting most of their time to their specialties), of geography (being influenced heavily by post-industrial western civilization's values, attitudes, ideas and beliefs), and of the fact that, overall, men tend to be better at tasks requiring a very narrow focus and worse at tasks involving generalized awareness than women.
 
Here I will respectfully diverge from you. Meta-analyses in parapsychology involve a thorough search of the greater study population according to pre-specified inclusion criteria, these days with an emphasis on the grey literature, which means they generally meet (and frequently exceed) the standard requirements for a systematic collation of studies.

I don't disagree that they are getting better at collecting studies these days. However, there is still no comparison between a description of the search and specification of inclusion and exclusion criteria for most Ganzfeld meta-analyses and a typical Cochrane Collaboration meta-analysis. And we know from the work done by Ersby that even Storm and Tressoldi missed and mis-characterized studies.

This isn't to say there aren't lax meta-analyses in parapsychology that manage to fail one or more of these requirements (Radin's Entangled Minds MA comes to mind), but for the most part a meta-analysis like Storm (2010) is a systematic review. Was Storm (2010) a high-quality systematic review? I don't think so (for me "high quality" means exceptional); Storm et al.'s MA, for example, did not make use of the best statistical techniques, such as random effects models and mixed effects models (it instead used a standard Stouffer Z test—although the more accurate exact binomial was also provided as a supplement); nor did it include a narrative review of the most significant, highly powered studies; nor did it perform calculations on a subset of highly powered studies only (a technique that has been shown to significantly reduce the effects of the file-drawer); nor did it thoroughly explore moderator variables, etc. But it's a pretty nice systematic review, all things considered—it just could have been better. Honestly, on a subject like parapsychology, I would like to see a meta-analysis that is obviously overdone—pre-specified analyses of important relationships, post-hoc analyses of important relationships, etc.—the kind that Maaneli, Tressoldi, and Storm will be performing in the near future. However, together with Maaneli, I have done many of the kinds of analyses I would have wanted to see in Storm et al. (2010); for example, study sample size vs. outcomes (positive) and study quality vs. outcomes (positive) in the most recent subset of studies.

Note that the exact binomial test on pooled results, as used by parapsychologists, is on the list of "not recommended" in terms of performing a good-quality meta-analysis (I don't think it helps to bring up tests that involve going out on a limb when others (non-proponent scientists) are used to conservative testing; it obviously isn't necessary either, since the significance of the results doesn't depend upon it). Although "adequately powered" is one of the elements of quality, there are too many other elements involved to be able to make much use of analyses which look only at size. And since the Ganzfeld design, on its own, is at best "fair quality", any analyses of quality vs. outcome hit a ceiling just at the point where they could begin to be useful. I agree that using the analysis to explore moderator variables is probably one of the most important uses of a meta-analysis under these conditions. But I think that rather than plans for continued data massaging of reduced-quality studies, it makes more sense to use that information to go forward to plan and perform Ganzfeld confirmation studies at low risk of bias/good quality.
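
For contrast with the pooled binomial, here is a rough sketch of the DerSimonian-Laird random-effects pooling usually recommended for this purpose; the per-study effects and variances are invented for illustration:

```python
# DerSimonian-Laird random-effects meta-analysis, minimal version.
# Effects and variances below are hypothetical, for illustration only.
import numpy as np

effects = np.array([0.20, 0.05, 0.35, 0.10])     # per-study effect sizes
variances = np.array([0.01, 0.02, 0.015, 0.01])  # their sampling variances

w = 1 / variances                                # fixed-effect weights
fixed_mean = np.sum(w * effects) / np.sum(w)

# Cochran's Q heterogeneity statistic and the DL estimate of the
# between-study variance tau^2
Q = np.sum(w * (effects - fixed_mean) ** 2)
df = len(effects) - 1
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects weights fold tau^2 into each study's variance
w_re = 1 / (variances + tau2)
re_mean = np.sum(w_re * effects) / np.sum(w_re)
re_se = np.sqrt(1 / np.sum(w_re))
print(f"RE pooled effect = {re_mean:.3f} +/- {1.96 * re_se:.3f}")
```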

I am glad to hear that you and Maaneli discuss ways to guard against bias (or as I put it, "operate in an environment at a low risk of bias"). Your previous description of your paper sounded like what I have suggested above (and have suggested for several years), so I wanted to confirm that we were on the same track with this.

Linda
 
Note that the exact binomial test on pooled results, as used by parapsychologists, is on the list of "not recommended" in terms of performing a good-quality meta-analysis (I don't think it helps to bring up tests that involve going out on a limb when others (non-proponent scientists) are used to conservative testing; it obviously isn't necessary either, since the significance of the results doesn't depend upon it).

I think there's going to be continued annoyance with phrases like this, since there seems to be no relationship between how many extra degrees of paranoia a parapsychology paper employs over a classical one and whether or not "methodological issues" gets rubber-stamped over the paper. Scott's writing openly admits that even an "MA which proves its point 100%" would not be acceptable enough to make an argument for psi; he would prefer to classify it as flawed, even though for the purposes of his own thought experiment the MA is 100% reliable.

I'll continue to hold that they need to move on to more developed studies (e.g. how can we coax a more powerful result, what can we do with this result) instead of doing more baseline studies of any kind. If the null holds, more developed studies aren't possible, and if the null fails, the further-progressed studies will bear that out as well as giving more certainty. Problems like "does any psi-like mechanism seem to exist or not" are no longer unique to psi studies, so I hold the answer is to move on to something more interesting instead of aiming for impossibly ever-better standards.
 