Mod+ Johann and Maaneli's recommendations for parapsychology research

Right, but we have an engine right now that seemingly violates COM. We might need to rewrite some of the book, but let's admit it: we're only really on page one of the big book of science.

What's COM?

Also, our knowledge of the standard model is almost assuredly flawed. Most physicists will not argue that point.

Indeed it must be flawed in some parts if it doesn't account for everything (especially gravity). However, it's one thing to say it's flawed, and another to point out exactly where. Nobody seems to expect a new force that operates outside extremely low ranges, so it would come as a great surprise if such a force existed. Given that this part seems to be in the "extremely confident about this" area, we need top-quality evidence to rewrite the book of science.

I'm not saying we shouldn't; I'm just saying that we need to be extra cautious when what's at stake is something of such magnitude. So it does seem necessary that parapsychologists step right up with the strictest protocol possible, which seems to include pre-registration and many of the other things that pharmacology uses.
 
Subtle verbal and physical cues, thin walls, improper randomization, failure to double blind, failure to report findings after unsuccessful studies.

The innate risk of bias doesn't have to do with the protocol being conducted properly. The innate risk of bias is in the protocol itself. Ganzfeld protocol will have some innate risk of bias simply because we're dealing with people and things that may or may not be out of our control.

Thanks. But really by "properly conducted" I meant studies with no sensory leaks, adequate randomisation and double blinding. Also, in a double blind protocol I don't think there should be any scope for verbal and physical cues. And I'd have thought failure to report unsuccessful studies could apply to any kind of study, including presentiment.

Granted, in principle there might be scope for cheating by the sender and receiver, perhaps by hi-tech means, which wouldn't be the case for presentiment. Though isn't that really because they are (or may be) testing different things? That scope would be eliminated in a Ganzfeld experiment without a sender.

On the other hand, without knowing too much about it, my impression is that the interpretation of presentiment measurements is a lot less clear-cut, and offers a lot more scope for controversy about data mining. At least with Ganzfeld experiments the hit-rate is a simple and obvious measure of success, whose statistics are very easy to analyse.
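To make the "very easy to analyse" point concrete, the significance of a Ganzfeld hit rate can be checked with an exact one-sided binomial test. A minimal sketch in Python; the 35-hits-in-100-trials figure is purely hypothetical:

```python
from math import comb

def binomial_p_value(hits: int, trials: int, chance: float = 0.25) -> float:
    """One-sided exact binomial p-value: the probability of observing
    at least `hits` successes in `trials` trials by chance alone."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(hits, trials + 1)
    )

# Hypothetical example: 35 hits in 100 four-choice trials (25% expected by chance)
p = binomial_p_value(35, 100)
print(f"hit rate = 35%, one-sided p = {p:.4f}")
```

With a fixed chance rate of 25% per trial, this one number settles whether a study beat chance, which is exactly what makes Ganzfeld statistics so tractable compared to presentiment measures.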
 
What's COM?
Conservation of momentum.

Indeed it must be flawed in some parts if it doesn't account for everything (especially gravity). However, it's one thing to say it's flawed, and another to point out exactly where. Nobody seems to expect a new force that operates outside extremely low ranges, so it would come as a great surprise if such a force existed. Given that this part seems to be in the "extremely confident about this" area, we need top-quality evidence to rewrite the book of science.

I'm not saying we shouldn't; I'm just saying that we need to be extra cautious when what's at stake is something of such magnitude. So it does seem necessary that parapsychologists step right up with the strictest protocol possible, which seems to include pre-registration and many of the other things that pharmacology uses.

Right, I'm not arguing that stronger evidence for psi wouldn't do the field a world of good. I just feel that at this juncture in science we have a lot of missing pieces in our puzzle.
 
Thanks. But really by "properly conducted" I meant studies with no sensory leaks, adequate randomisation and double blinding. Also, in a double blind protocol I don't think there should be any scope for verbal and physical cues. And I'd have thought failure to report unsuccessful studies could apply to any kind of study, including presentiment.

Well, if we're talking about experiments done in the same building under the same controls, then you're correct. However, you have to keep in mind that the environments in which the experiments are conducted, and the different experimenters, could feasibly contribute to false positives.

Granted, in principle there might be scope for cheating by the sender and receiver, perhaps by hi-tech means, which wouldn't be the case for presentiment. Though isn't that really because they are (or may be) testing different things? That scope would be eliminated in a Ganzfeld experiment without a sender.

Right. They've found an effect even without a sender in Ganzfeld studies. This is a contributing reason why parapsychologists are so high on presentiment right now.

On the other hand, without knowing too much about it, my impression is that the interpretation of presentiment measurements is a lot less clear-cut, and offers a lot more scope for controversy about data mining. At least with Ganzfeld experiments the hit-rate is a simple and obvious measure of success, whose statistics are very easy to analyse.

I'm not sure hit detection methods really vary much between the two. I could be wrong.
 
I'm saying that in a universal registry world you would not include studies that weren't pre-registered regardless of any other factors. They won't factor into the equation at all. The file drawer problem is therefore eliminated. The entire pool of studies for inclusion will be selected from the registry.
I don't see how this eliminates the file drawer problem. Imagine there were half a dozen studies that fit the other inclusion criteria but weren't registered. Say they were all positive: wow! Say they were all negative: oops.

~~ Paul
 
If everyone registered every study, it would eliminate the file drawer. The problem is knowing whether everyone did, indeed, register every study.

~~ Paul

Another reason to use a registry is to help prevent flexibility in outcomes and selective outcome reporting.

The "file drawer" is an issue if the intention is to perform studies which are inadequate to answer research questions on their own and are intended to be combined into meta-analysis. But if the intention is to perform studies which are adequate to reliably and validly answer research questions, then pre-registration is less about the file-drawer and more about addressing the issue of selective reporting within studies.
http://hiv.cochrane.org/sites/hiv.cochrane.org/files/uploads/Ch08_Bias.pdf (page 8.39)

So another question for anyone/everyone...what kinds of studies do you think Johann and Maaneli are suggesting be performed? (I'm not sure which it is, from reading the paper.)

Linda
 
I don't see how this eliminates the file drawer problem. Imagine there were half a dozen studies that fit the other inclusion criteria but weren't registered. Say they were all positive: wow! Say they were all negative: oops.

~~ Paul
In this case, their inclusion in the registry wouldn't be dependent on the outcome. The concern (with respect to a file drawer) is that studies which come to our attention are selected post hoc based on the results. So they aren't a representative sample, but a selected sample. We need a representative sample for meta-analysis.

Studies which are pre-registered should be a more representative sample, since they won't have been selected based on their results (assuming they were included in the registry before their results were known). And unregistered studies would have to be excluded for this to work (which would probably be hard to do if they were all positive :)).
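A quick simulation shows why a results-based selection ruins the sample even when every individual study is honest. This is only a sketch: the study size, study count, and 32% reporting cutoff are all made up for illustration.

```python
import random
from statistics import mean

random.seed(42)  # fixed seed so the sketch is reproducible

def run_study(n_trials: int = 50, chance: float = 0.25) -> float:
    """Simulate one Ganzfeld-style study under the null hypothesis
    (no psi): return the observed hit rate."""
    hits = sum(random.random() < chance for _ in range(n_trials))
    return hits / n_trials

all_studies = [run_study() for _ in range(1000)]

# File drawer: only studies with a hit rate well above chance get reported
reported = [hr for hr in all_studies if hr >= 0.32]

print(f"mean hit rate, all studies:      {mean(all_studies):.3f}")  # ~0.25
print(f"mean hit rate, 'published' only: {mean(reported):.3f}")     # inflated
```

The full pool averages out to chance, while the "published" subset looks like a real effect; a registry works precisely by forcing the meta-analysis to draw from the full pool.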

Linda
 
In this case, their inclusion in the registry wouldn't be dependent on the outcome. The concern (with respect to a file drawer) is that studies which come to our attention are selected post hoc based on the results. So they aren't a representative sample, but a selected sample. We need a representative sample for meta-analysis.

Studies which are pre-registered should be a more representative sample, since they won't have been selected based on their results (assuming they were included in the registry before their results were known). And unregistered studies would have to be excluded for this to work (which would probably be hard to do if they were all positive :)).
I can't help but wonder how much work is done before a study is registered. Nor can I help wondering whether past studies give the researchers a feeling for whether the study will be successful, and so affect whether they want to register it. But perhaps I'm just being curmudgeonly.

I agree that unregistered studies must be excluded. My concern is that the conclusion of the meta-analysis is incorrect because some studies are missing. This is clearly a problem in all fields and so I guess I'll just relax about it.

What do we do about studies that were registered but "never completed"?

~~ Paul
 
I can't help but wonder how much work is done before a study is registered. Nor can I help wondering whether past studies give the researchers a feeling for whether the study will be successful, and so affect whether they want to register it. But perhaps I'm just being curmudgeonly.

Yeah, those will continue to be concerns, even with a registry.

I agree that unregistered studies must be excluded. My concern is that the conclusion of the meta-analysis is incorrect because some studies are missing. This is clearly a problem in all fields and so I guess I'll just relax about it.

What do we do about studies that were registered but "never completed"?

~~ Paul
All we can do about them is recognize that our (hopefully) representative sample has now become biased.

Personally, I think we should be doing studies which are adequate to reliably and validly answer research questions on their own, so we don't have to figure out how to fix a problem we should be trying to avoid in the first place.

Linda
 
I'm not sure hit detection methods really vary much between the two. I could be wrong.

Hit detection is indeed less straightforward with presentiment. Ganzfeld studies ultimately care about whether you pick the target, whereas presentiment studies are based on whether some kind of detectable physiological spike happens prior to the stimulus being displayed. This calls into question variables like what constitutes a spike versus noise in a mathematical context, and how big the spike must be and how long before the stimulus is deployed it must happen to count as a hit. A short enough time window could simply be conflating the eyes->brain->concept->speech->thought delay the subject might have with presentiment, when in reality it's an autonomous survival response, for example. So you also have to consider how fast the monitor refreshes, and the average response rate of human eyes to noticing particular stimuli, as part of whether the spike happens early enough to count as a hit.

Those things could probably be standardized experimentally.
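As a sketch of how such a criterion might be standardized, here is one hypothetical hit rule: "the pre-stimulus window exceeds baseline mean plus three standard deviations." Every parameter here (window length, number of sigmas, what counts as baseline) is exactly one of the free choices described above, and is chosen purely for illustration.

```python
from statistics import mean, stdev

def is_presentiment_hit(signal, stimulus_index, window=30, n_sigmas=3.0):
    """Hypothetical hit criterion: does the signal in the window just
    before the stimulus exceed baseline mean + n_sigmas * baseline sd?

    `signal` is a list of samples; everything before the pre-stimulus
    window is treated as baseline. All parameters are illustrative --
    they are the very free choices a protocol would need to fix.
    """
    baseline = signal[: stimulus_index - window]
    pre_stimulus = signal[stimulus_index - window : stimulus_index]
    threshold = mean(baseline) + n_sigmas * stdev(baseline)
    return max(pre_stimulus) > threshold

# Flat baseline with one clear spike inside the pre-stimulus window
flat = [10.0, 10.1, 9.9, 10.0, 10.1, 9.9] * 20          # 120 baseline samples
spiky = flat + [10.0] * 25 + [14.0] + [10.0] * 4         # spike in the window
print(is_presentiment_hit(spiky, stimulus_index=len(spiky)))  # prints True
```

The point of writing the rule down is that it has to be fixed before the data are seen; otherwise the window and threshold become knobs to turn after the fact.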
 
This thread is a booby trap. Any critique will be used as justification to grapeshot the entire forum with pseudo-skepticism. Call me cynical, please.
 
Well, if we're talking about experiments done in the same building under the same controls, then you're correct. However, you have to keep in mind that the environments in which the experiments are conducted, and the different experimenters, could feasibly contribute to false positives.

Sorry, I'm afraid I still don't really understand what you're getting at. Unless I'm missing something, there shouldn't be a problem so long as no one involved in the judging process has (or can infer) any information about which option is the target, should there? And that shouldn't be very hard to ensure.
 
Hit detection is indeed less straightforward with presentiment. Ganzfeld studies ultimately care about whether you pick the target, whereas presentiment studies are based on whether some kind of detectable physiological spike happens prior to the stimulus being displayed. This calls into question variables like what constitutes a spike versus noise in a mathematical context, and how big the spike must be and how long before the stimulus is deployed it must happen to count as a hit. A short enough time window could simply be conflating the eyes->brain->concept->speech->thought delay the subject might have with presentiment, when in reality it's an autonomous survival response, for example. So you also have to consider how fast the monitor refreshes, and the average response rate of human eyes to noticing particular stimuli, as part of whether the spike happens early enough to count as a hit.

Yes, and my impression is that it's usually not a clearly identifiable spike, but just some measure calculated mathematically from the part of the signal preceding the stimulus. The obvious danger would be that there are a number of different possible measures, and to some extent they reflect independent properties of the signal. So in any particular study one could probably arrive at a statistically significant result by trying enough of them. That won't be a problem if the intended measure is specified before the study is performed, but in cases where that's not done there will be a lot of scope for sceptical criticism.
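That data-mining worry is easy to quantify: if a pure-noise study is allowed to try several candidate measures and report whichever one "works", the effective false-positive rate balloons well past the nominal 5%. A toy simulation, with all numbers purely illustrative:

```python
import random
from statistics import mean

random.seed(0)  # fixed seed for reproducibility

N_MEASURES = 10   # candidate pre-stimulus measures tried per study (assumed)
ALPHA = 0.05      # nominal per-measure significance level

def study_is_significant() -> bool:
    """A pure-noise 'study': each candidate measure independently has a
    5% chance of crossing its significance threshold; the study is
    counted as a success if ANY of them does."""
    return any(random.random() < ALPHA for _ in range(N_MEASURES))

false_positive_rate = mean(study_is_significant() for _ in range(10_000))
print(f"nominal alpha: 5%, family-wise false-positive rate: {false_positive_rate:.1%}")
# with 10 independent free measures the rate is roughly 1 - 0.95**10, about 40%
```

Pre-specifying the measure collapses this back to the nominal 5%, which is the statistical content of the pre-registration argument.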
 
Conservation of momentum.

What engine would that be?

Right, I'm not a proponent of argument that stronger evidence for psi wouldn't do the field a world of good. I just feel that at this juncture in science we have a lot of missing pieces in our puzzle.

Indeed we have. The critical point is where we should throw all the money, since there are many people claiming to know what's the next Big Thing in science, but resources and brains are limited. High-quality studies may just save us millions of dollars and thousands of hours of time by giving us the best evidence to be sure what we are dealing with.

If it were me in charge, I would throw a lot more money at parapsychology just to get the best possible evidence, then have it analyzed by the top statisticians, and then see whether I throw more money at it. But people seem to always ignore parapsychology in the science budget, which is just sad. I once read that the studies in parapsychology don't amount to even a month of the studies done in other areas of science; in a sense, the field doesn't even have a chance to prove psi worthy of study.
 
Sorry, I'm afraid I still don't really understand what you're getting at. Unless I'm missing something, there shouldn't be a problem so long as no one involved in the judging process has (or can infer) any information about which option is the target, should there? And that shouldn't be very hard to ensure.
My point is that it's harder to ensure than you might think. For instance, let's say the standard distance between the sender and receiver is 53 feet. 53 feet may be sufficient for medium-thickness walls, but what about thin walls? My point is that there are no homogeneous testing standards; all of them will have slight variations because they're done in different places with different people.
 
What engine would that be?



Indeed we have. The critical point is where we should throw all the money, since there are many people claiming to know what's the next Big Thing in science, but resources and brains are limited. High-quality studies may just save us millions of dollars and thousands of hours of time by giving us the best evidence to be sure what we are dealing with.

If it were me in charge, I would throw a lot more money at parapsychology just to get the best possible evidence, then have it analyzed by the top statisticians, and then see whether I throw more money at it. But people seem to always ignore parapsychology in the science budget, which is just sad. I once read that the studies in parapsychology don't amount to even a month of the studies done in other areas of science; in a sense, the field doesn't even have a chance to prove psi worthy of study.
Well, the amount of funding parapsychology has received since its inception is equal to about a month and a half of funding received by mainstream psychology.
 
RE: presentiment.

I was referring more to Bem-style presentiment experimentation. The person is given a choice between two computerized curtains; behind one is an exotic stimulus, and behind the other is nothing.

I know there are some EEG/EKG presentiment tests going on, but I was referring to those.
 
RE: presentiment.

I was referring more to Bem-style presentiment experimentation. The person is given a choice between two computerized curtains; behind one is an exotic stimulus, and behind the other is nothing.

I know there are some EEG/EKG presentiment tests going on, but I was referring to those.

Oh, I see - I was thinking of the ones based on physiological measurements. My comments wouldn't apply to Bem's experiments.
 
My concern is that the conclusion of the meta-analysis is incorrect because some studies are missing.

If each pre-registered study is unbiased, then a meta-analysis of those studies will be unbiased, and so inclusion of unregistered studies would be unnecessary.
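That pooling step is typically an inverse-variance (fixed-effect) weighted average; since the weights depend only on the variances, not the outcomes, an average of unbiased estimates stays unbiased. A minimal sketch, with hypothetical effect sizes and variances:

```python
def pooled_estimate(estimates, variances):
    """Fixed-effect (inverse-variance) meta-analytic pooling.
    If each study's estimate is unbiased, the weighted combination is
    unbiased too -- the weights rescale, they don't shift."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    est = sum(w * e for w, e in zip(weights, estimates)) / total
    var = 1.0 / total  # pooled variance is smaller than any single study's
    return est, var

# Hypothetical effect sizes and variances from three pre-registered studies
est, var = pooled_estimate([0.10, 0.02, -0.05], [0.04, 0.01, 0.09])
print(f"pooled effect = {est:.3f}, pooled variance = {var:.4f}")
```

The precise studies dominate the average, and the pooled variance shrinks below any single study's, which is the whole payoff of combining an unbiased registry pool.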

What do we do about studies that were registered but "never completed"?

Certainly one way for investigators to cheat the registry would be to peek at the data and only complete positive studies. But this ploy would be transparent, since there would be unfinished studies in the registry. Other questionable research practices, such as selectively excluding data points (which I suspect has always been happening), would be difficult to solve by using a registry.

Of course if presentiment exists it would be simpler to only pre-register the positive studies.
 