Mind-Matter Interaction at Distance: Effects on a Random Event Generator (REG)

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2423702

Abstract:
We used a new protocol to test whether subjects could influence the activity of a distant random event generator (REG). In a pilot study, participants selected for their strong motivation and capacity to control their mental activity were requested to alter the functioning of a REG located in a laboratory approximately 190 km away, so as to achieve a deviation of ±1.65 standard scores from the expected mean, during sessions lasting approximately 90 seconds. The predefined cutoff was achieved in 78% of the 50 experimental sessions, compared to 48% of the control sessions.

This study was replicated with a pre-registered confirmatory study involving thirty-four participants selected according to the same criteria as in the pilot study. Each participant contributed three sessions completed on three different days, giving a total of 102 sessions. The same number of control sessions was carried out.

The percentage of the 102 experimental sessions which achieved the predefined cutoff was 82.3%, compared to 13.7% for the control sessions.

We discuss the opportunities for exploiting this protocol as a mental telecommunication device.
 
Trying to make sense of the protocols here.

Psyleron REG: I couldn't find any info on the Psyleron site that really explained how the REG works. They say it's a quantum RNG, and vaguely refer to Quantum Tunnelling and filtering out physical factors, but I couldn't find a description of how it does so. Not saying that I'm presuming there is anything wrong with it, just that it would be nice to know exactly what kind of RNG it is and how it differs from other hardware RNGs.

Experimental sessions: I wasn't clear on how the sessions were determined. The authors write: "Sessions lasted from 60 to 200 seconds with setting of “Sample Size 10 bit” and “2 Per Second” on the Psyleron™ REG-1." Does anyone know what that means? How was it determined how long each session was? What is the reason for the different session lengths? Why not keep all sessions to the same length of time?
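My best guess - and this is purely an assumption on my part, not anything the paper or the Psyleron site confirms - is that "Sample Size 10 bit" means each sample is the sum of 10 random bits and "2 Per Second" means two such samples per second, with the output tracked as a cumulative deviation in standard scores. A minimal sketch of what that would look like under the null:

```python
import numpy as np

rng = np.random.default_rng(0)

def cumulative_z(num_samples, bits_per_sample=10):
    """Cumulative deviation in standard scores for a simulated null REG run,
    assuming each sample is the sum of `bits_per_sample` fair bits."""
    samples = rng.binomial(bits_per_sample, 0.5, size=num_samples)
    k = np.arange(1, num_samples + 1)
    expected = bits_per_sample * 0.5 * k         # expected cumulative sum
    sd = np.sqrt(bits_per_sample * 0.25 * k)     # SD of the cumulative sum
    return (np.cumsum(samples) - expected) / sd

# A 60-second session at 2 samples per second would be 120 samples:
z = cumulative_z(120)
print(z[-1], np.abs(z).max() >= 1.65)  # final score, and whether the running score
                                       # ever crossed ±1.65 (one reading of the cutoff)
```

But whether the ±1.65 cutoff applies to the final standard score or to the running curve is exactly the sort of thing I'd like the authors to spell out.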

The registry entry (glad to see they did that for the control experiment!) describes it as "to achieve a cumulative deviation above or below 1.65 with respect to the expected mean." So were they just concentrating until that result was achieved? If the sessions were different lengths, isn't that cutoff going to be harder or easier to reach depending on the length?

Not a lot of discussion about the statistical methods used and why. Perhaps in this case it is obvious to those with a science background - I know they have no duty to write with the lay person in mind, but I do wish they (and all scientists, for that matter) made more of an effort to explain more!


Controls: "
We compared the number of experimental and control sessions of the same length and bit rate in
which the cutoff was achieved. The control sessions were recorded by a single operator inside the laboratory where the REG was located." Does this mean that if there was 5 experimental sessions of 60 seconds, and 5 experimental sessions of 120 seconds, they would just record 5 controls sessions of 60 seconds, and 5 control sessions of 120 seconds? Ie: they duplicate the subject sessions? The wording is unclear here: by "We compared the number of experimental and control sessions of the same length and bit rate in which the cutoff was achieved." it could also imply that there were additional control and experimental sessions done? They don't really give much information at all about how these sessions were conducted.

Results: Can someone clarify exactly what the goal was here? What results would be expected under the null? How often should we get to this cutoff under the null? Is the null 50%?

Replication: This time they seem to have limited all sessions to 60 seconds (which makes sense to me). Here they seem to be more specific that they performed a matched control session. They go into more detail on how the participants prepared - not sure whether that means it was different in the original experiment.

Results: Here they say the average session was 62 seconds? Earlier they seemed to say the sessions were all set to 60 seconds. Here the control went way down in the opposite direction, at 13.7%. The authors suggest the only plausible reason could have been the location of the operator; they don't go into any more detail. Could other explanations be that the device isn't a great RNG? Or that the results are all over the place (13.7%, 48%, 78%, 82.3%), which just indicates that what we're seeing is normal variance?

How should we interpret the results of this replication? Should it be considered a failure? They say an efficiency of 80% is good but not perfect? What would perfect have been? And what should the 80% represent?

They seem to move on to calling the experiment a success and to suggesting that the results be improved through more motivated participants, without suggesting that future experiments should also try to figure out what happened with the control results. Or should that be implied?

I'm not sure whether I'm just missing a lot or whether there really should be more details provided. They published the data as well; I'll have to take a look at it to see if it helps, but I thought I'd post this to get discussion going.

Please note I have not made up my mind on these results - the questions I pose here are real questions and hopefully the forum experts will chime in!

Would also like comments (whatever one thinks of the results) on whether the level of detail provided in the methods seems adequate.
 
On the stats used:

Not sure if it makes any difference. The registry entry says they used a one-tailed test. If they were looking at the deviation going + or - 1.65, wouldn't that suggest a two-tailed test should be used? If not, what role does the two-sidedness of the cutoff play in deciding whether a two-tailed test is needed?

(again: I don't know if it makes a difference anyway in this case, just trying to understand)
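In case it helps anyone else, the arithmetic I'm trying to square is just the textbook critical values (nothing specific to this paper):

```python
from scipy.stats import norm

print(norm.ppf(0.95))     # ~1.645: one-tailed 5% critical value
print(norm.ppf(0.975))    # ~1.960: two-tailed 5% critical value (2.5% per tail)
print(2 * norm.sf(1.65))  # ~0.099: probability of falling outside ±1.65 under the null
```

So if crossings in either direction count, ±1.65 corresponds to roughly 10% in total rather than 5%, which is where my one- vs two-tailed confusion comes from.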
 
Results: Can someone clarify exactly what the goal was here? What results would be expected under the null? How often should we get to this cutoff under the null? Is the null 50%?

I didn't find the paper at all clear, but from the description it sounds as though a hit rate of just under 20% should be expected under the null hypothesis. That would mean the results were stupendously significant, and even the results of the control sessions in the pilot study would be highly significant.
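On that basis - taking the null hit rate as roughly 19%, and reading 78% of 50 as 39 hits and 48% of 50 as 24 hits, which are my own readings of the reported percentages - a quick binomial check on the pilot gives:

```python
from scipy.stats import binom

p_null = 0.19  # assumed null hit rate, from the rough reflection-principle estimate
print(binom.sf(39 - 1, 50, p_null))  # P(>= 39 hits in 50 sessions): pilot experimental
print(binom.sf(24 - 1, 50, p_null))  # P(>= 24 hits in 50 sessions): pilot control
```

Both tail probabilities come out vanishingly small.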

But I don't understand why they should need control sessions at all if the random event generator is working properly.
 
I didn't find the paper at all clear, but from the description it sounds as though a hit rate of just under 20% should be expected under the null hypothesis.

What are you basing this on? (You probably didn't mean hit rate - in this one they are going for a cumulative deviation.)
 
I'm encouraged they're getting positive results. And even better, as Johann conveyed (not on the forum), the study followed the strict parameters set out by the KPU at Edinburgh.

How do you interpret the results as positive? (serious question: as I mentioned above, the results seem to go all over the place?) Can you help explain what they were measuring, what the null result should be expected to be, and how they figured that out?

Anyone?

Where did Johann post that? Is this being discussed on another forum?
 
How do you interpret the results as positive? (serious question: as I mentioned above, the results seem to go all over the place?) Can you help explain what they were measuring, what the null result should be expected to be, and how they figured that out?

Anyone?

Where did Johann post that? Is this being discussed on another forum?

I heard from Johann via Facebook.
 
What are you basing this on? (You probably didn't mean hit rate - in this one they are going for a cumulative deviation.)

By "hit", I mean a session in which the threshold of ± 1.65 standard deviations from the starting point was achieved. It looks as though, under the null hypothesis, that should be achieved just under 20% of the time.

That's based on the following. If you have a Brownian path starting at position 0 and let it run for a specified time, call the position at that time X and the maximum achieved up to that time M. Then the probability of M lying between two positive values M_1 and M_2 is known (by the reflection principle) to be twice the probability of X lying between those values. So, from the choice of the cut-off, the probability of the maximum exceeding 1.65 standard deviations (of the final position X) is 2 × 5% = 10%. Similarly, the probability of the minimum being less than -1.65 standard deviations is 10%. In a few cases (but not many) both those conditions will be satisfied, so if we just added 10% and 10% we would be counting a few instances twice. So I reckon the probability of the cut-off being achieved is just under 20%.
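As a sanity check on that figure, here is a quick Monte Carlo sketch. It assumes the cutoff is on the running cumulative deviation, measured in standard deviations of the final position - which is my reading of the protocol, not something the paper states:

```python
import numpy as np

rng = np.random.default_rng(42)

def hit_probability(n_steps=120, n_trials=50_000, z_cut=1.65):
    """Fraction of null random walks whose running sum ever crosses +/- z_cut
    standard deviations, where the SD is that of the walk at its final step."""
    steps = rng.choice([-1.0, 1.0], size=(n_trials, n_steps))
    paths = np.cumsum(steps, axis=1)
    cutoff = z_cut * np.sqrt(n_steps)   # ±1.65 SD of the final position
    hits = (paths.max(axis=1) >= cutoff) | (paths.min(axis=1) <= -cutoff)
    return hits.mean()

print(hit_probability())  # should land a little under 0.20 if the argument above is right
```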

No doubt someone will rapidly correct me if I'm wrong!
 
Having posted that, I feel there must be something wrong with either my interpretation or my arithmetic, because on that basis this would be one of the most astounding experimental results ever recorded.
 
Having posted that, I feel there must be something wrong with either my interpretation or my arithmetic, because on that basis this would be one of the most astounding experimental results ever recorded.

I was thinking along those lines as well - they seemed to only be somewhat encouraged by the results.

I really wish they had done a much better job of describing their methodology.
 
Maybe I'm being stupid, but if the statistics are two-tailed, shouldn't it be 2 standard deviations, not 1.65?
 
Maybe I'm being stupid, but if the statistics are two-tailed, shouldn't it be 2 standard deviations, not 1.65?

Each tail is 5%. It's two-tailed in the sense that either an increase above 1.65 or a decrease below -1.65 counts as a "hit" in an individual session. So the 1.65 cut-off isn't really the basis of the statistics. The statistics would need to be done on the number of sessions where that threshold is achieved - e.g. in calculating how significant an 82.3% hit-rate is.

There is a Bayes factor there of more than 10^11, so maybe it really is as significant as it appears. But if so, it's bizarre that they don't comment on it.
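As a rough frequentist cross-check (not the Bayes factor calculation itself, and assuming the just-under-20% null rate estimated earlier plus my reading of 82.3% of 102 as 84 hits and 13.7% as 14 hits):

```python
from scipy.stats import binom

n, p_null = 102, 0.19               # sessions in the replication; assumed null hit rate
print(binom.sf(84 - 1, n, p_null))  # P(>= 84 hits | null): experimental sessions
print(binom.cdf(14, n, p_null))     # P(<= 14 hits | null): control sessions
```

The experimental tail probability is astronomically small on those assumptions, so if the ~20% null rate is anywhere near right, the lack of comment in the paper is indeed strange.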
 
Maybe I'm being stupid, but if the statistics are two-tailed, shouldn't it be 2 standard deviations, not 1.65?

They did a one-tailed test. I was asking to clarify since I thought that if we were looking at plus or minus (i.e. two directions), that would indicate a two-tailed test should be used - but I may just be misunderstanding the concept here.
 
I think I have indeed misinterpreted what's happening. Looking at the figures, the location of the cut-off is increasing with time, so that 20% figure will be much too low. Quite how it's increasing with time isn't clear - it doesn't appear to be growing in proportion to the square-root-of-t curves shown in the graphs.
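For reference, here's what a constant ±1.65-standard-score cutoff would look like in raw cumulative units - it has to grow exactly in proportion to √t. This sketch assumes fair 10-bit samples at 2 per second, which is only my reading of the settings:

```python
import numpy as np

seconds = np.arange(1, 201)
samples = 2 * seconds                # samples accumulated by time t
sd = np.sqrt(samples * 10 * 0.25)    # SD of the cumulative bit-sum at time t
envelope = 1.65 * sd                 # grows in proportion to sqrt(t)
print(envelope[59], envelope[199])   # cutoff in raw bits at 60 s and at 200 s
```

If the cut-off in the figures doesn't follow that shape, then it isn't a constant standard-score threshold, and my 20% estimate wouldn't apply.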
 