University of Virginia's DOPS series: Psychokinesis studies

#1
Since last week, The Epoch Times has been running a series about the different studies being carried out at UVA's Division of Perceptual Studies. I refrained from posting it earlier because the first entry was with good ol' Bruce Greyson, and I believe most of us are familiar with his work. However, this week's issue discussed how they are using the sensors designed by Dr. Ross Dunseath to advance the previous work of Dr. Ed Kelly in psychokinesis and psi in general. The article included some updates on their current studies, which I thought would be interesting for some here. If nothing else, their pragmatic approach is quite interesting and provides a contrast to the more outspoken researchers. Enjoy: http://www.theepochtimes.com/n3/138...s-in-a-lab-researchers-taking-psi-mainstream/
 
#2
Interesting study that suggests even micro-PK research needs to use selected individuals to get significant results.

http://www.scribd.com/doc/270839255/Book-of-Abstracts-2015-PA-SPR-Joint-Convention

MICRO-PSYCHOKINESIS: EXCEPTIONAL OR UNIVERSAL? [PA]
Mario P. Varvoglis & Peter Bancel
Institut Métapsychique International Paris, France

ABSTRACT
Putative psychokinetic effects reported in association with RNGs are generally qualified as micro-psychokinetic, suggesting a distinction between large-scale or directly perceptible PK and extremely subtle effects which can only be inferred through statistical methods. Whether or not the distinction between micro- and macro-PK is justified, it has led to two divergent research strategies, especially with respect to the presumed agent or source of the effect. On the one hand, investigations of macro-PK have largely been based on an elitist approach, involving exceptional individuals (or exceptional circumstances, as in the case of poltergeists); on the other hand, micro-PK effects are assumed to be potentially widespread or smoothly distributed in the general population. As a result, micro-PK research has frequently been approached through a universalist approach, employing massive data collection from a large number of unselected subjects. In this paper, we examine which of the two approaches seems most promising today, more than 4 decades after the launching of micro-PK research with RNGs. We examine two contrasting testing strategies: that of Helmut Schmidt, whose approach is highly personalized and reminiscent of elitist research with small numbers of selected individuals; and that of the PEAR lab, whose 12-year benchmark study was based on a highly standardized protocol involving nearly 100 unselected subjects. Together these two bodies of research constitute a substantial portion of the entire micro-PK literature and thus afford a good approximation to the issue examined. Helmut Schmidt, who is rightfully considered the "father" of micro-PK RNG research, was a highly prolific investigator and produced by far the strongest and most consistent results in the field. In our review of his work we found 22 experimental publications containing 50 independent studies, of which 3/4 reported significance (p < .05) and nearly half had zs above 3 (Varvoglis & Bancel, in press). 
We suggest that his striking success, over the course of about 3 decades, was due to four factors: subject selection, on the basis of pretests and other criteria; a strong role in engaging and motivating subjects; a willingness to adjust protocols and sessions to subjects' needs; and, most likely, a good deal of micro-PK skill of his own. The Princeton Engineering Anomalies Research (PEAR) laboratory, founded in 1979 by Robert Jahn, was committed to a strict universalist approach, involving volunteers whose participation depended essentially on their own availability and willingness, and a progressive accumulation of data using the same protocol over many years. Each experimental run consisted of three separate PK efforts of equal length, and corresponded to the subject's intention to bias RNG outputs to go high (HI-aim), to go low (LO-aim), or to remain even (Baseline or BL aim). The experimental hypothesis was that the HI runs would give a positive deviation from the mean and the LO runs a negative deviation; the statistical test was based on the difference of the two directional runs. The 12-year benchmark experiment collected over 2.5 million experimental trials from 91 participants, equally distributed across HI, LO and BL conditions. At its termination, the experiment had attained high significance, yielding a z score of 3.8 (Jahn, Dunne, Nelson, Dobyns, & Bradish, 1997). The result is particularly noteworthy insofar as PEAR had a firm policy of publishing all its experimental results in either refereed journals or publicly accessible internal reports, thus ensuring that this database can be considered free of publication bias and file-drawer problems. A large-scale replication was undertaken by a Consortium involving two German laboratories plus PEAR itself. 
A power estimate was derived from the effect size obtained by the benchmark study, and over the course of 3 years, the three labs ran a total of 227 subjects, each lab collecting 750,000 trials that were equally distributed across the three conditions (HI-LO-BL). Given the power of this study, the primary hypothesis, involving a significant difference between HI and LO scores, should have had an 85% chance of succeeding at a p value level less than .01. However, the effect size was nearly an order of magnitude smaller than expected and the overall z score came in at a nonsignificant 0.6. While the combined PEAR + Consortium results were still significant, with a z of 3.2, the apparent failure to replicate a solid and well-founded prediction, despite a well-planned collaborative study, remained quite surprising. We believe that the reason for this apparent failure to replicate was in fact quite simple: the replication overestimated the true effect size of the PEAR benchmark study, and thus grossly underestimated the power needed to replicate. A close look at the PEAR benchmark study shows that there were two extreme outliers in the participant pool who contributed nearly a quarter of the total data and who each obtained highly significant personal z-scores (5.6 and 3.4). This resulted in their contributing more than half of the total HI–LO deviation. It is easy to see that they are not representative of the 89 other participants, as the overall z of the remaining database is only 0.8. Indeed, if we exclude these two outliers, and focus on the database of the 89 remaining participants, we obtain nearly the same effect size and z score as in the Consortium replication. We conclude that the PEAR/Consortium studies give rather weak support to the universalist assumption. They instead point to the wisdom of an elitist approach: working intensively with a few subjects rather than teasing extremely weak effects out of unselected volunteers. 
Coupled with Schmidt's success, these data suggest, as a working hypothesis, that micro-PK is not widely distributed but rather exceptional, and that investigators should adapt their research strategies accordingly. This means using widespread pretests to select potentially promising subjects; adopting flexible testing conditions, and ensuring that subjects are motivated and engaged; exploring optimization techniques which may enhance scoring (just as "noise-reduction" techniques seem to enhance ESP scoring); and closely studying and modeling the experimental style of highly successful investigators such as Helmut Schmidt and others, so as to better understand the dynamics of psi-conducive research.
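The abstract's outlier argument can be made concrete with a toy calculation. The per-subject trial counts and z-scores below are my own illustrative assumptions (chosen only to roughly echo the figures the abstract reports, not PEAR's actual data), pooled with a standard Stouffer-style weighted combination:

```python
import math

# Stouffer-style pooling of per-subject z-scores, weighting each subject
# by the square root of their trial count:
#   z_pool = sum(sqrt(n_i) * z_i) / sqrt(sum(n_i))
def pooled_z(subjects):
    num = sum(math.sqrt(n) * z for n, z in subjects)
    den = math.sqrt(sum(n for n, _ in subjects))
    return num / den

# Hypothetical pool: 89 "ordinary" subjects with a tiny per-subject z,
# plus two heavy-contributing outliers (z = 5.6 and 3.4, as in the abstract).
# Counts are invented so the two outliers supply roughly a quarter of the data.
ordinary = [(21348, 0.08)] * 89              # assumed, not real PEAR data
outliers = [(300000, 5.6), (300000, 3.4)]    # assumed trial counts

print(round(pooled_z(ordinary + outliers), 1))  # whole pool: highly significant
print(round(pooled_z(ordinary), 1))             # outliers removed: near null
```

Under these assumptions the full pool lands near the benchmark's reported z of 3.8, while dropping the two outliers collapses the remaining 89 participants to roughly z = 0.8, which is the abstract's point: a handful of exceptional scorers can dominate an otherwise null database and inflate the effect size used for replication power estimates.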
 
#4
Ross Dunseath mentions more about his work in this article about DOPS:


http://richmondmagazine.com/news/features/paranormal-studies/

“We’ve had people tested here who claimed to be able to do things at will,” says Ross Dunseath, co-director of the Westphal lab, adding that the Shielded Room was specifically created for psychological data-gathering. “We’ve tested people who say they are able to leave their bodies and affect other things.” So far, while some experiments have fizzled out (and not all claims are taken seriously), they’ve detected no obvious hoaxsters.

Dunseath is holding a device that measures a person’s respiration, as he points out the room’s 128-channel EEG system. “We usually set the person here in this chair and present various stimuli, audio and visual.” There’s another room in the office, he says, across the hall and upstairs, where they test to see if people are “presumably transmitting. We see if there is telepathic communication between the two.”


He tells of one subject who had a near-death experience and began reporting psychic abilities. To test her, they set up an Egely Wheel, a so-called “vitality indicator” not influenced by heat, convection or electromagnetic energy. The wheel was put in a sealed plastic case to isolate it from air currents, and he says that the woman made the wheel rotate in trial after trial. “She would put her hand near it, and it would take 15 to 30 seconds before it started rotating. We did 10 trials of that. Every last one of those trials, it rotated.”

Dunseath still marvels. “I’d like to get her back in here.”

He also did a podcast interview here:
https://shatteredrealitypodcast.wordpress.com/2015/04/17/dr-ross-dunseath-and-psychokinesis/
 