Parapsychology: Science or Pseudoscience?

He based this on extremely flawed criteria, as I just described. It is not a valid conclusion.

What spin? I have to cut this quote mid-paragraph because you say there is a spin, suggesting explicitly that Sheldrake is the one being dishonest. I have looked at the actual data and Sheldrake is right.

You can't just say that it was Sheldrake and proponents who put a spin on it without being specific. Vague criticisms are not valid.

From the data on the dog experiment it is quite clear that Wiseman is guilty of shenanigans.

Parapsychologists have been responding to VALID criticisms for many decades. The evolution of analysis and experiments makes this pretty clear.

Hi Neil,
Linda has been down this road before on the Sheldrake/dog experiment and has demonstrated that she is immune to reason and evidence. You're wasting your time conversing with her.
 
He based this on extremely flawed criteria, as I just described. It is not a valid conclusion.

Well, this is a pretty typical question for us to investigate in medicine (I'm a physician) - "I've noticed that X seems to indicate Y". The standard approach is to carefully record X and Y, and then look at whether a pattern can be found in X which reliably indicates Y. So without knowing anything about how Wiseman or Sheldrake approached the question, my first thought would be to use a standard approach. The parents say that they can tell by the behaviour of the dog when Pam is on her way home. So I would record the behaviour of the dog and mark when Pam is on her way home. I might start by asking the parents what behaviour they would look for, but if that was unsuccessful, then I'd look for different patterns in the data to see if I could find a criterion that would reliably indicate that Pam was on her way home (it doesn't have to be 100%). The formal description of this process is ROC curve analysis.
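To make the threshold search concrete, here is a toy sketch in Python - all numbers are invented for illustration, not taken from either study. Each observation period gets a score (minutes the dog spent at the window) and a label (whether Pam was on her way home); sweeping a threshold over the scores traces out the ROC curve.

```python
# Toy sketch (hypothetical numbers, not the actual experiment's data):
# score each observation period by time the dog spent at the window,
# label whether Pam was on her way home, then sweep a threshold to
# trace out the ROC curve.

# (minutes_at_window, pam_on_her_way_home)
periods = [
    (0.5, False), (1.0, False), (4.0, True), (0.2, False),
    (3.5, True), (2.0, False), (5.0, True), (1.5, False),
    (2.5, True), (0.8, False),
]

def roc_points(data):
    """Return (false_positive_rate, true_positive_rate) pairs,
    one per candidate threshold."""
    positives = sum(1 for _, y in data if y)
    negatives = len(data) - positives
    points = []
    for threshold, _ in sorted(data):
        tp = sum(1 for s, y in data if y and s >= threshold)
        fp = sum(1 for s, y in data if not y and s >= threshold)
        points.append((fp / negatives, tp / positives))
    return points

for fpr, tpr in roc_points(periods):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

With this made-up data a threshold of 2.5 minutes separates the two labels perfectly; the point of the analysis is that for the real dog data, no threshold does.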

What about that would you regard as invalid?

What spin? I have to cut this quote mid-paragraph because you say there is a spin, suggesting explicitly that Sheldrake is the one being dishonest. I have looked at the actual data and Sheldrake is right.

You can't just say that it was Sheldrake and proponents who put a spin on it without being specific. Vague criticisms are not valid.

I can give you the specific criticisms. My pessimism may be unwarranted. :)

From the data on the dog experiment it is quite clear that Wiseman is guilty of shenanigans.

To you. A non-proponent researcher would probably see it as "Wiseman performed the experiment I would have performed, and it was negative". If they saw Sheldrake's data, they'd probably suspect that it would give the same negative result if run through the same analysis (looking at fig. 4, can you tell if Pam is coming home or not at each data point?).

Parapsychologists have been responding to VALID criticisms for many decades. The evolution of analysis and experiments makes this pretty clear.

Agreed. Just not all of the valid criticisms.

Linda
 
I am not a zealot, if that is what you are asking.

No, that's not what I am asking. I'm asking whether you believe "creation science" is a science or a pseudoscience.

However, I fail to see the resemblance between Jim Kennedy's criticism of the methodology of parapsychology (historically, he has noted a posture against meta-analyses in general) and believing that dinosaur fossils were planted.

The resemblance is that Kennedy attributes the unsustainability and evasiveness of psi to guidance by a "higher consciousness" (presumably God) that deliberately baffles us to induce a sense of awe and wonder.

The most obvious, empirically grounded model for the unsustainable nature of psi is that the primary function of psi is to induce and maintain a sense of mystery and wonder. To fulfill this purpose, psi effects must remain mysterious and unsustainable. William James's (1909/1960) comment that psi appears to be intended to be baffling would be precisely correct. (source: "The Capricious, Actively Evasive, Unsustainable Nature of Psi: A Summary and Hypotheses" by J.E. Kennedy)

What was James's comment (that Kennedy considers to be precisely correct)?

I have also spent a good many hours in witnessing . . . phenomena. Yet I am theoretically no “further” than I was at the beginning; and I . . . have been tempted to believe that the Creator has eternally intended this department to remain baffling, to prompt our hopes and suspicions all in equal measure, so that, although ghosts and clairvoyances, and raps and messages from spirits, are always seeming to exist and can never be fully explained away, they also can never be susceptible to full corroboration. (James, 1909/1960, p. 310)*

* James, W. (1960). The final impressions of a psychical researcher. In G. Murphy & R. D. Ballou (Eds.),
William James on psychical research (pp. 309–325). New York: Viking. (Original work published 1909)
 
Well, this is a pretty typical question for us to investigate in medicine (I'm a physician) - "I've noticed that X seems to indicate Y". The standard approach is to carefully record X and Y, and then look at whether a pattern can be found in X which reliably indicates Y. So without knowing anything about how Wiseman or Sheldrake approached the question, my first thought would be to use a standard approach. The parents say that they can tell by the behaviour of the dog when Pam is on her way home. So I would record the behaviour of the dog and mark when Pam is on her way home. I might start by asking the parents what behaviour they would look for, but if that was unsuccessful, then I'd look for different patterns in the data to see if I could find a criterion that would reliably indicate that Pam was on her way home (it doesn't have to be 100%). The formal description of this process is ROC curve analysis.

What about that would you regard as invalid?

To you. A non-proponent researcher would probably see it as "Wiseman performed the experiment I would have performed, and it was negative". If they saw Sheldrake's data, they'd probably suspect that it would give the same negative result if run through the same analysis (looking at fig. 4, can you tell if Pam is coming home or not at each data point?).

Linda

I have already explicitly described the flaw in his criterion for falsification, which you have not addressed.

Trying to label me a proponent doesn't justify your reasoning, because a faulty falsification criterion is faulty regardless of what one thinks of psi.
 
I think Dillinger has challenged positions held at both ends of the spectrum.

I looked up Hobbiton and the internet says it's a fictional village created for LOTR (which I haven't seen).
I feel that I've been duped. You're probably not even a real Kiwi. Hell, that's probably not even your real picture.
 
I have already explicitly described the flaw in his criterion for falsification, which you have not addressed.

Except he didn't "create a criteria for failure that made no sense--if the dog went to the window for absolutely no reason when the owner was not coming home, the trial was considered a failure". Instead he did what I described as what would be regarded as a valid approach, were it to take place in a field other than parapsychology and return negative results.

He chose a criterion for success in each trial, based on the observations of Pam's parents. And each trial was a success - in each trial, the dog exhibited the behaviour which Pam's parents were looking for. The problem was that this behaviour did not indicate when Pam was actually coming home, which suggested that Pam's parents had been mistaken about whether or not the dog's behaviour changed in an observable way when Pam was coming home.

And it wasn't the case that Wiseman's criterion for success was poor, and that a different criterion would have indicated Pam was on her way home. Even combing through the data post hoc doesn't help you to find a criterion which performs any differently.

Trying to label me a proponent doesn't justify your reasoning, because a faulty falsification criterion is faulty regardless of what one thinks of psi.

My point was that nobody who uses this process in every other social science field thinks that it is faulty. This complaint seems to be levied solely by proponents who are trying to justify the character assassination of Wiseman. Even within parapsychology, when the same process is used by other parapsychologists for other experiments, it doesn't get treated as faulty (e.g. http://sgo.sagepub.com/content/1/2/2158244011420451). Why is it only "extremely flawed" and "dishonest" in this one case, but not in any other, other than prejudice based on belief?

Linda
 
Except he didn't "create a criteria for failure that made no sense--if the dog went to the window for absolutely no reason when the owner was not coming home, the trial was considered a failure". Instead he did what I described as what would be regarded as a valid approach, were it to take place in a field other than parapsychology and return negative results.

He chose a criterion for success in each trial, based on the observations of Pam's parents. And each trial was a success - in each trial, the dog exhibited the behaviour which Pam's parents were looking for. The problem was that this behaviour did not indicate when Pam was actually coming home, which suggested that Pam's parents had been mistaken about whether or not the dog's behaviour changed in an observable way when Pam was coming home.

And it wasn't the case that Wiseman's criterion for success was poor, and that a different criterion would have indicated Pam was on her way home. Even combing through the data post hoc doesn't help you to find a criterion which performs any differently.

My point was that nobody who uses this process in every other social science field thinks that it is faulty. This complaint seems to be levied solely by proponents who are trying to justify the character assassination of Wiseman. Even within parapsychology, when the same process is used by other parapsychologists for other experiments, it doesn't get treated as faulty (e.g. http://sgo.sagepub.com/content/1/2/2158244011420451). Why is it only "extremely flawed" and "dishonest" in this one case, but not in any other, other than prejudice based on belief?

Linda

You still didn't address the explicit reason why I said the criteria were no good.
 
You still didn't address the explicit reason why I said the criteria were no good.

Because you didn't say why the criteria were no good. Pam's parents said they knew Pam was coming home if the dog started going to or spending more time at the window for no obvious reason. Why wouldn't that be what you would look for in the data?

Linda
 
Because you didn't say why the criteria were no good. Pam's parents said they knew Pam was coming home if the dog started going to or spending more time at the window for no obvious reason. Why wouldn't that be what you would look for in the data?

Linda

This is what I posted earlier:

Neil said:
Wiseman is the one that said dogs have super senses, so they may hear the owner coming home well before any of the people could hear it, giving the impression of telepathy. If the dog's senses are this acute, then how on earth could Wiseman justify that the dog went to the door for "no reason"? Wiseman, with his relatively poor senses, wouldn't detect subtle sounds outside that the dog could have been responding to.
 
This is what I posted earlier:

"Wiseman is the one that said dogs have super senses, so they may hear the owner coming home well before any of the people could hear it, giving the impression of telepathy. If the dog's senses are this acute, then how on earth could Wiseman justify that the dog went to the door for "no reason"? Wiseman, with his relatively poor senses, wouldn't detect subtle sounds outside that the dog could have been responding to."

So how does this make the criteria "no good"? The same concern would apply to Pam's parents' judgement about why the dog was going to the window in the first place.

What criteria would you consider reasonable?

Linda
 
"Wiseman is the one that said dogs have super senses, so they may hear the owner coming home well before any of the people could hear it, giving the impression of telepathy. If the dog's senses are this acute, then how on earth could Wiseman justify that the dog went to the door for "no reason"? Wiseman, with his relatively poor senses, wouldn't detect subtle sounds outside that the dog could have been responding to."

So how does this make the criteria "no good"? The same concern would apply to Pam's parents' judgement about why the dog was going to the window in the first place.

What criteria would you consider reasonable?

Linda

Pam's parents weren't conducting a scientific study that has to control for confounding variables!

I've already said multiple times, and even in the quote of mine I just posted, that Wiseman has no way of knowing when the dog went to the door for "no reason." His criterion for failure was the dog going to the window for no reason when Pam wasn't coming home, but he cannot say whether it was for no reason, because of the acuity of the dog's senses.

Sheldrake took a simple statistical approach, showing that the dog waited at the window far more once Pam decided to go home, and Wiseman's data replicated this finding.

Yet Wiseman still fraudulently claims he debunked this.
 
So you want to tell me that they are "p-hacking to death", but I am supposed to then accept that you say it could be almost "impossible to detect"? If you can't demonstrate or quantify this in some way, then it is an invalid criticism.

Apparently, by chopping up my comments into so many subsentences you have lost track of the meaning. I said that some forms of p-hacking can be detected, but others not. As I have said numerous times before in this forum, Bem's Feeling the Future paper was the result of blatant p-hacking; its obviousness was pointed out in several qualitative analyses and was confirmed quantitatively by Greg Francis. Radin's chocolate and tea papers are case studies in abuse of researcher degrees of freedom. In other cases, where the p-hacking cannot be observed, p-hacking is not a criticism at all. It's a probabilistic conclusion based on the prior odds of p-hacking vs psi accounting for the positive results. These prior odds are informed by the myriad confirmed instances of p-hacking, intentional omission of data, and outright fraud in the field.
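For what it's worth, the logic of Francis's test can be sketched in a few lines: given an estimated power for each experiment to detect the reported effect, the chance that every one of them reaches significance is (assuming independence) the product of those powers. The powers below are invented for illustration, not Francis's actual estimates.

```python
import math

def joint_success_probability(powers):
    """Probability that every experiment in a set reaches
    significance, if each has the given independent power."""
    return math.prod(powers)

# Hypothetical powers (illustrative only): nine experiments each
# powered at ~0.6 to detect the reported effect size.
powers = [0.6] * 9
p = joint_success_probability(powers)
print(f"P(all nine significant) = {p:.4f}")  # about 0.01
```

When a paper reports uniform success despite a joint probability that small, the across-the-board success is itself statistically implausible, which is the "excess success" signature.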
 
Apparently, by chopping up my comments into so many subsentences you have lost track of the meaning. I said that some forms of p-hacking can be detected, but others not. As I have said numerous times before in this forum, Bem's Feeling the Future paper was the result of blatant p-hacking; its obviousness was pointed out in several qualitative analyses and was confirmed quantitatively by Greg Francis. Radin's chocolate and tea papers are case studies in abuse of researcher degrees of freedom. In other cases, where the p-hacking cannot be observed, p-hacking is not a criticism at all. It's a probabilistic conclusion based on the prior odds of p-hacking vs psi accounting for the positive results. These prior odds are informed by the myriad confirmed instances of p-hacking, intentional omission of data, and outright fraud in the field.

And I am to trust prior odds? Prior odds are usually unjustified, and a good way to make small to medium effect sizes disappear. Prior odds are notorious for being biased and not taking all factors into account.

I was discussing the autoganzfeld experiments, so whether or not Bem p-hacked his set of experiments isn't relevant to what I was bringing up. BUT, could you link to the paper you mentioned about the Bem experiments? I would be interested in reading it.
 
"Fraudulently"? Seriously?

Fraud:

a person or thing intended to deceive others, typically by unjustifiably claiming or being credited with accomplishments or qualities.

Wiseman was in the media making the claims and self-promoting based on those false claims, even after it was pointed out that he had replicated Sheldrake's work.

I don't know how much more clear it could be.
 
And I am to trust prior odds?

"Trust" prior odds? What are you talking about? Prior odds are what you use to make rational judgments in light of new data. It's up to you whether you use prior odds or not. However, they are required for rational judgment about experimental results.

Prior odds are usually unjustified, and a good way to make small to medium effect sizes disappear. Prior odds are notorious for being biased and not taking all factors into account.

As opposed to just considering the experiment in isolation, which takes no other relevant information into account at all. That would sure work well.
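The arithmetic being argued about is just the odds form of Bayes' rule: posterior odds = prior odds x Bayes factor. A minimal sketch, with purely illustrative numbers:

```python
def posterior_odds(prior_odds, bayes_factor):
    """Posterior odds of a hypothesis after seeing data that favour
    it by the given Bayes factor (likelihood ratio)."""
    return prior_odds * bayes_factor

# Illustrative numbers only: suppose an experiment's data favour psi
# over chance by a factor of 20.
bf = 20.0

# A sceptic with prior odds of one in a million still ends up with
# tiny posterior odds; someone starting at even odds ends up at
# 20-to-1. Same data, very different conclusions.
print(posterior_odds(1e-6, bf))  # still very long odds against
print(posterior_odds(1.0, bf))   # 20-to-1 in favour
```

This is why the disagreement keeps coming back to the priors rather than to the experiments themselves.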

I was discussing the autoganzfeld experiments, so whether or not Bem p-hacked his set of experiments isn't relevant to what I was bringing up. BUT, could you link to the paper you mentioned about the Bem experiments? I would be interested in reading it.

http://link.springer.com/article/10.3758/s13423-012-0227-9

When I click on the "download pdf" button, I get the pdf, but that might be because I have access to it by an institutional subscription. If you can't download the paper, I can email you a copy if you PM me your email address.
 
Pam's parents weren't conducting a scientific study that has to control for confounding variables!

Right, that was what Wiseman and Sheldrake were doing. But the idea that the dog was telepathic came from Pam's parents in the first place. If they were mistaken, was there any reason to think the dog was telepathic?

I've already said multiple times, and even in the quote of mine I just posted, that Wiseman has no way of knowing when the dog went to the door for "no reason." His criterion for failure was the dog going to the window for no reason when Pam wasn't coming home, but he cannot say whether it was for no reason, because of the acuity of the dog's senses.

Does that matter? Looking at the "no obvious reason" visits was just a way to reduce some of the noise in the experiment. But regardless of whether or not there was still some noise there, there wasn't any sort of alternative pattern which confirmed the parents' impression. All you can conclude is that Pam's parents were mistaking "noise" for telepathy.

Sheldrake took a simple statistical approach, showing that the dog waited at the window far more once Pam decided to go home, and Wiseman's data replicated this finding.

Except that this finding would be present regardless of whether or not the dog was telepathic if the dog simply spent more time waiting at the window the longer Pam was gone. And Sheldrake showed that this was the case (figure 4). Yes, Wiseman's data also showed this pattern. But since he showed that merely "spending more time waiting at the window" didn't indicate that the owner was coming home, it shows that Sheldrake's findings didn't depend upon telepathy either.
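That confound is easy to simulate. In the toy model below (all numbers invented), the dog's time at the window simply grows with how long Pam has been out, with no telepathy anywhere in the model, yet the period after the "decision to return" still shows more window time than the earlier periods.

```python
# Toy model: the dog's time at the window grows linearly the longer
# Pam has been out. There is no signal from Pam in this model at all.
def minutes_at_window(minutes_since_pam_left):
    return 0.05 * minutes_since_pam_left  # purely time-driven

# Pam is out for 120 minutes and decides to head home at minute 90.
away = list(range(0, 120, 10))
before_decision = [minutes_at_window(t) for t in away if t < 90]
after_decision = [minutes_at_window(t) for t in away if t >= 90]

avg_before = sum(before_decision) / len(before_decision)
avg_after = sum(after_decision) / len(after_decision)

# The "return period" average is higher simply because it comes last,
# reproducing the reported pattern without any telepathy.
print(avg_before, avg_after)
```

Any monotonic trend in the dog's behaviour will produce "more time at the window once Pam decided to come home", which is why the comparison by itself can't distinguish telepathy from restlessness.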

Yet Wiseman still fraudulently claims he debunked this.

Wiseman demonstrated that Pam's parents were mistaken about whether the dog was telepathic. How is that "fraudulent"?

Linda
 
As I have said numerous times before in this forum, Bem's Feeling the Future paper was the result of blatant p-hacking; its obviousness was pointed out in several qualitative analyses and was confirmed quantitatively by Greg Francis.

Well, I did start a thread to discuss the question of whether Bem's experiments were exploratory in this sense:
http://www.skeptiko-forum.com/threads/was-bems-feeling-the-future-paper-exploratory.1561/

It seems to me that the accusation of "p-hacking" really depends on an assumption that Bem was lying when he said that he had a pre-defined hypothesis for each experiment. As discussed in that thread, there is an indication that there were alternative hypotheses in one of the studies. But for the other eight, I don't see that the accusation has been made out at all.
 
Well, I did start a thread to discuss the question of whether Bem's experiments were exploratory in this sense:
http://www.skeptiko-forum.com/threads/was-bems-feeling-the-future-paper-exploratory.1561/

It seems to me that the accusation of "p-hacking" really depends on an assumption that Bem was lying when he said that he had a pre-defined hypothesis for each experiment. As discussed in that thread, there is an indication that there were alternative hypotheses in one of the studies. But for the other eight, I don't see that the accusation has been made out at all.

No. To an experienced data analyst, it's blatantly obvious that he was picking among alternative data analyses to support whatever hypotheses he had, prior or not, and this is supported by Francis's statistical analysis, an analysis that Andrew Gelman criticized as being superfluous in light of what is so obvious to the naked eye.
 