"extraordinary claims require extraordinary evidence" and is this applied equally ?

Bart V

I am taking a thread that is going on in a restricted sub forum to here, partly to escape the restrictions over there,
partly to allow restricted posters to chime in, but mostly because I do not want to discuss only ECREE.
I also would like to examine the question of whether there are claims in mainstream science that are accepted on the basis of equally poor evidence.
 
Your question makes no sense.
No less than yours.
If you are going to apply ECREE to science then you have to have a measurable goal.
See what I mean?
Exactly what amount of evidence is enough to meet this standard of "extraordinary?" If you don't answer that question, you're not doing science.
That is a better question. The evidence for a claim that is not grounded in theory should at least be strong enough to proceed from there.
There, IMO, lies the problem with all psi claims: the effects are either too small, or too hard to get at, to really do anything with them.
This also means we do not have to arbitrarily draw a line. If effects are larger, the field will automatically evolve to become more about how it works rather than whether it exists.

As it is now, parapsychology is stuck in the phase where it tries to establish itself.
 
I also would like to examine the question of whether there are claims in mainstream science that are accepted on the basis of equally poor evidence.

Let me quote what Michael and I both wrote in the other thread so others don't have to bounce between them:

For dark matter/energy, there's not even direct phenomenological evidence (look around: point some out, or even demonstrate in the laboratory its effects). It only "exists" as a means to explain such things as the rotation speed of galaxies. Yet it is a widely accepted, mainstream claim: one of the most extraordinary ever made, as it conjures up 96% extra stuff in the universe.

Personally, I think it's complete bullshit and an artifact of a view of physics that imputes unwarranted influence to gravity. There's much better phenomenological evidence for electrical influences in the universe that might account for such things as "quasars", "black holes", "dark matter" and so on.

What is accepted in science is often dependent on theoretical underpinnings rather than empirical evidence. "Extraordinary claims" are often asserted in relation to those theoretical underpinnings. In Ptolemaic astronomy, the underpinning was geocentricity, and it's notable that for quite some time, it was more accurate in terms of its predictions for planetary movements than was heliocentricity, even though it too was bullshit.

What if dark matter/energy stands in relation to galactic rotation speeds as geocentrism did to what was interpreted as Ptolemaic epicycling (i.e. the occasional observed retrograde motion of the planets)? What if cosmology turns around and accepts a much greater influence for electrical forces in the universe? Changes in scientific paradigms are hardly unknown.

This is turning scientific methodology on its head. You're talking about a recent philosophical shift in science, not the scientific method itself. Science has always been driven by experiment, with no need of theory until later on. Any scientific finding can be accepted, if deemed experimentally rigorous, without theory. Superconductivity was accepted without theory for 40-50 years. This is but one example. Michael's reference to cosmological ideas is also highly relevant. Also the well-accepted idea that life exists in other parts of the universe is purely a statistical argument with no direct observational evidence whatsoever.

Let me add further that investigation, acceptance, and follow-up on the discovery of mysterious discharge of heat from Radium by Pierre Curie was not put on hold simply because theory could not and would not explain it for decades.

Regards,
John
 
Let me requote what Michael and I both wrote in the other thread so others don't have to bounce between them:





Let me add further that investigation and follow-up on the discovery of mysterious discharge of heat from Radium by the Curies was not put on hold simply because theory could not and would not explain it for decades.

Regards,
John
John, this is the skeptic zone where they get to say whatever they want to regardless of the evidence. The skeptics were told to stay on this sub forum and go nowhere else so that we don't have to have these endless arguments about resolved topics everywhere on the forum.
 
John, this is the skeptic zone where they get to say whatever they want to regardless of the evidence. The skeptics were told to stay on this sub forum and go nowhere else so that we don't have to have these endless arguments about resolved topics everywhere on the forum.

Fair enough, Craig. I won't burn the midnight oil debating it, then.
 
Let me quote what Michael and I both wrote in the other thread so others don't have to bounce inbetween:





Let me add further that investigation, acceptance, and follow-up on the discovery of mysterious discharge of heat from Radium by Pierre Curie was not put on hold simply because theory could not and would not explain it for decades.

Regards,
John
I have to point out the evidence for a mysterious discharge from radium was far from ambiguous. Unlike psi evidence.
 
I have to point out the evidence for a mysterious discharge from radium was far from ambiguous. Unlike psi evidence.

OK, but Bart V said that any evidence not accompanied by theory had to be "extraordinary". You seem to be saying that radium is more repeatable than psi effects. True. Is that what makes it "extraordinary"? You and Bart V seem to be defining "extraordinary" in different ways. This is what Craig drew attention to: there is no consistency here; ECREE is far too subjective and arbitrary. That said, lack of reproducibility in a developing science is an asinine reason for denying its reality. Pretty sure the first successful sheep clone required 250 tries. We're dealing with biological systems (people) in psi research, and complex biological systems do not always yield clean, repeatable data test after test. Also, do you deny that life exists somewhere else in the universe just because that is a statistical argument?

Regards.
 
Don't believe Craig. Even here we can get censored.
Really? I didn't know that. I thought that you guys had free rein on this sub forum as long as you weren't doing the kind of stuff that gets people banned everywhere. You know, insulting behavior and trolling.
 
It's always puzzled me how Alex will disagree with the ECREE maxim. How can he do that? That's how every one of us lives most of our lives every day.

It's good to see that Aaron Moritz, who did the blog post under discussion in that other thread, doesn't dismiss ECREE or Bayesian analysis; he just takes exception to its usefulness for psi because he doesn't think we can get a handle on the prior probabilities. But you HAVE to deal with it, you can't avoid it.

I've always viewed the ECREE maxim as a nice pithy way of explaining Bayesian analysis. Before seeing the results of some data on a subject, you must already have some estimate of its likelihood to be true - this is called prior probability. Then you evaluate the new data and it shifts your prior estimate, so you get a new probability. This can then be used as the prior probability for the next set of data you evaluate.

My 17-year-old son is taking AP statistics this year. When he first started the fall semester, I gave him the paradoxical-sounding question about a test for HIV that's 90% accurate, but the prevalence of HIV in the population is only 1%. You take a random person and give him the test, and it shows positive. What is the probability that he actually has HIV? Most people think it's 90%, but it's really only about 8% because of the prior probability of 1%.

Then just about a week later he (my son) took an extra-curricular math test, and the only question he missed was this: a surfboard factory produces surfboards with a defect rate of one in a thousand. There is a simple test performed that is 90% accurate (10% of the time it will tell you a defective board is good, and 10% of the time it will tell you a good board is defective). You select a board and it fails the test. What's the probability that the board was actually defective?

It's the same form as the HIV question I had shared with him shortly before, but he still missed it. I took that opportunity to explain Bayesian analysis and ECREE, and I asked him how he would estimate the probability if he didn't already know that the factory had a 0.1% defect rate. I explained that you can't make any estimate without a prior estimate of probability.

That's the key here - you have to use ECREE, you have to do a Bayesian analysis, and you have to come up with a prior probability. Otherwise, you can't say anything about the results you get. So I completely reject anyone's claim that ECREE doesn't apply to psi research.
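A minimal sketch of the arithmetic behind both of those problems (the helper function and its name are mine, purely for illustration):

```python
def posterior(prior, sensitivity, specificity):
    # Bayes' rule: P(condition | positive test result)
    true_pos = sensitivity * prior               # has the condition, test flags it
    false_pos = (1 - specificity) * (1 - prior)  # doesn't have it, test flags it anyway
    return true_pos / (true_pos + false_pos)

# HIV example: 1% prevalence, 90% accurate test
hiv = posterior(0.01, 0.90, 0.90)      # ~0.083, nowhere near 90%

# surfboard example: 1-in-1000 defect rate, same 90% test
board = posterior(0.001, 0.90, 0.90)   # ~0.009, despite the failed test
```

The posterior from one batch of data can then be plugged back in as the prior for the next batch, which is exactly the updating chain described above.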
 
Really? I didn't know that. I thought that you guys had free rein on this sub forum as long as you weren't doing the kind of stuff that gets people banned everywhere. You know, insulting behavior and trolling.
I've been told not to post in a thread in this sub-forum, for making dismissive remarks about people who are not even members here.
 
Really? I didn't know that. I thought that you guys had free rein on this sub forum as long as you weren't doing the kind of stuff that gets people banned everywhere. You know, insulting behavior and trolling.
If insulting behavior is an issue, then there are proponents who need to be banned.

~~ Paul
 
I've been told not to post in a thread in this sub-forum, for making dismissive remarks about people who are not even members here.
If insulting behavior is an issue, then there are proponents who need to be banned.

~~ Paul
I've certainly seen situations where proponents have gotten snippy and insulting, so while I'm not sure that what I've seen is worth banning someone over, I'm not going to argue with that. (Although you might have noticed that proponent Tyler Snotgern got his ass banned for obnoxious behavior.)

I personally find that I have to constantly make sure that I don't go down that path. (I don't always succeed.) It's never a good idea to insult other people and I try to take care to not go overboard in my criticism of other people's actions either. I also try to soften it by stating explicitly when it's just my opinion as well.
 
It's always puzzled me how Alex will disagree with the ECREE maxim. How can he do that? That's how every one of us lives most of our lives every day.

It's good to see that Aaron Moritz, who did the blog post under discussion in that other thread, doesn't dismiss ECREE or Bayesian analysis; he just takes exception to its usefulness for psi because he doesn't think we can get a handle on the prior probabilities. But you HAVE to deal with it, you can't avoid it.

I've always viewed the ECREE maxim as a nice pithy way of explaining Bayesian analysis. Before seeing the results of some data on a subject, you must already have some estimate of its likelihood to be true - this is called prior probability. Then you evaluate the new data and it shifts your prior estimate, so you get a new probability. This can then be used as the prior probability for the next set of data you evaluate.

It'd help if you termed them properly as posteriors. Most people here know of Bayesian statistics, so you don't need to explain it much.

My 17-year-old son is taking AP statistics this year. When he first started the fall semester, I gave him the paradoxical-sounding question about a test for HIV that's 90% accurate, but the prevalence of HIV in the population is only 1%. You take a random person and give him the test, and it shows positive. What is the probability that he actually has HIV? Most people think it's 90%, but it's really only about 8% because of the prior probability of 1%.

Then just about a week later he (my son) took an extra-curricular math test, and the only question he missed was this: a surfboard factory produces surfboards with a defect rate of one in a thousand. There is a simple test performed that is 90% accurate (10% of the time it will tell you a defective board is good, and 10% of the time it will tell you a good board is defective). You select a board and it fails the test. What's the probability that the board was actually defective?

It's the same form as the HIV question I had shared with him shortly before, but he still missed it. I took that opportunity to explain Bayesian analysis and ECREE, and I asked him how he would estimate the probability if he didn't already know that the factory had a 0.1% defect rate. I explained that you can't make any estimate without a prior estimate of probability.

That's the key here - you have to use ECREE, you have to do a Bayesian analysis, and you have to come up with a prior probability. Otherwise, you can't say anything about the results you get. So I completely reject anyone's claim that ECREE doesn't apply to psi research.

Bayesian analysis is not the gold standard for statistics, though. The field seems equally divided between standard hypothesis testing and Bayesian statistics. You'd need to first make the case that Bayesian statistics is the gold standard (if you did, I'd be happy to recognize your Nobel Prize), and why it's superior to standard hypothesis testing for psi. If you're simply rejecting the statistics behind psi because you feel Bayesian statistics is superior to hypothesis testing, then don't you think your efforts are better served criticizing high-cost studies that use hypothesis testing?
 
For dark matter/energy, there's not even direct phenomenological evidence (look around: point some out, or even demonstrate in the laboratory its effects). It only "exists" as a means to explain such things as the rotation speed of galaxies. Yet it is a widely accepted, mainstream claim: one of the most extraordinary ever made, as it conjures up 96% extra stuff in the universe.
But the gravitational effect is undeniable, extraordinary evidence.
What it means is hotly debated; dark matter is seen by many as the best candidate, but by some as a placeholder name.

Personally, I think it's complete bullshit and an artifact of a view of physics that imputes unwarranted influence to gravity. There's much better phenomenological evidence for electrical influences in the universe that might account for such things as "quasars", "black holes", "dark matter" and so on.
That might be a possibility, but electrical influences also need to explain the rather good phenomenological evidence.
If the evidence for psi were as good as the evidence for the gravitational influence of what we now call dark matter, we would not be having this discussion, whether or not we knew what it meant or how it worked.

What is accepted in science is often dependent on theoretical underpinnings rather than empirical evidence. "Extraordinary claims" are often asserted in relation to those theoretical underpinnings. In Ptolemaic astronomy, the underpinning was geocentricity, and it's notable that for quite some time, it was more accurate in terms of its predictions for planetary movements than was heliocentricity, even though it too was bullshit.
The observations were explained incorrectly.
But again, if we could observe and predict psi occurring with only a fraction of the accuracy of the behaviour of the planets we would have extraordinary evidence.

What if dark matter/energy stands in relation to galactic rotation speeds as geocentrism did to what was interpreted as Ptolemaic epicycling (i.e. the occasional observed retrograde motion of the planets)? What if cosmology turns around and accepts a much greater influence for electrical forces in the universe? Changes in scientific paradigms are hardly unknown.
Yes, but asking for a paradigm shift without proper evidence or theory is special pleading.
Let me indulge in a prediction: I predict that we will have a far better understanding of dark matter, maybe even (in)direct observation, before we have better evidence for psi.
Now let us hope we live long enough to know whether I am right or wrong.
 
Thanks for starting this thread, Bart_V.

When you bring up Bayesian analysis, attention tends to be placed on the prior probability or extra-ordinariness of the idea. This rarely goes anywhere. But what you can do instead with Bayesian analysis is look at the extra-ordinariness of the evidence (Bayes' Factor). And the Bayes' Factor is independent of the prior probability or any subjective impressions of the likeliness of an idea, and so it gets rid of concerns about whether or not estimates of likeliness are too harsh or too generous.

So rather than asking "is this an extra-ordinary idea?", you can ask "do we have extra-ordinary evidence?" Then comparisons can be made between ideas on the basis of the extra-ordinariness of the evidence which supports the idea, not on the basis of the extra-ordinariness of the idea.
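As a toy illustration of this (my own sketch; the hit counts and rates below are hypothetical, roughly ganzfeld-like numbers): the Bayes factor is just the ratio of how likely the data are under each hypothesis, and no prior odds on the hypotheses appear anywhere in it.

```python
from math import comb

def binom_likelihood(hits, n, p):
    # probability of exactly `hits` successes in `n` trials at rate p
    return comb(n, hits) * p**hits * (1 - p)**(n - hits)

def bayes_factor(hits, n, p0, p1):
    # evidence ratio: likelihood of the data under H1 (rate p1)
    # versus H0 (chance rate p0); prior odds never enter
    return binom_likelihood(hits, n, p1) / binom_likelihood(hits, n, p0)

# hypothetical: 32 hits in 100 trials, 25% chance rate vs a 32% alternative
bf = bayes_factor(32, 100, 0.25, 0.32)   # ~3.4: modest evidence for H1
```

Two people with wildly different prior odds on psi would still agree on this number; only their posteriors differ.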

Linda
 
Can a proponent of ECREE please clarify what "extraordinary evidence" means? Unless there is a formal, agreed-upon definition, this conversation (in all its dimensions) is meaningless. Bart V says it requires BOTH phenomenology (above background) and theory. Obviously the latter requirement (theory) is nonsense (the scientific method and/or the declaration of a scientific fact does not require theory until much later), but the former (above-background phenomenology) is a reasonable requirement.

So how much above background does an effect (phenomenon) have to be to constitute "extraordinary"? What would satisfy someone like Wiseman, who has stated that by all other standards of science psi is proven? There are professional statisticians like Dr. Utts who are wholly satisfied with the evidence for psi. What is the "new standard" (specifically)? I feel many proponents are perpetually vague on this point. Also, demanding 100% repeatability as a qualification for "extraordinary" is a red herring when an experiment involves complex chemical, mechanical, or biological processes/subjects. Countless phenomena have been deemed legitimate and worthy of further elucidation without high repeatability throughout the 20th century.

But the gravitational effect is undeniable, extraordinary evidence.

Dark Matter and/or Dark Energy are not "gravitational effect" in the conventional sense at all. Dark Matter seems to produce gravitational effects (you're confusing cause with effect), and Dark Energy can be characterized as "negative pressure" (i.e. the reverse of what we conceive to be gravity). They are not directly measurable or detectable; they are nebulous postulates, unknown/mysterious mechanisms used to satisfy cosmological accounting practices. Your point is a non sequitur.

But again, if we could observe and predict psi occurring with only a fraction of the accuracy of the behaviour of the planets we would have extraordinary evidence.

How do you define a "fraction" in this context? A fraction of 100%? How high a fraction do you need? "Proof" of psi phenomena is certainly satisfied if your requirements are this broad and undefined. Once again this is an example of the problem with many of these debates (i.e. misdirection through vagaries). You can't compare the predictability of a simple 2-body system in vacuo with labwork involving complex biological subjects without careful qualification. It's apples and oranges to an extreme degree.

Yes, but asking for a paradigm shift without proper evidence or theory is special pleading.

You have still not defined what "proper evidence" is, and your demands for theory as grounds for legitimacy are ahistorical and unjustified with regard to the scientific method as it's been practiced for over a century.

Regards,
JM
 
You'd need to first make the case that Bayesian statistics is the gold standard (if you did, I'd be happy to recognize your Nobel Prize), and why it's superior to standard hypothesis testing for psi. If you're simply rejecting the statistics behind psi because you feel Bayesian statistics is superior to hypothesis testing, then don't you think your efforts are better served criticizing high-cost studies that use hypothesis testing?

I'm not suggesting that studies be done any differently. I'm suggesting that when evaluating the results of a study, everyone uses a Bayesian-like method, even informally, whether they think they do or not. A new study gives you new data to adjust your prior probability estimate. Sorry that wasn't clear.

There's just no other way to do it.
 
Once again, the skeptics drag out Bayesian statistics in their defense, which is ridiculous because used properly, this statistical method supports the evidence for psi.

The skeptics are usually referring to the Wagenmakers, Wetzels, Borsboom, and van der Maas (2011) paper. But that paper was an example of statistical abuse, as was clearly demonstrated in this rebuttal, which is actually pretty readable without a background in statistics and worth the effort. Note the last line I have bolded. It's an instant classic.

http://deanradin.com/evidence/Bem2011.pdf

To perform a Bayesian analysis, one must specify two different types of prior belief. The first and most familiar is the prior odds that H0 rather than H1 is true. It is here that Wagenmakers et al. (2011) formally expressed their prior skepticism about the existence of psi by setting these odds at 99,999,999,999,999,999,999 to 1 in favor of H0. Specifying this type of prior belief gives deniers, believers, and everyone in between the opportunity to express an explicit opinion before taking the data into account.

The second prior belief that must be specified is more complicated and not widely known to those unfamiliar with the details of Bayesian analysis. This is the explicit specification of a probability distribution of effect sizes across a range for both H0 and H1. Specifying the effect size for H0 is simple because it is a single value of 0, but specifying H1 requires specifying a probability distribution across a range of what the effect size might be if H1 were in fact true.

Accordingly, our critique of Wagenmakers et al.’s (2011) analysis is that their choice of H1 was unrealistic. In particular, they assumed that we have no prior knowledge of the likely effect sizes that the experiments were explicitly designed to detect. As Utts et al. (2010) argued,

It is rare that we have no information about a situation before we collect data. If we want to estimate the proportion of a community that is infected with HIV, do we really believe it is equally likely to be anything from 0 to 1? If we want to estimate the mean change in blood pressure after 10 weeks of meditation, do we really believe it could be anything from to ? Even the choice of what hypotheses to test, and whether to make them one-sided or two-sided is an illustration of using prior knowledge. (p. 2)

In general, we know that effect sizes in psychology typically fall in the range of 0.2 to 0.3. A survey of “one hundred years of social psychology” that cataloged 25,000 studies of eight million people yielded a mean effect size (r) of .21 (Richard, Bond, & Stokes-Zoota, 2003). An example relevant to Bem’s (2011) retroactive habituation experiments is Bornstein’s (1989) meta-analysis of 208 mere exposure studies, which yielded an effect size (r) of .26.

We even have some knowledge about previous psi experiments. The Bayesian meta-analysis of 56 telepathy studies, cited above, revealed a Cohen’s h effect size of approximately 0.18 (Utts et al., 2010), and the meta-analysis of 38 presentiment studies, also cited above, yielded a mean effect size of 0.28 (Mossbridge et al., 2011).

Consequently, no reasonable observer would ever expect effect sizes in laboratory psi experiments to be greater than 0.8—what Cohen (1988) terms a large effect. Cohen noted that even a medium effect of 0.5 “is large enough to be visible to the naked eye” (p. 26). Yet the prior distribution for H1 that Wagenmakers et al. (2011) adopted places a probability of .57 on effect sizes that equal or exceed 0.8. It even places a probability of .06 on effect sizes exceeding 10. If effect sizes were really that large, there would be no debate about the reality of psi. Thus, the prior distribution Wagenmakers et al. placed on the possible effect sizes under H1 is wildly unrealistic.


Their unfortunate choice has major consequences for their conclusions about Bem’s data. Whenever the null hypothesis is sharply defined but the prior distribution on the alternative hypothesis is diffused over a wide range of values, as it is in the distribution adopted by Wagenmakers et al. (2011), it boosts the probability that any observed data will be higher under the null hypothesis than under the alternative. This is known as the Lindley–Jeffreys paradox: A frequentist analysis that yields strong evidence in support of the experimental hypothesis can be contradicted by a misguided Bayesian analysis that concludes that the same data are more likely under the null. Christensen, Johnson, Branscum, and Hanson (2011) discussed an analysis comparable to that of Wagenmakers et al., noting that “the moral of the Lindley–Jeffreys paradox is that if you pick a stupid prior, you can get a stupid posterior” (p. 60).
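The Lindley–Jeffreys effect described in that rebuttal can be reproduced numerically. In the sketch below (the 530-hits-in-1000-trials count is hypothetical), the same data give a Bayes factor above 1 under a realistically narrow uniform prior on the hit rate, but below 1 under a diffuse prior, i.e. a vague H1 makes the data look like evidence for the null:

```python
import math

def log_binom(hits, n, p):
    # log of the binomial pmf: `hits` successes in `n` trials at rate p
    return (math.lgamma(n + 1) - math.lgamma(hits + 1) - math.lgamma(n - hits + 1)
            + hits * math.log(p) + (n - hits) * math.log(1 - p))

def marginal(hits, n, lo, hi, steps=10000):
    # average likelihood under a uniform prior on p over [lo, hi]
    # (midpoint rule, so p never hits 0 or 1 exactly)
    total = 0.0
    for i in range(steps):
        p = lo + (hi - lo) * (i + 0.5) / steps
        total += math.exp(log_binom(hits, n, p))
    return total / steps

hits, n = 530, 1000                              # hypothetical small effect over 50% chance
null = math.exp(log_binom(hits, n, 0.5))         # likelihood under H0: p = 0.5
bf_narrow = marginal(hits, n, 0.5, 0.6) / null   # realistic H1: rates up to 60%
bf_wide   = marginal(hits, n, 0.5, 1.0) / null   # diffuse H1: anything up to 100%
```

Here `bf_narrow` favors H1 while `bf_wide` favors H0 on the same data: pick a stupid prior, get a stupid posterior.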
 