Is Caroline Watt misrepresenting Stargate research and Parapsychology research in this article?

#2
I was more surprised by her statement about publication bias. I know the publishing of negative results in parapsychology has been encouraged since the early 1970s, so maybe she was talking about something more recent. Perhaps, with the decline in the number of parapsychology journals, such a policy isn't feasible any more. I'll email her this evening and ask.
 
#3
I was more surprised by her statement about publication bias. I know the publishing of negative results in parapsychology has been encouraged since the early 1970s, so maybe she was talking about something more recent. Perhaps, with the decline in the number of parapsychology journals, such a policy isn't feasible any more. I'll email her this evening and ask.
Even with such a policy, publication bias is still a risk (we know meta-analyses have included unpublished studies, and the Ganzfeld literature has shown that adding or removing only a small number of studies can change the results from significant to non-significant).

But it's a red herring. Given that there are plenty of other risks of bias in the studies that future research needs to address, there is no point belabouring publication bias, because it is one of the easier problems to fix: the solution is preregistration. If future meta-analyses only use preregistered studies, there will be no issue with publication bias of this type.
 
#4
Even with such a policy, publication bias is still a risk (we know meta-analyses have included unpublished studies, and the Ganzfeld literature has shown that adding or removing only a small number of studies can change the results from significant to non-significant).

But it's a red herring. Given that there are plenty of other risks of bias in the studies that future research needs to address, there is no point belabouring publication bias, because it is one of the easier problems to fix: the solution is preregistration. If future meta-analyses only use preregistered studies, there will be no issue with publication bias of this type.
Publication bias has been ruled out as a factor behind the positive significant results in parapsychology. I have never heard anyone say it would take only a small number of unpublished negative results to return the Ganzfeld (or any of the other main classes of research) to the null; usually it would take a very large number of studies, requiring a great deal of manpower and hours.

There's also always the assumption that unpublished results are negative; some unpublished studies have achieved positive significant results, and Radin has mentioned that some independent researchers haven't published their positive results because of the reaction they fear they would get.

I also think you are talking as if the risk of bias undermines the results, yet as parapsychology experiments have improved in quality, the evidence has not gone away.
 
#5
I was more surprised by her statement about publication bias. I know the publishing of negative results in parapsychology has been encouraged since the early 1970s, so maybe she was talking about something more recent. Perhaps, with the decline in the number of parapsychology journals, such a policy isn't feasible any more. I'll email her this evening and ask.
I think she was just being a bit misleading and disingenuous, to be honest; you'd think she'd mention this policy, since it's been in place for such a long time.

Also, why didn't she mention the conclusions of, say, Jessica Utts regarding Stargate?
 
#6
Probably, but these sorts of articles are actually useful, as they keep the topic open. erby's objections about the accuracy of the submarine model in one of the Stargate sessions actually got me reading more about the subject. Initially I found the discrepancies odd and wondered about leaks that would allow them to "cheat", but as I read the actual material, the true oddity of the practice became apparent: they don't claim to receive photograph-like images, but actually cite (and draw) a bunch of abstract concepts that are in the vicinity. Back then I did not realize that 100% accuracy would be (perhaps even more) indicative of fraud; that changed once I compared the sessions to the equally irregular physiological field and figured that, since this involved similar monitoring of live subjects, a margin of error must be expected.

It is a shame that some of those involved decided to commercialize it, leading to hype and misrepresentation of what is otherwise a promising research area that should have stayed in the lab. On the other hand, anyone who thinks the military was stupid enough to fund a project this long without results was not paying attention to Cold War mentality and how it affected spending. The budget (including the "black" part) was not an all-you-can-eat buffet, and equally odd projects with similar costs (like Acoustic Kitty) were discontinued fairly quickly due to lack of results.
 
#7
Probably, but these sorts of articles are actually useful, as they keep the topic open. erby's objections about the accuracy of the submarine model in one of the Stargate sessions actually got me reading more about the subject. Initially I found the discrepancies odd and wondered about leaks that would allow them to "cheat", but as I read the actual material, the true oddity of the practice became apparent: they don't claim to receive photograph-like images, but actually cite (and draw) a bunch of abstract concepts that are in the vicinity. Back then I did not realize that 100% accuracy would be (perhaps even more) indicative of fraud; that changed once I compared the sessions to the equally irregular physiological field and figured that, since this involved similar monitoring of live subjects, a margin of error must be expected.

It is a shame that some of those involved decided to commercialize it, leading to hype and misrepresentation of what is otherwise a promising research area that should have stayed in the lab. On the other hand, anyone who thinks the military was stupid enough to fund a project this long without results was not paying attention to Cold War mentality and how it affected spending. The budget (including the "black" part) was not an all-you-can-eat buffet, and equally odd projects with similar costs (like Acoustic Kitty) were discontinued fairly quickly due to lack of results.
I think some of the Stargate sessions were very accurate and impressive, to be honest; obviously others not so much.

The big difference between the results selected/experienced viewers get vs novice viewers is suggestive of a real effect.

I agree it would be better in the lab. I've heard of a few people using remote viewing in the stock market and making a decent amount of money. It would be good to see more practical applications, and to run more robust studies with selected individuals to boost power, effect size and replication rates.
 
#8
Publication bias has been ruled out as a factor behind the positive significant results in parapsychology. I have never heard anyone say it would take only a small number of unpublished negative results to return the Ganzfeld (or any of the other main classes of research) to the null; usually it would take a very large number of studies, requiring a great deal of manpower and hours.
By "ruled out" I think you're referring to the models that have been advanced to try to calculate how many unpublished studies would need to exist to shift the results. I've seen those studies and discussions. There is debate about the validity of those calculations; unfortunately, my stats chops are not up to the task of evaluating them myself.

However, we have a simple, real-world example that shows just how few studies were needed to change the results from non-significant to significant. I'm going off memory here, but look at the Wiseman Ganzfeld meta-analysis that showed non-significance. Others took it, reanalyzed it, removed the 8 or 9 studies that they deemed should not have been included, and voila! Significance!

So while I'm not able to properly evaluate the math, I'm quite skeptical of the formulas that predict hundreds of studies are needed. And of course there are competing models that project much smaller numbers. People can argue till they are blue in the face reanalyzing old studies, but my focus is looking forward. Just preregister, and the risk of that particular bias is virtually eliminated.
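For anyone who wants to see the shape of the calculation being argued over, the classic version is Rosenthal's "fail-safe N": how many unpublished null results would have to be sitting in file drawers to drag a combined result below significance. A rough Python sketch, with made-up z-scores purely for illustration (not from any actual meta-analysis):

```python
import math

def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal's fail-safe N: how many unpublished null studies (z = 0)
    would be needed to pull the combined Stouffer Z below the one-tailed
    .05 cutoff (z_alpha)."""
    z_sum = sum(z_scores)
    k = len(z_scores)
    # Combined Z with n extra null studies is z_sum / sqrt(k + n);
    # solving z_sum / sqrt(k + n) = z_alpha for n gives:
    n = (z_sum / z_alpha) ** 2 - k
    return max(0, math.ceil(n))

# Ten hypothetical studies, each with z = 2.0 (illustrative numbers only):
fail_safe_n([2.0] * 10)  # 138 unpublished null studies needed
```

The point of contention in the thread is exactly how sensitive numbers like this are to the assumptions fed in; the formula itself is the easy part.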
There's also always the assumption that unpublished results are negative; some unpublished studies have achieved positive significant results, and Radin has mentioned that some independent researchers haven't published their positive results because of the reaction they fear they would get.
I don't make those assumptions. But it highlights that there is a potential issue that pre-registration would completely solve.

I also think you are talking as if the risk of bias undermines the results, yet as parapsychology experiments have improved in quality, the evidence has not gone away.
I don't look at it as undermining the results. I look at them as preliminary studies. It's not about the evidence going away; it's about where to go from here. Preliminary studies are almost always replete with risks of bias, because it takes a lot of resources to run the higher-quality studies. The goal of these higher-risk studies is to identify which hypotheses merit going further.

Yes, research methods in parapsychology have improved, but when we take a close look at studies we still see a lot of red flags. Sample size/power is a big one. The manner in which the methodology is set out in studies is another. There are risks of internal selective reporting (i.e., not determining in advance which tests will be run, or doing multiple tests but not reporting all the results). Not to mention the trickiest one, which is that it is exceptionally difficult to design an experiment that isolates psi as the variable (in other words, we have to be careful, because a comparison to chance at best demonstrates that non-chance elements were involved; isolating what those elements are is very difficult). There are others.

None of what I mentioned is an argument that psi was not involved; these are just some of the issues that need to be addressed in order to confidently consider psi to have been demonstrated.

Remember, these are issues that apply across the board in science. Most of the work determining their importance was done in fields outside of parapsychology. Recently, that importance has come to light in discussions around the much-publicized replication crisis in psychology.

Remember as well that evaluating methodology must be done independently of the results. Our view of how sound the methodology was should not change based on the results of the experiment.
 
#10
That's a particularly stunning example! Also, note that the file drawer effect is not the only concern with pre-registration:


Irvin says that by having to state their methods and measurements before starting their trial, researchers cannot then cherry-pick data to find an effect once the study is over. “It’s more difficult for investigators to selectively report some outcomes and exclude others,” she says.
For some reason the file drawer gets all the attention from many on this forum. The issues go much deeper.

And it's not about fraud. Deciding and declaring these things in advance helps control for the unconscious ways that scientists can bias their results. It is the scientific method that helps scientists become more objective, not some particular personality characteristic that makes them less biased than the average person!
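The outcome-switching point can be made concrete with a toy simulation (my own sketch, not drawn from any actual study): if a true-null "study" measures several outcomes and only the best one gets reported, the nominal 5% false-positive rate inflates to roughly 1 - 0.95^5, about 23%, with no fraud involved anywhere.

```python
import random

def selective_reporting_fpr(n_studies=10000, n_outcomes=5, alpha=0.05, seed=1):
    # Under a true null, each outcome's p-value is uniform on [0, 1].
    # If only the best of n_outcomes is reported, the chance that a
    # "study" looks significant rises from alpha to ~ 1 - (1 - alpha)^n.
    rng = random.Random(seed)
    hits = sum(
        min(rng.random() for _ in range(n_outcomes)) < alpha
        for _ in range(n_studies)
    )
    return hits / n_studies

selective_reporting_fpr()  # roughly 0.22-0.23 across seeds
```

This is exactly the failure mode that declaring outcomes in advance is designed to close off.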
 
#11
By "ruled out" I think you're referring to the models that have been advanced to try to calculate how many unpublished studies would need to exist to shift the results. I've seen those studies and discussions. There is debate about the validity of those calculations; unfortunately, my stats chops are not up to the task of evaluating them myself.

However, we have a simple, real-world example that shows just how few studies were needed to change the results from non-significant to significant. I'm going off memory here, but look at the Wiseman Ganzfeld meta-analysis that showed non-significance. Others took it, reanalyzed it, removed the 8 or 9 studies that they deemed should not have been included, and voila! Significance!

So while I'm not able to properly evaluate the math, I'm quite skeptical of the formulas that predict hundreds of studies are needed. And of course there are competing models that project much smaller numbers. People can argue till they are blue in the face reanalyzing old studies, but my focus is looking forward. Just preregister, and the risk of that particular bias is virtually eliminated.


I don't make those assumptions. But it highlights that there is a potential issue that pre-registration would completely solve.



I don't look at it as undermining the results. I look at them as preliminary studies. It's not about the evidence going away; it's about where to go from here. Preliminary studies are almost always replete with risks of bias, because it takes a lot of resources to run the higher-quality studies. The goal of these higher-risk studies is to identify which hypotheses merit going further.

Yes, research methods in parapsychology have improved, but when we take a close look at studies we still see a lot of red flags. Sample size/power is a big one. The manner in which the methodology is set out in studies is another. There are risks of internal selective reporting (i.e., not determining in advance which tests will be run, or doing multiple tests but not reporting all the results). Not to mention the trickiest one, which is that it is exceptionally difficult to design an experiment that isolates psi as the variable (in other words, we have to be careful, because a comparison to chance at best demonstrates that non-chance elements were involved; isolating what those elements are is very difficult). There are others.

None of what I mentioned is an argument that psi was not involved; these are just some of the issues that need to be addressed in order to confidently consider psi to have been demonstrated.

Remember, these are issues that apply across the board in science. Most of the work determining their importance was done in fields outside of parapsychology. Recently, that importance has come to light in discussions around the much-publicized replication crisis in psychology.

Remember as well that evaluating methodology must be done independently of the results. Our view of how sound the methodology was should not change based on the results of the experiment.
I don't have time to slice and quote haha so each number is responding to each point of yours in order!

1) The methods are generally pretty good, and the majority of calculations, done by different people with different methods, have found that the number of unpublished negative results needed to bring the overall results back to null is implausibly large. Johann and Maaneli's paper is best on this subject, but effectively, there is nowhere near the number of people doing this research to make the 'file drawer' effect even remotely tenable.

2) Exactly. The only one of nine Ganzfeld meta-analyses that didn't find a positive effect (the one you mentioned) was found to be flawed, and its flaws pushed the result in the negative direction; it only took the removal of a few studies to return it to significance. Whereas to take the current literature from positive to null, you would need a huge number of unpublished negative studies. In Wiseman's meta-analysis I think he included a few process-oriented studies instead of proof-oriented ones, which you shouldn't do if you want to find out whether there is an effect or not.

3) What models project a smaller number of unpublished results needed to return it to the null? Can you point me to them? The only reason these old studies keep being reanalysed and debated is that some people won't accept the evidence for psi. I mean, Watt looked into the unpublished studies parapsychologists had, and a significant proportion of them were positive, so the unpublished studies that were found did not support the 'file-drawer effect'.

4) Pre-registering would be an improvement. If all studies were pre-registered and positive results continued to be found, would you change your mind?

5) I think parapsychology has gone way past the 'preliminary studies' stage in terms of proof; in terms of mechanism, you would be correct, though.

6) Low power/sample size isn't a 'red flag'; it just makes an effect harder to find if the effect isn't that large. Increase the power and sample size, and replication rates (and potentially effect sizes) will increase. Internal selective reporting is a potential issue in all of science. I think a lot of the studies done have pretty effectively controlled for what at least appears to be psi, especially when you read some of the sessions in the Ganzfeld, for example.

7) I agree that we can and should continue to improve parapsychology research; however, by normal scientific standards psi has been established. The only reason the debate is still going on is because of what we're studying; if it were anything else, it would have been accepted long ago.

8) These issues do exist across science; however, due to the higher scrutiny, parapsychology research is usually of better quality than mainstream psychology research, and it matches or beats psychology in terms of replication rates, with far less in the way of resources, funding and manpower.

9) True, but as I'm sure you're aware, better methodology and better experimental design in parapsychology hasn't led to a decline in results; it has usually either not affected the results or led to a slight incline.

Cheers!
 
#12
I think some of the Stargate sessions were very accurate and impressive, to be honest; obviously others not so much.
Indeed. But frankly, if there wasn't a skeptic attack against the project, I would not have been interested in figuring out what they were claiming in the first place.

The inner workings are best illustrated in one of the Geller sessions (Geller being someone I personally dislike for his "controversy sells" attention-seeking), where the target was a drawing of firecrackers, which led to the assessment that it was "a noise-producing cylinder". Naturally, something so abstract could have different interpretations (e.g., a drum), but the underlying concept was not inaccurate per se, only obscure and apparently limited to the aspects that the viewer can identify. That was when I figured that debating the most intricate design features of a submarine was a tangent.

I don't have time to slice and quote haha so each number is responding to each point of yours in order!

1) The methods are generally pretty good, and the majority of calculations, done by different people with different methods, have found that the number of unpublished negative results needed to bring the overall results back to null is implausibly large. Johann and Maaneli's paper is best on this subject, but effectively, there is nowhere near the number of people doing this research to make the 'file drawer' effect even remotely tenable.

2) Exactly. The only one of nine Ganzfeld meta-analyses that didn't find a positive effect (the one you mentioned) was found to be flawed, and its flaws pushed the result in the negative direction; it only took the removal of a few studies to return it to significance. Whereas to take the current literature from positive to null, you would need a huge number of unpublished negative studies. In Wiseman's meta-analysis I think he included a few process-oriented studies instead of proof-oriented ones, which you shouldn't do if you want to find out whether there is an effect or not.

3) What models project a smaller number of unpublished results needed to return it to the null? Can you point me to them? The only reason these old studies keep being reanalysed and debated is that some people won't accept the evidence for psi. I mean, Watt looked into the unpublished studies parapsychologists had, and a significant proportion of them were positive, so the unpublished studies that were found did not support the 'file-drawer effect'.

4) Pre-registering would be an improvement. If all studies were pre-registered and positive results continued to be found, would you change your mind?

5) I think parapsychology has gone way past the 'preliminary studies' stage in terms of proof; in terms of mechanism, you would be correct, though.

6) Low power/sample size isn't a 'red flag'; it just makes an effect harder to find if the effect isn't that large. Increase the power and sample size, and replication rates (and potentially effect sizes) will increase. Internal selective reporting is a potential issue in all of science. I think a lot of the studies done have pretty effectively controlled for what at least appears to be psi, especially when you read some of the sessions in the Ganzfeld, for example.

7) I agree that we can and should continue to improve parapsychology research; however, by normal scientific standards psi has been established. The only reason the debate is still going on is because of what we're studying; if it were anything else, it would have been accepted long ago.

8) These issues do exist across science; however, due to the higher scrutiny, parapsychology research is usually of better quality than mainstream psychology research, and it matches or beats psychology in terms of replication rates, with far less in the way of resources, funding and manpower.

9) True, but as I'm sure you're aware, better methodology and better experimental design in parapsychology hasn't led to a decline in results; it has usually either not affected the results or led to a slight incline.

Cheers!
Yeah... You will not convince him. All of this has been said to him in every manner possible, and he always resets back to the "methodological issues" stance. Don't bother; he is looking right through you and trying to entice a couple of lurkers with that wall of text.

Edit: Something that I wish had been studied more thoroughly is how the qualities of the target affect the sessions. Compare, for example, how a hand-drawn picture and a photograph fare against each other, and in turn how static depictions fare against coordinates.
 
#13
I don't have time to slice and quote haha so each number is responding to each point of yours in order!

1) The methods are generally pretty good, and the majority of calculations, done by different people with different methods, have found that the number of unpublished negative results needed to bring the overall results back to null is implausibly large. Johann and Maaneli's paper is best on this subject, but effectively, there is nowhere near the number of people doing this research to make the 'file drawer' effect even remotely tenable.
I think we have a different opinion of that paper but again, I think the whole issue of file drawer is a red herring anyway. Do you agree that going forward only preregistered studies should be used? if so then we're not that far apart! :)

2) Exactly. The only one of nine Ganzfeld meta-analyses that didn't find a positive effect (the one you mentioned) was found to be flawed, and its flaws pushed the result in the negative direction; it only took the removal of a few studies to return it to significance. Whereas to take the current literature from positive to null, you would need a huge number of unpublished negative studies. In Wiseman's meta-analysis I think he included a few process-oriented studies instead of proof-oriented ones, which you shouldn't do if you want to find out whether there is an effect or not.
I think you are missing my point about the significance of the 9! The point is that a couple of handfuls of studies were enough to turn the results from one to the other. Look at the article that malf posted: speculation is fine, but the best way to test whether something is a problem is to perform the experiments with and without the risk present and see if there are changes! This is a much sounder way to proceed.
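A toy Stouffer-method illustration of this sensitivity (the numbers here are entirely made up for illustration, not the actual Ganzfeld data): a combined result can swing across the significance line when only a handful of studies is added or removed.

```python
import math

def stouffer_z(zs):
    # Stouffer's method: combined Z = sum(z_i) / sqrt(k)
    return sum(zs) / math.sqrt(len(zs))

mild_positives = [0.5] * 25   # hypothetical mildly positive studies
disputed = [-2.0] * 5         # hypothetical disputed negative studies

stouffer_z(mild_positives + disputed)  # ~0.46, not significant
stouffer_z(mild_positives)             # 2.5, past the one-tailed 1.645 cutoff
```

With a combined Z sitting near the cutoff, arguments over which handful of studies belongs in the database become arguments over the headline conclusion itself.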

But again, I'm not too concerned with the file drawer anyway! The Ganzfeld has bigger risks of bias that need to be addressed going forward! The fact that virtually the entire database is underpowered for the reported effect is a big one. Many of the included studies are incredibly small, with odd numbers of trials that were not declared in advance.

3) What models project a smaller number of unpublished results needed to return it to the null? Can you point me to them? The only reason these old studies keep being reanalysed and debated is that some people won't accept the evidence for psi. I mean, Watt looked into the unpublished studies parapsychologists had, and a significant proportion of them were positive, so the unpublished studies that were found did not support the 'file-drawer effect'.
Oh man, I'd have to dig that up. I may be up for that but honestly not today. Again, I don't think the file drawer effect is a big deal here and it is really a distraction from more serious issues.

With all due respect, this whole "they just won't accept the evidence for psi" line is not an argument. There are very legitimate issues present in the work, based on the best we know about research practices. Lines like that have the effect of discouraging people from looking at them. It's peer pressure: trying to make someone feel closed-minded for looking closely at the methodology. When you look at the meta-research that scientists like the Cochrane group, John Ioannidis and others are doing, I think it is worth taking a second look rather than going back to old debates.

4) Pre-registering would be an improvement. If all studies were pre-registered and positive results continued to be found, would you change your mind?
I'd like to think that if the studies were done with low risk of bias, using methodologies that allowed a confident result, it would convince me (there's more going on than just the file drawer, but I assume we're simplifying for the sake of discussion). I don't think I have much control over my beliefs: the best I can do is force myself to review the information and then reflect on what my beliefs are!

5) I think parapsychology has gone way past the 'preliminary studies' stage in terms of proof; in terms of mechanism, you would be correct, though.
Talking in generalities isn't that helpful. Maybe pick an area and we can look at it more closely. Or pick a paper that we can go through.

6) Low power/sample size isn't a 'red flag'; it just makes an effect harder to find if the effect isn't that large. Increase the power and sample size, and replication rates (and potentially effect sizes) will increase. Internal selective reporting is a potential issue in all of science. I think a lot of the studies done have pretty effectively controlled for what at least appears to be psi, especially when you read some of the sessions in the Ganzfeld, for example.
I'm sorry, but underpowered studies are a huge risk of bias. I posted a study a while back that compared meta-analyses of underpowered vs. sufficiently powered studies; IIRC, they found that even a single sufficiently powered study was far more reliable than an entire slew of underpowered studies (I can dig it up later if you like). As I understand it, this is even more of an issue when dealing with small effect sizes.
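To put rough numbers on the power point, here is my own back-of-envelope sketch, assuming the commonly cited 25% chance baseline for a four-choice Ganzfeld trial and a hypothetical ~32% true hit rate:

```python
from math import comb

def p_at_least(k, n, p):
    # P(X >= k) for X ~ Binomial(n, p), computed exactly
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def ganzfeld_power(n_trials, p_chance=0.25, p_true=0.32, alpha=0.05):
    # Smallest hit count that would be significant under chance, then the
    # probability of actually reaching it if the assumed true rate holds.
    crit = next(k for k in range(n_trials + 1)
                if p_at_least(k, n_trials, p_chance) <= alpha)
    return p_at_least(crit, n_trials, p_true)

# A typical small study vs. a large one under these assumptions:
small, large = ganzfeld_power(30), ganzfeld_power(300)
```

Under these assumptions a 30-trial study detects the effect only a small fraction of the time, while a 300-trial study has high power, which is why a database dominated by small studies is so hard to interpret.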

This is what I mean by preliminary. Look at the crisis in psychology: you had an entire field of research, with hundreds of studies dominated by small, underpowered designs, devoted to an effect that we now know just wasn't there. Take a close look at the replication crisis there and tell me that you don't see many of the same issues. This isn't surprising, since parapsychology has taken its lead from psychology.

7) I agree that we can and should continue to improve parapsychology research; however, by normal scientific standards psi has been established. The only reason the debate is still going on is because of what we're studying; if it were anything else, it would have been accepted long ago.
What normal scientific standard? Are you talking about Wiseman's quote? He's a psychologist! The entire field is in crisis! Read J.E. Kennedy's paper on it. The question is: based on looking at the methodology alone, what risk of bias do we see? Have you taken a look at the Cochrane handbook on meta-analysis? It comes from the field of medicine, so it needs some adjustment to apply, but read the sections on bias in particular.

8) These issues do exist across science; however, due to the higher scrutiny, parapsychology research is usually of better quality than mainstream psychology research, and it matches or beats psychology in terms of replication rates, with far less in the way of resources, funding and manpower.
With all due respect, that's a talking point. I agree parapsychology has done a lot with a little; it is plagued by underfunding. But that's also why the studies are largely preliminary: it takes resources to construct the higher-quality studies. That's why the sample sizes are often so small: they don't have the budget for larger studies!

And forget about this whole comparison business. Studies must be evaluated on their own merits; the fact that other fields of research carry similar risks of bias doesn't make this work more reliable! The analysis has to be done individually for every study!

9) True, but as I'm sure you're aware, better methodology and better experimental design in parapsychology hasn't led to a decline in results; it has usually either not affected the results or led to a slight incline.
Again, with all due respect, this is vague, talking-point stuff. I hear it repeated all over the place, but when I sit down and look at individual studies I see all sorts of problems (and I'm just a layperson!). Maybe pick a study you particularly like and we can go through it.
 
#14
Yeah... You will not convince him. All of this has been said to him in every manner possible, and he always resets back to the "methodological issues" stance. Don't bother; he is looking right through you and trying to entice a couple of lurkers with that wall of text.
Heh. Yes, this stuff has been said before, and I've responded with specific issues and linked to my sources. For some reason the discussion then seems to die only to be resurrected from scratch again...

As for the wall of text, some issues take more discussion than can be captured in soundbites.
 
#15
I was under the impression that General Aquino's paper "MindWar", which was accidentally declassified, was strong evidence that Stargate is still alive.
 
#16
I think we have a different opinion of that paper but again, I think the whole issue of file drawer is a red herring anyway. Do you agree that going forward only preregistered studies should be used? if so then we're not that far apart! :)



I think you are missing my point about the significance of the 9! The point is that a couple of handfuls of studies were enough to turn the results from one to the other. Look at the article that malf posted: speculation is fine, but the best way to test whether something is a problem is to perform the experiments with and without the risk present and see if there are changes! This is a much sounder way to proceed.

But again, I'm not too concerned with the file drawer anyway! The Ganzfeld has bigger risks of bias that need to be addressed going forward! The fact that virtually the entire database is underpowered for the reported effect is a big one. Many of the included studies are incredibly small, with odd numbers of trials that were not declared in advance.



Oh man, I'd have to dig that up. I may be up for that but honestly not today. Again, I don't think the file drawer effect is a big deal here and it is really a distraction from more serious issues.

With all due respect, this whole "they just won't accept the evidence for psi" line is not an argument. There are very legitimate issues present in the work, based on the best we know about research practices. Lines like that have the effect of discouraging people from looking at them. It's peer pressure: trying to make someone feel closed-minded for looking closely at the methodology. When you look at the meta-research that scientists like the Cochrane group, John Ioannidis and others are doing, I think it is worth taking a second look rather than going back to old debates.



I'd like to think that if the studies were done with low risk of bias, using methodologies that allowed a confident result, it would convince me (there's more going on than just the file drawer, but I assume we're simplifying for the sake of discussion). I don't think I have much control over my beliefs: the best I can do is force myself to review the information and then reflect on what my beliefs are!



Talking in generalities isn't that helpful. Maybe pick an area and we can look at it more closely. Or pick a paper that we can go through.



I'm sorry, but underpowered studies are a huge risk of bias. I posted a study a while back that compared meta-analyses of underpowered vs. sufficiently powered studies; IIRC they found that even a single sufficiently powered study was far more reliable than an entire slew of underpowered ones (I can dig it up later if you like). As I understand it, this is even more of an issue when dealing with small effect sizes.

This is what I mean by preliminary. Look at the replication crisis in psychology: you had an entire field of research devoted to an effect, with hundreds of small, underpowered studies, that we now know just wasn't there. Take a close look at that crisis and tell me you don't see many of the same issues. This isn't surprising, since parapsychology has taken its lead from psychology.



What normal scientific standard? Are you talking about Wiseman's quote? He's a psychologist! The entire field is in crisis! Read J.E. Kennedy's paper on it. The question is: based just on looking at the methodology, what risk of bias do we see? Have you taken a look at the Cochrane Handbook on meta-analysis? It's from the field of medicine, so it needs some adjustment to apply, but read the sections on bias in particular.



With all due respect, that's a talking point. I agree parapsychology has done a lot with a little. It is plagued by underfunding. But that's also why the studies are largely preliminary: it takes resources to construct higher quality studies. That's why the sample sizes are often so small: they don't have the budget for larger studies!

And forget about this whole comparison thing. Studies must be evaluated on their own merits; just because other fields of research have similar risks of bias doesn't make this work more reliable! The analysis has to be done individually on every study!



Again, with all due respect, this is vague, talking-point stuff. I hear it repeated all over the place, but when I sit down and look at individual studies I see all sorts of stuff (and I'm just a layperson!). Maybe pick a study you particularly like and we can go through it.
1) I'm glad we agree the file drawer is a red herring (it's also just not something informed 'Skeptics' use anymore, and for good reason), but yes, pre-registering from now on would further improve the already strong evidence Parapsychology has amassed.

2) I didn't miss your point, honestly; you're using an example of how a meta-analysis went from null and non-significant to significant. That's because the evidence for psi is already positive and significant, so turning a flawed meta-analysis significant doesn't take many studies, but turning it from positive and significant to the null takes a boatload of studies!

3) Being underpowered isn't a big issue, due to the number of studies and trials done; it just makes it harder for individual studies to achieve significance. Larger studies with pre-declared numbers of trials would be better. These issues don't undermine the Ganzfeld research in a meaningful way, though; solving them would just improve the evidence.

4) Yes, we agree the file drawer isn't a big issue, but you keep talking about pre-registering, which made it seem like you thought it was.

5) The only reason we are going back to old debates is because improvements have been continually made to psi research and the effect hasn't gone away, but you, for example, want even more improvements before we can conclude psi has been reliably demonstrated! Why can't we say it has, but that further improvements would be a good thing anyway?

6) So you probably wouldn't change your mind then? ;) The evidence is there; apply this level of skepticism to yourself! You probably assume I was born a believer. I was not; tons of research is what convinced me.

7) Yes, higher powered studies would be better, but the number of replications done in, say, the Ganzfeld makes your concerns untenable.

8) The same issues don't affect Parapsychology to the same degree, due to the amount of critical and at times libellous scrutiny the field has received. In terms of replication, the Ganzfeld exceeds many areas of neuroscience, for example.

9) You can use Wiseman's quote, or Utts's conclusions (she's a statistician). I simply don't think the issues in mainstream psychology apply to Parapsychology anywhere near as much. Parapsychologists have been doing higher quality research, publishing null results since the 1970s, etc. The field can be improved, but it doesn't need to keep chasing a never-ending goalpost to satisfy people who won't address their own biases!

10) Again, I just don't agree that Parapsychology studies, at least proof-oriented ones, are preliminary.

11) Individual studies don't tell us much of anything; it's the accumulated evidence that is telling. If you want to go through a paper, Johann and Maaneli's is the best one, and it addresses many of the issues you've brought up here.
 
#17
1) I'm glad we agree the file drawer is a red herring (it's also just not something informed 'Skeptics' use anymore, and for good reason), but yes, pre-registering from now on would further improve the already strong evidence Parapsychology has amassed.

2) I didn't miss your point, honestly; you're using an example of how a meta-analysis went from null and non-significant to significant. That's because the evidence for psi is already positive and significant, so turning a flawed meta-analysis significant doesn't take many studies, but turning it from positive and significant to the null takes a boatload of studies!
I guess we'll have to agree to disagree on this one.

3) Being underpowered isn't a big issue, due to the number of studies and trials done; it just makes it harder for individual studies to achieve significance. Larger studies with pre-declared numbers of trials would be better. These issues don't undermine the Ganzfeld research in a meaningful way, though; solving them would just improve the evidence.
Well, I've based my understanding on research I've read on the issue. From my understanding, the experts in research methodology disagree with you. I'm not sure what you are basing this view on.

4) Yes, we agree the file drawer isn't a big issue, but you keep talking about pre-registering, which made it seem like you thought it was.
I think I've said my piece on this.

5) The only reason we are going back to old debates is because improvements have been continually made to psi research and the effect hasn't gone away, but you, for example, want even more improvements before we can conclude psi has been reliably demonstrated! Why can't we say it has, but that further improvements would be a good thing anyway?
You can say what you like, but based on my understanding of best research practices, there are many risks of bias in the work that require us to limit how reliable we consider the results. Further research is not just a good thing but necessary in order to improve the confidence level.

6) So you probably wouldn't change your mind then? ;) The evidence is there; apply this level of skepticism to yourself! You probably assume I was born a believer. I was not; tons of research is what convinced me.
I'm not sure you read what I wrote. I'm also not sure why you are assuming I'm not well versed in the research. I'm pretty well read in terms of parapsychology. I've also done a lot of reading on research methodology generally.

7) Yes, higher powered studies would be better, but the number of replications done in, say, the Ganzfeld makes your concerns untenable.
You say this, but like I said, other research on this matter indicates otherwise, and we're seeing in psychology a bit of a practical example of it. You haven't mentioned why you think what has been such a problem for psychology would not be a problem for parapsychology.

8) The same issues don't affect Parapsychology to the same degree, due to the amount of critical and at times libellous scrutiny the field has received. In terms of replication, the Ganzfeld exceeds many areas of neuroscience, for example.
So the fact that neuroscience also has reliability problems somehow makes these studies more reliable? You are doing yourself a disservice by keeping things so general and comparative. Look at individual studies. Analyze the methodology used; draw conclusions about that study alone. You have to proceed study by study.

9) You can use Wiseman's quote, or Utts's conclusions (she's a statistician). I simply don't think the issues in mainstream psychology apply to Parapsychology anywhere near as much. Parapsychologists have been doing higher quality research, publishing null results since the 1970s, etc. The field can be improved, but it doesn't need to keep chasing a never-ending goalpost to satisfy people who won't address their own biases!
I've referred to the research that informs my opinion here; you haven't really asked me about it. I'm not sure what your opinion is based on. Honestly, it sounds like the talking points we hear over and over, but it doesn't really get into the meat. Like I said, why not pick a paper and we can go through it more closely (keeping in mind I'm not an expert; I'm a layperson who has worked through this stuff over many years).

10) Again, I just don't agree that Parapsychology studies, at least proof-oriented ones, are preliminary.
I tried to elaborate on what I meant by preliminary; I'm not sure what in particular you are disagreeing with. Put aside the word "preliminary" if that's what's causing the issue. The studies have high risks of bias that affect their reliability. Again, I suggest you read the Cochrane Handbook chapter on bias (I think it's chapter 8).

11) Individual studies don't tell us much of anything; it's the accumulated evidence that is telling. If you want to go through a paper, Johann and Maaneli's is the best one, and it addresses many of the issues you've brought up here.
Heh, I was thinking more about actual studies, but we can go through J and M's paper if you wish. It's been a while since I read it, but I don't recall finding it as good as you present it.

As for the accumulated evidence, that's the point: for the accumulated evidence to be reliable, it must be based on solid foundations. A meta-analysis built on papers with high risks of bias is not going to suddenly become reliable. That is what research has shown in other fields. (Again, you haven't really mentioned the papers I referenced or asked me to dig them up; I'm not sure if you consider them relevant or not.)

If you want to start a thread on the J/M paper, I'll join in. I'll have to brush up on it again.
 
#18
Here's a paper by John Ioannidis discussing the risks of low powered studies:

Power failure: why small sample size undermines the reliability of neuroscience.

Here is the abstract:

A study with low statistical power has a reduced chance of detecting a true effect, but it is less well appreciated that low power also reduces the likelihood that a statistically significant result reflects a true effect. Here, we show that the average statistical power of studies in the neurosciences is very low. The consequences of this include overestimates of effect size and low reproducibility of results. There are also ethical dimensions to this problem, as unreliable research is inefficient and wasteful. Improving reproducibility in neuroscience is a key priority and requires attention to well-established but often ignored methodological principles.
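The abstract's less intuitive claim, that low power also reduces the chance a significant result reflects a true effect, follows directly from Bayes' rule. Here is a minimal sketch of that positive predictive value calculation (the 10% prior on effects being real is an arbitrary illustration, not a figure from the paper):

```python
def ppv(power, alpha=0.05, prior=0.10):
    """Probability that a statistically significant finding is a true effect."""
    true_positives = power * prior          # real effects that reach significance
    false_positives = alpha * (1 - prior)   # null effects significant by chance
    return true_positives / (true_positives + false_positives)

well_powered = ppv(power=0.80)   # -> 0.64
underpowered = ppv(power=0.20)   # -> ~0.31
```

With the same prior and alpha, dropping power from 80% to 20% roughly halves the chance that a given "significant" result is real, which is the point the abstract is making.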
 
#19
9) True, but as I'm sure you're aware, better methodology and better experimental design in Parapsychology haven't led to a decline in results; they have usually not affected the results, or have led to a slight incline.
From the linked article:
According to Dr Watt, many have questioned the legitimacy and necessity of parapsychology research. However, she commented, “History shows that the challenges of conducting parapsychological research can drive methodological advances that have wider scientific benefits.”
I personally think that future research will address the problems and that progress can be made. Immense help could come from a working model of mind. Having a process map would clarify how to test our environments for the range of information-processing effects.
 
#20
I just want to comment that it is not new that p < 0.05 is meaningless on its own in some areas. In fact, you cannot assess much from just one number. This is not a recent discovery; any good statistician knows it. The main problem is that there has been an exponential increase in scientific papers [1], and to me it's clear that many of these authors have only studied "statistics 101". The worst part is not that people are using the wrong statistics (which happens) but that they are interpreting the statistics wrongly (see [2]). The 'crisis' is just people realizing that.

There are many reasons I think parapsychology doesn't suffer from that. The community of parapsychologists is really small, they don't have many grants, and they don't gain prestige from the work. As a physicist, I don't even know the people I cite in my papers. However, I started reading parapsychology papers about a year ago and I feel I already know most of the researchers in the area.

I could go on and cite the many references explaining why I think the Ganzfeld and remote viewing evidence is as solid as any other scientific discovery. However, a lot of people have already done an excellent job of that ([3]-[5]).

A recent paper by Dick Bierman (a 'psi proponent') published last year in PLoS ONE (a good mainstream journal) analyzed the Ganzfeld database taking into consideration possible fraud mechanisms [6]. The paper is VERY technical, but the result can be summarized by the following quote from Dick Bierman [7]:

"In a recent project we attempted to see if we could fit the meta-analytic ganzfeld data to a model that assumed questionable research practices. We could not get a good fit without assuming some psi in the database. But the psi effect size to obtain the best fit was very low and we concluded that, if this model was correct, the power of ganzfeld studies was way too low and that the only solution for this power problem was to use selected populations like musicians."

That said, there's no evidence of any fraud in the Ganzfeld, and the papers ([3]-[5]) clearly show the consistency of the Ganzfeld results. But even if there were a lot of fraud involved, there would still be good enough evidence for the Ganzfeld [6].

[1]http://blogs.discovermagazine.com/neuroskeptic/2012/09/30/science-growing-too-fast/#.WJ1Gi_krJPY
[2]http://www.nature.com/news/scientific-method-statistical-errors-1.14700
[3]http://www.ics.uci.edu/~jutts/JSE1999.pdf
[4]http://emmind.net/openpapers_repos/...d-choice_Remote_Viewing_and_Dream_Studies.pdf
[5]http://www.patriziotressoldi.it/cmssimpled/uploads/Meta_Baptista14.pdf
[6]http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0153049
[7]https://carlossalvarado.wordpress.com/2015/12/02/people-in-parapsychology-xxvi-dick-bierman/
 