I don't have time to slice and quote haha so each number is responding to each point of yours in order!
1) The methods are generally pretty good, and the majority of calculations done by different people, with different methods, have found that the number of unpublished negative results needed to bring the overall results back to null is implausibly large. Johann and Maaneli's paper is best on this subject, but effectively, there are nowhere near enough people doing this research to make the 'file drawer' effect even remotely tenable.
I think we have a different opinion of that paper, but again, I think the whole file drawer issue is a red herring anyway. Do you agree that going forward only preregistered studies should be used? If so, then we're not that far apart! :)
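Side note, just so we're talking about the same calculation: the classic file-drawer estimate is Rosenthal's fail-safe N, i.e. how many unpublished null studies would have to be sitting in drawers to drag a significant combined result back to chance. A minimal sketch in Python, with z-scores I made up for illustration rather than the actual Ganzfeld numbers:

```python
from scipy.stats import norm

def failsafe_n(z_scores, alpha=0.05):
    """Rosenthal's fail-safe N: the number of unpublished studies
    averaging z = 0 needed to pull the Stouffer combined z-score
    below the one-tailed significance cutoff."""
    k = len(z_scores)
    z_crit = norm.ppf(1 - alpha)  # 1.645 for one-tailed p = .05
    return (sum(z_scores) ** 2) / z_crit ** 2 - k

# Ten invented z-scores standing in for a published database.
zs = [1.8, 0.4, 2.1, -0.3, 1.2, 0.9, 1.5, 0.2, 1.1, 0.7]
print(failsafe_n(zs))  # ~24 buried null studies with these toy numbers
```

Whether a number like that is plausible, given how few labs actually run these experiments, is exactly what's being argued over above.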
2) Exactly, the only one of the nine Ganzfeld meta-analyses that didn't find a positive effect was found to be flawed (the one you mentioned); the flaws ran in the negative direction, and it only took removing a few studies to bring it back to significance. Whereas to take the current literature from positive to null you need a huge number of unpublished negative studies. In Wiseman's meta-analysis I think he included a few process-oriented studies instead of proof-oriented ones, which you shouldn't do if you want to find out whether there is an effect or not.
I think you are missing my point about the significance of the nine! The point is that a handful of studies was enough to flip the results from one conclusion to the other. Look at the article that malf posted: speculation is fine, but the best way to test whether something is a problem or not is to perform the experiments with and without the risk present and see if there are changes! That is a much sounder way to proceed.
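To make the sensitivity point concrete, here's a toy Stouffer-style combination (the same invented z-scores as in the sketch above) showing how removing just the few strongest studies can flip a combined result from significant to not:

```python
import numpy as np
from scipy.stats import norm

def stouffer_p(z_scores):
    """One-tailed combined p-value via Stouffer's method."""
    z = np.sum(z_scores) / np.sqrt(len(z_scores))
    return norm.sf(z)

zs = np.array([1.8, 0.4, 2.1, -0.3, 1.2, 0.9, 1.5, 0.2, 1.1, 0.7])
print(stouffer_p(zs))                # ~0.001: significant as a set
print(stouffer_p(np.sort(zs)[:-3]))  # drop the 3 largest: ~0.056, no longer significant
```

That fragility is the worry: a conclusion that hinges on a handful of studies deserves extra scrutiny of exactly those studies.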
But again, I'm not too concerned with the file drawer anyway! The Ganzfeld has bigger risks of bias that need to be addressed going forward! The fact that virtually the entire database is underpowered for the effect sizes reported is a big one. Many of the included studies are incredibly small, with odd trial counts that were not declared in advance.
3) What models project that a smaller number of unpublished results would return it to the null? Can you point me to them? The only reason these old studies keep being reanalysed and debated is that some people won't accept the evidence for psi. I mean, Watt looked into the unpublished studies parapsychologists had, and a significant proportion of them were positive, so the unpublished studies that were found did not support the 'file drawer' effect.
Oh man, I'd have to dig that up. I may be up for that but honestly not today. Again, I don't think the file drawer effect is a big deal here and it is really a distraction from more serious issues.
With all due respect, this whole "they just won't accept the evidence for psi" line is not an argument. There are very legitimate issues present in the work, based on the best we know about research practices, and lines like that have the effect of discouraging people from looking at them. It's peer pressure: trying to make someone feel closed-minded for looking closely at the methodology. When you look at the meta-research that scientists like the Cochrane group, John Ioannidis and others are doing, I think it is worth taking a second look rather than going back to old debates.
4) Pre-registering would be an improvement - if all studies were pre-registered and positive results continued to be found, would you change your mind?
I'd like to think that if the studies were done with low risk of bias using methodologies that allowed a confident result that it would convince me (there's more going on than just file drawer though, but I assume we're simplifying for the sake of discussion). I don't think I have much control over my beliefs: the best I can do is force myself to review the information and then reflect on what my beliefs are!
5) I think parapsychology has gone way past the 'preliminary studies' stage in terms of proof; in terms of mechanism, though, you would be correct.
Talking in generalities isn't that helpful. Maybe pick an area and we can look at it more closely. Or pick a paper that we can go through.
6) Low power/sample size isn't a 'red flag'; it just makes an effect harder to find if the effect isn't that large. Increase the power and sample size, and replication rates (and potentially effect sizes) will increase. Internal selective reporting is a potential issue in all of science. I think a lot of the studies have pretty effectively controlled for what at least appears to be psi, especially when you read some of the Ganzfeld sessions, for example.
I'm sorry, but underpowered studies are a huge risk of bias. I posted a study a while back that compared meta-analyses of underpowered vs. sufficiently powered studies; IIRC they found that even a single sufficiently powered study was far more reliable than an entire slew of underpowered ones (I can dig it up later if you like). As I understand it, this is even more of an issue when dealing with small effect sizes.
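Since power keeps coming up, here's roughly what the arithmetic looks like for a Ganzfeld-style design (four choices, so a 25% chance hit rate). The 32% 'true' hit rate and the study sizes are illustrative assumptions on my part, not numbers pulled from any particular paper:

```python
from scipy.stats import binom

def binomial_power(n_trials, p_chance=0.25, p_true=0.32, alpha=0.05):
    """Power of a one-sided exact binomial test: the probability that
    a study of n_trials reaches significance if the true hit rate is
    p_true rather than p_chance."""
    # Smallest hit count k with P(X >= k | chance) <= alpha.
    k = int(binom.isf(alpha, n_trials, p_chance)) + 1
    # Probability of reaching that count when the true rate is p_true.
    return binom.sf(k - 1, n_trials, p_true)

print(binomial_power(30))   # ~0.13: a 30-trial study usually misses a real effect
print(binomial_power(300))  # ~0.8: pooling to ~300 trials gets power into a usable range
```

That's the sense in which small studies are a risk and not just an inconvenience: run lots of 30-trial studies and most will 'fail' even when the effect is real, which is exactly the environment where selective reporting does the most damage.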
This is what I mean by preliminary. Look at the crisis in psychology: you had entire lines of research, hundreds of small, underpowered studies, devoted to effects we now know just weren't there. Take a close look at the replication crisis there and tell me you don't see many of the same issues. This isn't surprising, since parapsychology has taken its lead from psychology.
7) I agree that we can and should continue to improve parapsychology research; however, by normal scientific standards psi has been established. The only reason the debate is still going on is because of what we're studying; if it were anything else, it would have been accepted long ago.
What normal scientific standard? Are you talking about Wiseman's quote? He's a psychologist! The entire field is in crisis! Read J.E. Kennedy's paper on it. The question is: based on just looking at the methodology, what risk of bias do we see? Have you taken a look at the Cochrane handbook on meta-analysis? It's from the field of medicine, so it needs some adjustment to apply here, but read the sections on bias in particular.
8) These issues do exist across science; however, due to the higher scrutiny, parapsychology research is usually of better quality than mainstream psychology research, and it matches or beats psychology's replication rates with far less in the way of resources, funding and manpower.
With all due respect, that's a talking point. I agree parapsychology has done a lot with a little; it is plagued by under-funding. But that's also why the studies are largely preliminary: it takes resources to construct higher-quality studies. That's why the sample sizes are often so small: they don't have the budget for larger ones!
And forget about this whole comparison thing. Studies must be evaluated on their own merits; the fact that other fields of research have similar risks of bias doesn't make this work any more reliable! The analysis has to be done individually on every study!
9) True, but as I'm sure you're aware, better methodology and better experimental design in parapsychology haven't led to a decline in results; they have usually either left the results unaffected or led to a slight incline.
Again, with all due respect, this is vague, talking-point stuff. I hear it repeated all over the place, but when I sit down and look at individual studies I see all sorts of problems (and I'm just a layperson!). Maybe pick a study you particularly like and we can go through it.