
Scientists (such as those at Cochrane and METRICS) have made a lot of progress in developing the field of meta-analysis, which allows us to take a broader look at which methods are being used and which produce the best results. In that process we're also going to learn a lot about what produces less reliable results, and that knowledge can be used to guide future research.

Arouet, thanks for the references to Cochrane and METRICS. They are doing science-healing.

Meta-research is an evolving scientific discipline that aims to evaluate and improve research practices. It includes thematic areas of methods, reporting, reproducibility, evaluation, and incentives (how to do, report, verify, correct, and reward science). Much work is already done in this growing field, but efforts to-date are fragmented. We provide a map of ongoing efforts and discuss plans for connecting the multiple meta-research efforts across science worldwide. - METRICS
 
I think I'm just confused by the idea of unrealistic expectations. I think the drug companies upset about lack of replicable data had pretty reasonable expectations.

Are you sure about that? When you look at the papers that Sheldrake cites in that article you see some different themes. The first article notes that many researchers in pharmaceutical companies are well aware of the need to check that the initial studies hold up. The article describes the in-house attempts of the companies to conduct their own replications. The frustration they express is that disappointingly few are successfully replicated. They do this precisely to avoid wasting too much money on an unproductive line of research. The article notes that this is pretty well known.

The second link describes research where the investigators did not bother to validate the initial studies. The frustration of these teams was much worse, because they ended up wasting a lot of time and money. The article suggests not doing that! (among other advice).

Note in that second article the authors say that this shouldn't be interpreted as the system being fundamentally broken (I'm paraphrasing). They note there is a lot of really good research out there. The rest of the article is geared, similarly to the Ioannidis paper, to providing advice on best practices. Note: the authors took pains to consult some of the original authors and noted they were perfectly competent and serious researchers.

Same with people who would've thought that a major theory of psychology was built on some kind of solid foundation? From the article Chuck posted:

Another good example of the kinds of things I've long been posting about. When you read the paper that this Slate article is referring to, you see that it's not really describing an entire field based on a false premise that no one tried to replicate. Rather, that entire field was basically replications in one form or another! The problem, however, is that what you had were a lot of variations on a theme, dominated by small-scale, underpowered studies. So even when meta-analysis was done, it was based on small, underpowered studies (along with some other issues involving topics we've often discussed, such as selection biases), and you ended up with meta-analytic results that likely showed effects that weren't there. (Note, this is in line with a study I posted awhile back that found that one or two fully powered studies are more reliable than entire meta-analyses filled with underpowered studies.)
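
To illustrate what I mean (this is just a toy simulation of my own, not anything from the papers themselves), here's a rough sketch of how pooling lots of small, underpowered, selectively published studies can manufacture an effect that a single fully powered study doesn't show. All the numbers and names are made up for illustration:

[CODE]
# Toy simulation (my own illustration, not from any of the cited papers):
# the true effect is zero, but many small studies are run and only the ones
# that come out "significant" in the hoped-for direction get published.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.0          # the real effect size (there is none)
SMALL_N, BIG_N = 20, 2000  # per-group sample sizes

def one_study(n):
    """Run one two-group study; return the estimated effect and its standard error."""
    treated = rng.normal(TRUE_EFFECT, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
    return diff, se

# 200 small studies, but only those significant in the predicted direction
# get "published" (a crude stand-in for selection/publication bias).
published = []
for _ in range(200):
    diff, se = one_study(SMALL_N)
    p = 2 * stats.norm.sf(abs(diff / se))
    if p < 0.05 and diff > 0:
        published.append((diff, se))

if published:
    # Simple fixed-effect meta-analysis: inverse-variance weighted average.
    weights = np.array([1 / se**2 for _, se in published])
    effects = np.array([diff for diff, _ in published])
    pooled = np.sum(weights * effects) / np.sum(weights)
    print(f"small studies published: {len(published)} of 200")
    print(f"pooled 'effect' from the meta-analysis: {pooled:+.2f}")

big_diff, _ = one_study(BIG_N)  # one fully powered study
print(f"estimate from one large study: {big_diff:+.2f}  (true effect is 0)")
[/CODE]

The point isn't the exact numbers, just that garbage in really does give garbage out: a meta-analysis of the surviving small studies can look like a solid effect while a single big study sits near zero.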

When these guys did their big, fully powered study, the effect all but disappeared. Note, the authors point out potential issues with their own paper too, and acknowledge that they may be off as well, but suggest that the issue is worth pursuing.

The authors aren't chagrined about the system as a whole either and provide advice.

What you are seeing, I suggest, is not evidence of fundamental flaws but the emergence of a better understanding of how to produce reliable results.

These findings are important and should play a big role in the future allocation of funds and publishing decisions. But I'm not sure we can blame those who came before, or blast them as boobs. It takes time to figure this stuff out. These results are not always intuitive. The research had to be done first, and meta-research was not always easy. It's the advent of the Internet, I'd guess, that has really allowed the field to burst in the last decade or so. The capacity for this scale of study would have been extremely difficult to achieve earlier.

Note that there are a lot of parallels we can see between the history of these ego depletion experiments and the history of parapsychology. It's worthwhile reading closely for many on this forum, and I think it suggests certain questions to ask in this field as well.

Gotta say, it's nice that others are starting to draw attention to these studies on this forum. I've been trying to generate discussion on them for years! So thank you!


You can find all sorts of similar cases I'm sure. Remember, we both agree there are flaws and abuses. But we shouldn't evaluate a system as a whole based solely on its failures. There are many other factors. And there is no system that will be failure free. Evaluating the system requires a much broader view.

It's the "what's next" that's disturbing. There are already a lot of revealed issues, but without some major efforts on replicating the various findings in every field it's hard to know how far the rot extends?

If you read those papers I think you'll see some pretty good suggestions for ways to move forward.

The rest of your post seems to be more of the us-vs-them skeptic/proponent stuff - which tends to be a discussion killer. This has been a great discussion so far and it would be a shame to kill it, so I'm not going to address those points.
 
Do you agree with David, that Wiseman changed the protocol of Sheldrake's original experiment?
Sorry, I thought you were familiar with the paper. Briefly, he made adjustments to how the data was analysed. And yes, he explained them in detail in the paper. What was overstated (not in the paper itself but in comments made subsequently) is that Wiseman's experiment should be considered to have "debunked" Sheldrake's.

I've been re-familiarizing myself with this material and it appears my memory was a bit off. From what I can determine, it is misleading to suggest that Wiseman altered Sheldrake's protocols - if anything it was the opposite. But really they were working simultaneously.

While Sheldrake had started working with Jaytee before Wiseman, he didn't publish his protocol or calculations until after Wiseman. At the same time, Sheldrake reanalyzed Wiseman's data to follow his protocol. Wiseman appears to have been in regular consultation with Pam Smart throughout his experiment, and she appears to have had a direct impact on his protocol (or at least, Wiseman appears to have developed his protocols based on discussions with PS). While Wiseman states in his paper that he consulted both RS and PS, the substantive comments involve the discussions with PS only. Sheldrake wrote in one of his papers that he was not consulted by Wiseman - it seems that if there was consultation with RS it was probably not extensive.

That said, as it is presented, Wiseman got involved after an appeal by Sheldrake, in his book Seven Experiments That Could Change the World, that people should go out and test animals themselves. I understand that Sheldrake described a possible protocol in that book, but I don't have it, so I can't say whether that might accurately be described as a protocol that Wiseman altered. But I'm not sure if that counts.

In any event, as with so much of these discussions, the situation is more nuanced than some suggest.

David, if you think I've misrepresented anything, or have additional information, please let me know. I can post my sources later if anyone is interested.
 
The point is, in order to make things better we have to properly understand what's going on. Doing so helps us stop seeing each other as enemies, and helps us realize that competition of ideas is a good thing. We want a diversity of opinion in society. We want people to have to compete for their ideas to prevail. It's not always going to be smooth sailing, and there will be winners and losers, but overall the system seems to work.

This is how we repair the bad behaviour. It can help us stop thinking of the other as evil idiots, stop obsessing over motives, and actually focus on the substance of these matters.

Thank you for speaking up on this. I could not agree with you more.
 
Are you sure about that? When you look at the papers that Sheldrake cites in that article you see some different themes. The first article notes that many researchers in pharmaceutical companies are well aware of the need to check that the initial studies hold up. The article describes the in-house attempts of the companies to conduct their own replications. The frustration they express is that disappointingly few are successfully replicated. They do this precisely to avoid wasting too much money on an unproductive line of research.

The article notes that this is pretty well known.

Looking at the first paper:

Believe it or not: how much can we rely on published data on potential drug targets?

To mitigate some of the risks of such investments ultimately being wasted, most pharmaceutical companies run in-house target validation programmes. However, validation projects that were started in our company based on exciting published data have often resulted in disillusionment when key data could not be reproduced. Talking to scientists, both in academia and in industry, there seems to be a general impression that many results that are published are hard to reproduce. However, there is an imbalance between this apparently widespread impression and its public recognition (for example, see Refs 2, 3), and the surprisingly few scientific publications dealing with this topic. Indeed, to our knowledge, so far there has been no published in-depth, systematic analysis that compares reproduced results with published results for wet-lab experiments related to target identification and validation.

This seems to indicate what I noted - that it's hard to gauge the extent of the failures in science-as-practiced without actually going back and re-checking results?

There's a 2012 article that discusses this issue with respect to cancer treatments:

During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 "landmark" publications -- papers in top journals, from reputable labs -- for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.

Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.

The failure to win "the war on cancer" has been blamed on many factors, from the use of mouse models that are irrelevant to human cancers to risk-averse funding agencies. But recently a new culprit has emerged: too many basic scientific discoveries, done in animals or cells growing in lab dishes and meant to show the way to a new drug, are wrong.

The public doesn't even get to know which studies failed due to an NDA:

Bayer and Amgen found that the prestige of a journal was no guarantee a paper would be solid. "The scientific community assumes that the claims in a preclinical study can be taken at face value," Begley and Lee Ellis of MD Anderson Cancer Center wrote in Nature. It assumes, too, that "the main message of the paper can be relied on ... Unfortunately, this is not always the case."

When the Amgen replication team of about 100 scientists could not confirm reported results, they contacted the authors. Those who cooperated discussed what might account for the inability of Amgen to confirm the results. Some let Amgen borrow antibodies and other materials used in the original study or even repeat experiments under the original authors' direction.

Some authors required the Amgen scientists sign a confidentiality agreement barring them from disclosing data at odds with the original findings. "The world will never know" which 47 studies -- many of them highly cited -- are apparently wrong, Begley said.

Now the problem may or may not be extensive, though the article notes this issue isn't limited to cancer research:

The problem goes beyond cancer.

On Tuesday, a committee of the National Academy of Sciences heard testimony that the number of scientific papers that had to be retracted increased more than tenfold over the last decade; the number of journal articles published rose only 44 percent.

Ferric Fang of the University of Washington, speaking to the panel, said he blamed a hypercompetitive academic environment that fosters poor science and even fraud, as too many researchers compete for diminishing funding.

"The surest ticket to getting a grant or job is getting published in a high-profile journal," said Fang. "This is an unhealthy belief that can lead a scientist to engage in sensationalism and sometimes even dishonest behavior."

The academic reward system discourages efforts to ensure a finding was not a fluke. Nor is there an incentive to verify someone else's discovery. As recently as the late 1990s, most potential cancer-drug targets were backed by 100 to 200 publications. Now each may have fewer than half a dozen.

"If you can write it up and get it published you're not even thinking of reproducibility," said Ken Kaitin, director of the Tufts Center for the Study of Drug Development. "You make an observation and move on. There is no incentive to find out it was wrong."

It seems to me this is a cause for concern, and requires a massive review?

The second link describes research where the investigators did not bother to validate the initial studies. The frustration of these teams was much worse, because they ended up wasting a lot of time and money. The article suggests not doing that! (among other advice).

Note in that second article the authors say that this shouldn't be interpreted as the system being fundamentally broken (I'm paraphrasing). They note there is a lot of really good research out there. The rest of the article is geared, similarly to the Ioannidis paper, to providing advice on best practices. Note: the authors took pains to consult some of the original authors and noted they were perfectly competent and serious researchers.

To me it seems there's a difference between whether the system - by which I take to mean the methodology - is fundamentally broken and whether there are a lot of biased/false results out there that were allowed to get through.

Looking at the second link:

Drug development: Raise standards for preclinical cancer research


Although hundreds of thousands of research papers are published annually, too few clinical successes have been produced given the public investment of significant financial resources. We need a system that will facilitate a transparent discovery process that frequently and consistently leads to significant patient benefit.

This at the least indicates a problem, though I'll say again I don't know how one determines how bad a problem is without a massive public review.

I'd suggest a website where the public can see percentage of replication failures in each field as well as free referencing of the papers.

Another good example of the kinds of things I've long been posting about. When you read the paper that this Slate article is referring to, you see that it's not really describing an entire field based on a false premise that no one tried to replicate. Rather, that entire field was basically replications in one form or another! The problem, however, is that what you had were a lot of variations on a theme, dominated by small-scale, underpowered studies. So even when meta-analysis was done, it was based on small, underpowered studies (along with some other issues involving topics we've often discussed, such as selection biases), and you ended up with meta-analytic results that likely showed effects that weren't there. (Note, this is in line with a study I posted awhile back that found that one or two fully powered studies are more reliable than entire meta-analyses filled with underpowered studies.)

When these guys did their big, fully powered study, the effect all but disappeared. Note, the authors point out potential issues with their own paper too, and acknowledge that they may be off as well, but suggest that the issue is worth pursuing.

The Slate Article:

Everything is Crumbling


“At some point we have to start over and say, This is Year One,” says Inzlicht, referring not just to the sum total of ego depletion research, but to how he sometimes feels about the entire field of social psychology.*

All the old methods are in doubt. Even meta-analyses, which once were thought to yield a gold standard for evaluating bodies of research now seem somewhat worthless. “Meta-analyses are fucked,” Inzlicht warned me. If you analyze 200 lousy studies, you’ll get a lousy answer in the end. It’s garbage in, garbage out.

This seems to support my idea of a massive re-examination?

What you are seeing, I suggest, is not evidence of fundamental flaws but the emergence of a better understanding of how to produce reliable results.

Do you mean fundamental flaws in the research design and methodology or flaws in science-as-practiced? Again, how does one know this without a massive review of studies in tandem with replication attempts?

Gotta say, it's nice that others are starting to draw attention to these studies on this forum. I've been trying to generate discussion on them for years! So thank you!

You're welcome. But I'd reiterate that without mathematical training there's little discussion to be had beyond general trends at best.

I've never really understood the idea that a layperson forum is going to advance much understanding, which is why I suspect most people who want to understand the data from parapsychology do so in private but come here to discuss implications. After all, one would have to begin by reading a statistics book or two cover-to-cover.

You can find all sorts of similar cases I'm sure. Remember, we both agree there are flaws and abuses. But we shouldn't evaluate a system as a whole based solely on its failures. There are many other factors. And there is no system that will be failure free. Evaluating the system requires a much broader view.

The question is how do we evaluate a system without knowing the extent to which it has failed.

The rest of your post seems to be more of the us-vs-them skeptic/proponent stuff - which tends to be a discussion killer. This has been a great discussion so far and it would be a shame to kill it, so I'm not going to address those points.

Actually, without examining bias among researchers and in academia, how does one hope to achieve a genuine science?

Without acknowledging the failures of the self-appointed guardians of skepticism how does one hope to make progress?

Even leaving parapsychology out of consideration, there's a possibility that bias among scientists prevents progress:

Does Science Advance One Funeral at a Time?

We study the extent to which eminent scientists shape the vitality of their fields by examining entry rates into the fields of 452 academic life scientists who pass away while at the peak of their scientific abilities. Key to our analyses is a novel way to delineate boundaries around scientific fields by appealing solely to intellectual linkages between scientists and their publications, rather than collaboration or co-citation patterns. Consistent with previous research, the flow of articles by collaborators into affected fields decreases precipitously after the death of a star scientist (relative to control fields). In contrast, we find that the flow of articles by non-collaborators increases by 8% on average. These additional contributions are disproportionately likely to be highly cited. They are also more likely to be authored by scientists who were not previously active in the deceased superstar’s field. Overall, these results suggest that outsiders are reluctant to challenge leadership within a field when the star is alive and that a number of barriers may constrain entry even after she is gone. Intellectual, social, and resource barriers all impede entry, with outsiders only entering subfields that offer a less hostile landscape for the support and acceptance of “foreign” ideas.

There's a lot more I could post about corruption and materialist bias, but if you're not interested that's fine.
 
I've been re-familiarizing myself with this material and it appears my memory was a bit off. From what I can determine, it is misleading to suggest that Wiseman altered Sheldrake's protocols - if anything it was the opposite. But really they were working simultaneously.

While Sheldrake had started working with Jaytee before Wiseman, he didn't publish his protocol or calculations until after Wiseman. At the same time, Sheldrake reanalyzed Wiseman's data to follow his protocol. Wiseman appears to have been in regular consultation with Pam Smart throughout his experiment, and she appears to have had a direct impact on his protocol (or at least, Wiseman appears to have developed his protocols based on discussions with PS). While Wiseman states in his paper that he consulted both RS and PS, the substantive comments involve the discussions with PS only. Sheldrake wrote in one of his papers that he was not consulted by Wiseman - it seems that if there was consultation with RS it was probably not extensive.

That said, as it is presented, Wiseman got involved after an appeal by Sheldrake, in his book Seven Experiments That Could Change the World, that people should go out and test animals themselves. I understand that Sheldrake described a possible protocol in that book, but I don't have it, so I can't say whether that might accurately be described as a protocol that Wiseman altered. But I'm not sure if that counts.

In any event, as with so much of these discussions, the situation is more nuanced than some suggest.

David, if you think I've misrepresented anything, or have additional information, please let me know. I can post my sources later if anyone is interested.

You could just use a threshold-triggered criterion... so that if Jaytee's time at the window (without obvious cause) exceeded 1/3 of any rolling 10-minute period, you could say with some certainty that Pam had either already set off, or predict that she would set off within 10 minutes.

That criterion sort of appeals to me in this case, as it's possibly the sort of behavioral response that might be observed due to a stochastic-resonance-like mechanism.
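
Just to be concrete about what I mean, here's a rough sketch of the criterion (nothing to do with either paper's actual analysis; the data format and all names are my own assumptions, treating the record as a second-by-second flag for "at the window with no obvious cause"):

[CODE]
# Rough sketch of the threshold-triggered criterion described above.
# Input: one 0/1 value per second, where 1 means Jaytee was at the window
# with no obvious external cause during that second. Purely illustrative.
WINDOW = 600             # rolling window length: 10 minutes, in seconds
THRESHOLD = WINDOW / 3   # trigger once unexplained window time exceeds 1/3 of it

def trigger_times(unexplained_at_window):
    """Return every second at which the rolling 10-minute criterion fires."""
    triggers = []
    running = 0
    for t, flag in enumerate(unexplained_at_window):
        running += flag
        if t >= WINDOW:
            running -= unexplained_at_window[t - WINDOW]  # drop the second leaving the window
        if running > THRESHOLD:
            triggers.append(t)
    return triggers

# Toy example: 15 minutes of data in which Jaytee sits at the window,
# with no obvious cause, from minute 6 to minute 10.
data = [0] * (6 * 60) + [1] * (4 * 60) + [0] * (5 * 60)
hits = trigger_times(data)
if hits:
    print(f"criterion first met at t = {hits[0]} s: "
          "say Pam has already set off, or will within 10 minutes")
[/CODE]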
 
You could just use a threshold-triggered criterion... so that if Jaytee's time at the window (without obvious cause) exceeded 1/3 of any rolling 10-minute period, you could say with some certainty that Pam had either already set off, or predict that she would set off within 10 minutes.

That criterion sort of appeals to me in this case, as it's possibly the sort of behavioral response that might be observed due to a stochastic-resonance-like mechanism.

That's the idea that Wiseman was getting at as well with his protocol: trying to determine if Jaytee's going to the window was a predictor of PS coming home. That's the observation the family made that prompted the research in the first place. I don't doubt that Wiseman's protocols could be improved upon as well - but it is certainly inaccurate to call them arbitrary - as Sheldrake did.

I get why. Wiseman's conclusions in his paper were circumscribed and expressed reasonable caution and recommendations. It was the public overstatements about debunking that was the problem. They clearly pissed Sheldrake off and no doubt destroyed any prospect of further productive collaboration. Who knows what they could have accomplished had they continued to work together.
 
That's the idea that Wiseman was getting at as well with his protocol: trying to determine if Jaytee's going to the window was a predictor of PS coming home. That's the observation the family made that prompted the research in the first place. I don't doubt that Wiseman's protocols could be improved upon as well - but it is certainly inaccurate to call them arbitrary - as Sheldrake did.

I get why. Wiseman's conclusions in his paper were circumscribed and expressed reasonable caution and recommendations. It was the public overstatements about debunking that was the problem. They clearly pissed Sheldrake off and no doubt destroyed any prospect of further productive collaboration. Who knows what they could have accomplished had they continued to work together.
Are you talking about public overstatements of debunking by Wiseman or others?
 
Are you talking about public overstatements of debunking by Wiseman or others?

I can't remember the specifics, but I do recall that when I looked into this issue awhile back I saw at least one comment by Wiseman that I thought was overstating things. Sheldrake mentions a specific reference in his paper to a comment at some conference, but I'm not sure of the exact quote.

I think it's fair to say that people close to Wiseman have made quite strong comments that way overstated the conclusions, and I haven't seen anywhere that Wiseman says "hold on, my protocols might lead to that conclusion with more study, but not based on the small sample in my experiment". Even if the strongest statements were by others, I don't think Wiseman can escape blame on this. (These aren't arm's-length media reports; these are Wiseman's close colleagues in the skeptical movement.)

The upshot is, the attention moves from the reasoned comments in the paper to the more inflammatory off-the-cuff comments, and productive discourse slams shut.
 
That's the idea that Wiseman was getting at as well with his protocol: trying to determine if Jaytee's going to the window was a predictor of PS coming home. That's the observation the family made that prompted the research in the first place. I don't doubt that Wiseman's protocols could be improved upon as well - but it is certainly inaccurate to call them arbitrary - as Sheldrake did.

I get why. Wiseman's conclusions in his paper were circumscribed and expressed reasonable caution and recommendations. It was the public overstatements about debunking that was the problem. They clearly pissed Sheldrake off and no doubt destroyed any prospect of further productive collaboration. Who knows what they could have accomplished had they continued to work together.

None of that bothers me, I'm only interested in the results of the experiments with Jaytee - and these do appear to be significant.

It's Wiseman's replication in every trial that nails it for me.
 
None of that bothers me, I'm only interested in the results of the experiments with Jaytee - and these do appear to be significant.

It's Wiseman's replication in every trial that nails it for me.

I'm curious as to the criteria you are using. You seem to be saying that since the patterns roughly match that this seals the deal on Sheldrake's hypothesis. But Wiseman's point seems to be that perhaps it does not. Here is the summary I posted on the old forum:

Arouet;n98402 said:
I was just re-reading the Wiseman paper yesterday. Alex, you've often said that Wiseman's data matches Sheldrake's, but when I look at it I see the following:


Trial 1: PS starts returning home at 21:00
Table 1: 20:58: car pulls up, dog walks past, car leaves: 221s
21:04: car pulls up: 394s
21:15: no obvious reason: 15s
21:16: car passes window: 76s
21:17: people walk by: 10s
21:20: PS and MS return

Trial 2: PS starts returning home at 14:18
Table 2: 14:16: fish van outside window: 169s
14:20: father returns from fish van
14:24: woman walks past, car pulls away from window: 205s
14:29: PS returns

Trial 3: PS starts returning home at 21:39
This one is a better result: the dog stays at the window for no obvious reason.

Trial 4: PS starts returning home at 10:45
Table 4: 10:45: looks promising but Jaytee only stays for 10s, then goes and pukes
10:55: back for 113s, but that's when the dustbin men arrive in the street
10:57: dustbin men outside house
11:11: PS returns


So Alex, with the exception perhaps of trial three, there are certainly a lot of distractions outside the window at the time PS starts heading back. Unless I'm reading this wrong, I don't see how this strongly leads one to the conclusion that the dog is psychic!

Wiseman's results may roughly match results in Sheldrake's trials but can we really point to these 4 trials as supportive of Sheldrake's hypothesis? If you don't consider Wiseman's methodology to be valid (which I accept is entirely possible), perhaps you could elaborate?
 
I'm curious as to the criteria you are using. You seem to be saying that since the patterns roughly match that this seals the deal on Sheldrake's hypothesis. But Wiseman's point seems to be that perhaps it does not. Here is the summary I posted on the old forum:



Wiseman's results may roughly match results in Sheldrake's trials but can we really point to these 4 trials as supportive of Sheldrake's hypothesis? If you don't consider Wiseman's methodology to be valid (which I accept is entirely possible), perhaps you could elaborate?

I don't know what '...seals the deal...' means, but the results of these experiments with Jaytee do appear to be significant.

Both Sheldrake's and Wiseman's experiments met the threshold-triggered criterion I suggested earlier... so that if Jaytee's time at the window (without obvious cause) exceeded 1/3 of any rolling 10-minute period, you could say with some certainty that Pam had either already set off, or predict that she would set off within 10 minutes.

Wiseman replicated Sheldrake's results as far as my criterion is concerned.
 
I don't know what '...seals the deal...' means, but the results of these experiments with Jaytee do appear to be significant.

Both Sheldrake's and Wiseman's experiments met the threshold-triggered criterion I suggested earlier... so that if Jaytee's time at the window (without obvious cause) exceeded 1/3 of any rolling 10-minute period, you could say with some certainty that Pam had either already set off, or predict that she would set off within 10 minutes.

Wiseman replicated Sheldrake's results as far as my criterion is concerned.

How do you interpret Wiseman's data to show Jaytee at the window for more than 3.33 minutes without an obvious cause, with PS coming home within 10 minutes? In almost all cases PS's coming home also coincides with there being other obvious causes at the window (at least looking at the summary I posted above).
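
Just to spell out the arithmetic, here's a rough, standalone sketch applying that criterion to the Trial 1 numbers from my summary above. How each visit is coded (and which ones count as having an obvious cause) is simply my reading of that summary, so treat it as illustrative only:

[CODE]
# Rough check of Wiseman's Trial 1 (as summarized above) against the
# "more than 1/3 of any rolling 10-minute period" criterion.
# The coding of each visit is my own reading of the summary, not anyone's analysis.
def secs(hhmm):
    h, m = hhmm.split(":")
    return int(h) * 3600 + int(m) * 60

# (start time, duration in seconds, had an obvious external cause?)
trial1_visits = [
    ("20:58", 221, True),   # car pulls up, dog walks past, car leaves
    ("21:04", 394, True),   # car pulls up
    ("21:15", 15,  False),  # no obvious reason
    ("21:16", 76,  True),   # car passes window
    ("21:17", 10,  True),   # people walk by
]

start, end = secs("20:50"), secs("21:20")   # an assumed observation span ending when PS returns
unexplained = [0] * (end - start)           # 1 = at window with no obvious cause
for t0, duration, has_cause in trial1_visits:
    if not has_cause:
        for s in range(secs(t0) - start, secs(t0) - start + duration):
            unexplained[s] = 1

WINDOW, THRESHOLD = 600, 200                # 10 minutes, and 1/3 of it, in seconds
worst = max(sum(unexplained[i:i + WINDOW]) for i in range(len(unexplained) - WINDOW + 1))
print(f"most unexplained window time in any 10 minutes: {worst} s "
      f"(the criterion needs more than {THRESHOLD} s)")
[/CODE]

On those numbers the unexplained time never gets anywhere near the threshold, which is exactly why I'm asking how you get there.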
 
How do you interpret Wiseman's data to show Jaytee at the window for more than 3.33 minutes without an obvious cause, with PS coming home within 10 minutes? In almost all cases PS's coming home also coincides with there being other obvious causes at the window (at least looking at the summary I posted above).

I've said that the results of these experiments with Jaytee do appear to be significant.
 
I've said that the results of these experiments with Jaytee do appear to be significant.

And I'm asking, in light of the data that Wiseman produced (and that I've summarized) how you reach that conclusion based on the criteria you suggested.
 
From the data. How else would I reach it?

And I'm trying to understand how you did so. When I look at the data (as I explained above in detail) I don't see the same pattern you apparently see and I cannot figure out how you got there. I was hoping you could elaborate, explain your calculations, maybe even post your full analysis if you have it handy.
 
And I'm trying to understand how you did so. When I look at the data (as I explained above in detail) I don't see the same pattern you apparently see and I cannot figure out how you got there. I was hoping you could elaborate, explain your calculations, maybe even post your full analysis if you have it handy.

http://www.sheldrake.org/files/pdfs/papers/SPR_Vol63.pdf (Fig 1.), which meets the threshold-triggered criterion I suggested earlier.
 
http://www.sheldrake.org/files/pdfs/papers/SPR_Vol63.pdf (Fig 1.), which meets the threshold-triggered criterion I suggested earlier.

I thought you must have been talking about an analysis you had done yourself, because that analysis does not, as far as I can tell, show what you describe. Sheldrake's graphs, as far as I know, do not plot only those times where there was no other apparent reason for Jaytee to have gone to the window. As I demonstrated above, at least in Wiseman's four trials, there were all sorts of other apparent reasons for Jaytee to have gone to the window, so it doesn't meet your criterion. Though if you see it differently, maybe you could explain?
 