Critiques of Science as Currently Practiced

  • Thread starter Sciborg_S_Patel
Just going to paste some articles I've read in this thread:

Why Most Published Research Findings Are False


There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
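The abstract's framework can be made concrete with a short sketch. In Ioannidis's formulation, if R is the pre-study odds that a probed relationship is true, α the significance threshold, and 1 − β the study power, the post-study probability that a claimed finding is true (the positive predictive value) is PPV = (1 − β)R / (R + α − βR). The sketch below uses that formula, ignoring the bias and multiple-team terms the paper also models; the illustrative numbers are hypothetical:

```python
# Positive predictive value of a "significant" finding, in the simplest
# (bias-free) version of the framework: R is the pre-study odds that a
# probed relationship is true, alpha the significance threshold, and
# power = 1 - beta the study's power.

def ppv(R, alpha=0.05, power=0.8):
    """Probability that a claimed 'significant' finding reflects a true relationship."""
    beta = 1.0 - power
    return (power * R) / (R + alpha - beta * R)

# A well-powered study probing plausible hypotheses (1 true per 2 probed):
print(round(ppv(R=0.5), 3))               # -> 0.889

# An underpowered, exploratory field (1 true per 100 probed, 20% power):
print(round(ppv(R=0.01, power=0.2), 3))   # -> 0.038: most claims are false
```

The second case illustrates the abstract's central point: with long pre-study odds and low power, a statistically significant result is far more likely to be false than true.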

=-=-=

Bad Science Muckrakers Question the Big Science Status Quo

To make matters worse, private research dollars are being choked off by ill-conceived regulations, making researchers even more dependent on government grants, as Dr. Thomas Stossel at Harvard Medical School points out.

Stossel calls overly restrictive conflict of interest regulations “a damaging solution in search of a problem.” A self-described “typical academic socialist, totally living on grants for the first third of my career,” Stossel says his eyes were opened in 1987, when he was asked to serve on the scientific advisory board of Biogen (now Biogen IDEC), a fledgling biotech startup that went on to become a tremendous success. “I realized how fundamentally honest business people are compared to my academic colleagues, who’d run their grandmothers over for recognition.”

=-=-=

2014: Let the Light Shine In: Two big recent scientific results are looking shaky—and it is open peer review on the internet that has been doing the shaking

SCIENTISTS make much of the fact that their work is scrutinised anonymously by some of their peers before it is published. This “peer review” is supposed to spot mistakes and thus keep the whole process honest. The peers in question, though, are necessarily few in number, are busy with their own work, are expected to act unpaid—and are often the rivals of those whose work they are scrutinising. And so, by a mixture of deliberation and technological pressure, the system is starting to change. The internet means anyone can appoint himself a peer and criticise work that has entered the public domain. And two recent incidents have shown how valuable this can be.

The second claim came from cosmology. On March 17th researchers from the Harvard-Smithsonian Centre for Astrophysics, led by John Kovac, held a press conference at which they announced that they had discovered interesting patterns in the cosmic microwave background, a type of weak radiation left over from the universe’s earliest moments. They said they had spotted the signatures of primordial gravitational waves, ripples in space formed just after the Big Bang.

Once again, it was big news (including in The Economist). The existence of such waves would give strong support for the theory of inflation, which holds that the early universe underwent a brief burst of faster-than-light expansion. Inflation was put forward in the 1980s by theorists as a way to resolve various knotty problems with the standard theory of the Big Bang. But although it is widely assumed to be true, direct evidence that it happened had been lacking.

Dr Kovac and his colleagues made much of their data available online at the time, prompting hundreds of physicists to check their work. Doubts soon surfaced. The team’s claim to have spotted the waves relies on them having diligently scrubbed out every possible source of false positives. But doing that is hard, because the most likely culprit—interstellar dust—is poorly understood. Such diligence is made doubly difficult by the fact that, although several teams are hunting for primordial gravity waves, the glory of being the first to spot them means none is willing to share its data with the others.

The various online arguments culminated with the publication of an online paper by researchers from New York University, Princeton University and the Institute for Advanced Study (also in Princeton). This concluded that Dr Kovac’s data, which came from an Antarctic telescope called BICEP-2, may well have been contaminated by space dust, and that the purported gravitational waves may be much weaker than the team first claimed—if they exist at all.
 

"Everything We Know Is Wrong"

...Again and again, researchers are finding the same things, whether it's with observational studies, or even the "gold standard" randomised controlled trials, whether it's medicine or economics. Nobody bothers to try to replicate most studies, and when they do try, the majority of findings don't stack up. The awkward truth is that, taken as a whole, the scientific literature is full of falsehoods.

Jolyon Jenkins reports on the factors that lie behind this: how researchers are obliged for career reasons to produce studies that have "impact"; how small teams produce headline-grabbing studies that are too statistically underpowered to yield meaningful results; and how scientists are under pressure to spin their findings and pretend that things they discovered by chance are what they were looking for in the first place. It's not exactly fraud, but it's not completely honest either. And he reports on new initiatives to go through the literature systematically, trying to reproduce published findings, and on the bitter and personalised battles that can occur as a result...

=-=-=

Drug-testing rules broken by Canadian researchers

Top Canadian doctors running clinical trials have risked patient safety, failed to report serious side-effects suffered by their human test subjects and botched the scientific research on the drugs.


Top Canadian doctors running clinical drug trials failed to report serious side-effects suffered by their human test subjects.

The doctors, some of them esteemed researchers from Canada’s most prestigious hospitals and academic institutions, have also routinely broken rules designed to protect participants and botched research of new treatments.

Using records obtained through U.S. freedom of information legislation, a Star investigation has found the following problems in the system designed to ensure new drugs are safe and effective:

  • In 2012, a top Toronto cancer researcher failed to report a respiratory tract infection, severe vomiting and other adverse events.
  • A clinical trial run by an Alberta doctor reported that patients responded more favourably to the treatment than they actually did.
  • A Toronto hospital’s chief of medical staff ran a clinical trial of a powerful antipsychotic on autistic children, and he did not report side-effects suffered by four of the children.
  • And numerous doctors across the country failed to tell participants that one of the goals of the clinical trial was to test the safety of the drug they were taking.

=-=-=

The Weirdest People in the World

Broad claims about human psychology and behavior based on narrow samples from Western societies are regularly published. Are such species‐generalizing claims justified?

This review suggests not only substantial variability in experimental results across populations in basic domains, but that standard subjects are unusual compared with the rest of the species—outliers. The domains reviewed include visual perception, fairness, spatial reasoning, moral reasoning, thinking‐styles, and self‐concepts.

This suggests (1) caution in addressing questions of human nature from this slice of humanity, and (2) that understanding human psychology will require broader subject pools.

We close by proposing ways to address these challenges.

=-=-=

The Retraction War

Scientists seek demigod status, journals want blockbuster results, and retractions are on the rise: is science broken?

....Retraction was meant to be a corrective for any mistakes or occasional misconduct in science but it has, at times, taken on a superhero persona instead. Like Superman, retraction can be too powerful, wiping out whole careers with a single blow. Yet it is also like Clark Kent, so mild it can be ignored while fraudsters continue publishing and receiving grants. The process is so fraught that just 5 per cent of scientific misconduct ever results in retraction, leaving an abundance of error in play to obfuscate the facts.

Scientists are increasingly aware of the amount of bad science out there – the word ‘reproducibility’ has become a kind of rallying cry for those who would reform science today. How can we ensure that studies are sound and can be reproduced by other scientists in separate labs?

=-=-=

Interesting editorial from Lancet

Can bad scientific practices be fixed? Part of the problem is that no-one is incentivised to be right. Instead, scientists are incentivised to be productive and innovative. Would a Hippocratic Oath for science help? Certainly don’t add more layers of research red tape. Instead of changing incentives, perhaps one could remove incentives altogether. Or insist on replicability statements in grant applications and research papers. Or emphasise collaboration, not competition. Or insist on preregistration of protocols. Or reward better pre- and post-publication peer review. Or improve research training and mentorship. Or implement the recommendations from our Series on increasing research value, published last year. One of the most convincing proposals came from outside the biomedical community. Tony Weidberg is a Professor of Particle Physics at Oxford. Following several high-profile errors, the particle physics community now invests great effort into intensive checking and rechecking of data prior to publication. By filtering results through independent working groups, physicists are encouraged to criticise. Good criticism is rewarded.
 
How Industry Manipulates Science and Gambles with Your Future

"
In their new book, Trust Us, We're Experts: How Industry Manipulates Science and Gambles with Your Future, Sheldon Rampton and John Stauber offer a chilling exposé on the manufacturing of "independent experts." Public relations firms and corporations have seized upon a slick new way of getting you to buy what they have to sell: Let you hear it from a neutral "third party," like a professor or a pediatrician or a soccer mom or a watchdog group. The problem is, these third parties are usually anything but neutral. They have been handpicked, cultivated, and meticulously packaged to make you believe what they have to say--preferably in an "objective" format like a news show or a letter to the editor. And in some cases, they have been paid handsomely for their "opinions."

For example:

  • You think that nonprofit organizations just give away their stamps of approval on products? Bristol-Myers Squibb paid $600,000 to the American Heart Association for the right to display AHA's name and logo in ads for its cholesterol-lowering drug Pravachol. SmithKline Beecham paid the American Cancer Society $1 million for the right to use its logo in ads for Beecham's Nicoderm CQ and Nicorette anti-smoking products.
  • You think that you're witnessing a spontaneous public debate over a national issue? When the Justice Department began antitrust investigations of the Microsoft Corporation in 1998, Microsoft's public relations firm countered with a plan to plant pro-Microsoft articles, letters to the editor, and opinion pieces all across the nation, crafted by professional media handlers but meant to be perceived as off-the-cuff, heart-felt testimonials by "people out there."
  • You think that a study out of a prestigious university is completely unbiased? In 1997, Georgetown University's Credit Research Center issued a study which concluded that many debtors are using bankruptcy as an excuse to wriggle out of their obligations to creditors. Former U.S. Treasury Secretary Lloyd Bentsen cited the study in a Washington Times column and advocated for changes in federal law to make it harder for consumers to file for bankruptcy relief. What Bentsen failed to mention was that the Credit Research Center is funded in its entirety by credit card companies, banks, retailers, and others in the credit industry; that the study itself was produced with a $100,000 grant from Visa USA and MasterCard International Inc.; and that Bentsen himself had been hired to work as a credit-industry lobbyist.
  • You think that all grassroots organizations are truly grassroots? In 1993, a group called Mothers Opposing Pollution (MOP) appeared, calling itself "the largest women's environmental group in Australia, with thousands of supporters across the country." Their cause: a campaign against plastic milk bottles. It turned out that the group's spokesperson, Alana Maloney, was in truth a woman named Janet Rundle, the business partner of a man who did P.R. for the Association of Liquidpaperboard Carton Manufacturers, the makers of paper milk cartons.
  • You think that if a scientist says so, it must be true? In the early 1990s, tobacco companies secretly paid thirteen scientists a total of $156,000 to write a few letters to influential medical journals. One biostatistician received $10,000 for writing a single, eight-paragraph letter that was published in the Journal of the American Medical Association. A cancer researcher received $20,137 for writing four letters and an opinion piece to the Lancet, the Journal of the National Cancer Institute, and the Wall Street Journal. Nice work if you can get it, especially since the scientists didn't even have to write the letters themselves. Two tobacco-industry law firms were available to do the actual drafting and editing.
Rampton and Stauber reveal many more such examples of "perception management"--all of them orchestrated to make us buy or believe whatever the "independent expert" is pushing. They also explore the underlying assumptions about human psychology--e.g., "the public must be manipulated for its own good"--that make this kind of subliminal hard-sell possible.
"
 
From 2012: In cancer science, many "discoveries" don't hold up

During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 "landmark" publications -- papers in top journals, from reputable labs -- for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.

Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.

"It was shocking," said Begley, now senior vice president of privately held biotechnology company TetraLogic, which develops cancer drugs. "These are the studies the pharmaceutical industry relies on to identify new targets for drug development. But if you're going to place a $1 million or $2 million or $5 million bet on an observation, you need to be sure it's true. As we tried to reproduce these papers we became convinced you can't take anything at face value."
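A back-of-envelope check on the numbers above: 6 of 53 landmark results replicated, about 11%. A standard Wilson score interval (assuming, purely for illustration, that the 53 papers behave like an independent sample) gives a sense of how uncertain that rate is:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    center = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (center - margin) / denom, (center + margin) / denom

lo, hi = wilson_interval(6, 53)
print(f"replication rate: {6/53:.1%}, 95% CI: {lo:.1%} to {hi:.1%}")
# -> replication rate: 11.3%, 95% CI: 5.3% to 22.6%
```

Even at the optimistic end of the interval, fewer than a quarter of these landmark findings would be expected to replicate.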

The failure to win "the war on cancer" has been blamed on many factors, from the use of mouse models that are irrelevant to human cancers to risk-averse funding agencies. But recently a new culprit has emerged: too many basic scientific discoveries, done in animals or cells growing in lab dishes and meant to show the way to a new drug, are wrong.

Begley's experience echoes a report from scientists at Bayer AG last year. Neither group of researchers alleges fraud, nor would they identify the research they had tried to replicate.

But they and others fear the phenomenon is the product of a skewed system of incentives that has academics cutting corners to further their careers.

George Robertson of Dalhousie University in Nova Scotia previously worked at Merck on neurodegenerative diseases such as Parkinson's. While at Merck, he also found many academic studies that did not hold up.

"It drives people in industry crazy. Why are we seeing a collapse of the pharma and biotech industries? One possibility is that academia is not providing accurate findings," he said.

THE BEST STORY

Other scientists worry that something less innocuous explains the lack of reproducibility.

Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.

"We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."

Such selective publication is just one reason the scientific literature is peppered with incorrect results.
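The "did it six times, got the result once" anecdote can be quantified with a quick stdlib-only simulation (hypothetical sample sizes): even with no real effect at all, publishing only the best of six attempts produces a "significant" finding roughly a quarter of the time.

```python
import math
import random

random.seed(1)

def null_experiment_p(n=20):
    """Two-sided p-value comparing two samples drawn from the SAME
    unit-variance normal distribution (i.e., the null is true)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2.0 / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

trials = 2000
# Honest reporting: one experiment, one p-value.
honest = sum(null_experiment_p() < 0.05 for _ in range(trials)) / trials
# Selective reporting: run six experiments, publish the best one.
best_of_6 = sum(min(null_experiment_p() for _ in range(6)) < 0.05
                for _ in range(trials)) / trials

print(f"report every run:     {honest:.3f}")     # near the nominal 0.05
print(f"report the best of 6: {best_of_6:.3f}")  # near 1 - 0.95**6 ~ 0.26
```

The same mechanism, applied across many outcomes or analyses within a single study, is what is usually called p-hacking.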


For one thing, basic science studies are rarely "blinded" the way clinical trials are. That is, researchers know which cell line or mouse got a treatment or had cancer. That can be a problem when data are subject to interpretation, as a researcher who is intellectually invested in a theory is more likely to interpret ambiguous evidence in its favor.

The problem goes beyond cancer.
 
Psychology’s Replication Crisis Can’t Be Wished Away

Technical discussion aside, I want to make two points here. First, the Reproducibility Project is far from the only line of evidence for psychology’s problems. There’s the growing list of failures to replicate textbook phenomena. There’s publication bias—the tendency to only publish studies with positive results, while dismissing those with negative ones. There’s evidence of questionable research practices that are widespread and condoned.

Second, it can be very easy to see this as an academic spat about turgid statistical matters like p-values, and degrees of freedom, and publication bias. It’s not. It’s about people’s lives. Their careers. Their passions. Their futures. Of all the things I’ve read (or written) about the (alleged) replicability crisis, few have driven this point home better than a post from Michael Inzlicht at the University of Toronto, published Monday. It is unguarded, humane, and heartbreaking.
 
How Industry Manipulates Science and Gambles with Your Future
That PR firms regularly engage in this type of behavior needs to more widely known. It is a terribly cynical and manipulative practice being used more and more.
 

2016:

Cancer Research Is Broken: There’s a replication crisis in biomedicine—and no one even knows how deep it runs.

In February, the White House announced its plan to put $1 billion toward a similar objective—a “Cancer Moonshot” aimed at making research more techy and efficient. But recent studies of the research enterprise reveal a more confounding issue, and one that won’t be solved with bigger grants and increasingly disruptive attitudes. The deeper problem is that much of cancer research in the lab—maybe even most of it—simply can’t be trusted. The data are corrupt. The findings are unstable. The science doesn’t work.

In other words, we face a replication crisis in the field of biomedicine, not unlike the one we’ve seen in psychology but with far more dire implications. Sloppy data analysis, contaminated lab materials, and poor experimental design all contribute to the problem. Last summer, Leonard P. Freedman, a scientist who worked for years in both academia and big pharma, published a paper with two colleagues on “the economics of reproducibility in preclinical research.” After reviewing the estimated prevalence of each of these flaws and fault-lines in biomedical literature, Freedman and his co-authors guessed that fully half of all results rest on shaky ground, and might not be replicable in other labs. These cancer studies don’t merely fail to find a cure; they might not offer any useful data whatsoever. Given current U.S. spending habits, the resulting waste amounts to more than $28 billion. That’s two dozen Cancer Moonshots misfired in every single year. That’s 100 squandered internet tycoons.

How could this be happening? At first glance it would seem medical research has a natural immunity to the disease of irreproducible results. Other fields, such as psychology, hold a more tenuous connection to our lives. When a social-science theory turns out to be misguided, we have only to update our understanding of the human mind—a shift of attitude, perhaps, as opposed to one of practice. The real-world stakes are low enough that strands of falsehood might sustain themselves throughout the published literature without having too much impact. But when a cancer study ends up at the wrong conclusion—and an errant strand is snipped—people die and suffer, and a multibillion-dollar industry of treatment loses money, too. I always figured that this feedback would provide a self-corrective loop, a way for the invisible hands of human health and profit motive to guide the field away from bad technique.

Alas, the feedback loop doesn’t seem to work so well, and without some signal to correct them, biologists get stuck in their bad habits, favoring efficiency in publication over the value of results. They also face a problem more specific to their research: The act of reproducing biomedical experiments—I mean, just attempting to obtain the same result—takes enormous time and money, far more than would be required for, say, studies in psychology. That makes it very hard to diagnose the problem of reproducibility in cancer research and understand its scope and symptoms. If we can’t easily test the literature for errors, then how are we supposed to fix it up?

Not all these problems are unique to biomedicine. Brian Nosek points out that in most fields, career advancement comes with publishing the most papers and the flashiest papers, not the most well-documented ones. That means that when it comes to getting ahead, it’s not really in the interest of any researchers—biologists and psychologists alike—to be comprehensive in the reporting of their data and procedures. And for every point that one could make about the specific problems with reproducing biology experiments—the trickiness of identifying biological reagents, or working out complicated protocols—Nosek offers an analogy from his field. Even a behavioral study of local undergraduate volunteers may require subtle calibrations, careful delivery of instructions, and attention to seemingly trivial factors such as the time of day. I thought back to what the social psychologist Roy Baumeister told me about his own work, that there is “a craft to running experiments,” and that this craft is sometimes bungled in attempted replications.

That may be so, but I’m still convinced that psychology has a huge advantage over cancer research, when it comes to self-diagnosis. You can see it in the way each field has responded to its replication crisis. Some psychology labs are now working to validate and replicate their own research before it’s published. Some psychology journals are requiring researchers to announce their research plans and hypotheses ahead of time, to help prevent bias. And though its findings have been criticized, the Reproducibility Project for Psychology has already been completed. (This openness to dealing with the problem may explain why the crisis in psychology has gotten somewhat more attention in the press.)

The biologists, in comparison, have been reluctant or unable to pursue even very simple measures of reform. Leonard Freedman, the lead author of the paper on the economics of irreproducibility, has been pushing very hard for scientists to pay attention to the cell lines that they use in research. These common laboratory tools are often contaminated with hard-to-see bacteria, or else with other, unrelated lines of cells. One survey found such problems may affect as many as 36 percent of the cell lines used in published papers. Freedman notes that while there is a simple way to test a cell line for contamination—a genetic test that costs about a hundred bucks—it’s almost never used. Some journals recommend the test, but almost none require it. “Deep down, I think they’re afraid to make the bar too high,” he said.
 
This is a wonderful thread because I hope it illustrates to all, exactly how bad science has become. I confess, some of the articles quoted above take even my breath away, and I consider myself pretty negative about modern science.

Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.

"We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."

Such selective publication is just one reason the scientific literature is peppered with incorrect results.

I think maybe this one example indicates the sheer mess that we have now. Back when I actually did science, there was a certain honour involved in doing the job properly - now it has become a vast game of chasing grants and publishing anything just to keep climbing the greasy pole! Think for a moment - how can any honest researcher possibly compete against people behaving like that - they just get squeezed out!

Now add in another factor that often operates - the need to hold to a pseudo-consensus about all sorts of subjects, such as the consequences of having high cholesterol, or the meaning of the red shifts of distant galaxies, or the properties of consciousness. These consensuses can have a huge effect - data may get fiddled in order to conform, awkward work may get suppressed - either by the authors or the peer reviewers - and people can become outcasts for thinking the unthinkable.

These exposés tend to be in the biological sciences, but that is because ordinary people care more about those areas, since they impact on medicine - there is absolutely no reason to think that other areas are any better, except where some tangible end product must come out of the end - a faster chip, or an Ebola vaccine that is needed urgently.

Many areas of science are impossibly expensive to repeat - for example, we aren't going to get a second LHC designed by a different team and running at the same energy. This means that high-energy physics and cosmology must be even less reliable - because those involved don't need to fear direct contradiction - we just don't get a window into what is going on, so we go on chucking money at these projects.

I seriously believe that by sifting through reports of anomalous consciousness - as we do here - we probably get closer to the truth than scientists ever can, because we don't have to cleave to a party line, and we are only really here because we want the truth.

Maybe the next time Alex interviews a sceptical science type, he should arm himself with some of these studies into the quality of science to challenge some of what they say!

David
 
I seriously believe that by sifting through reports of anomalous consciousness - as we do here - we probably get closer to the truth than scientists ever can, because we don't have to cleave to a party line, and we are only really here because we want the truth.

By reports do you mean scientific papers? I find myself less and less interested in the data-debate surrounding Psi.

I think it's important for the general debate, and it's good to see some progression in the academic setting, but unless everyone sits down and reads some statistics it seems very hard to gauge where things stand.

Examination of data seems like something people would do offline?

=-=-=

When Media Scholars and Anti-Media Advocacy Groups Get Too Close


In the last few weeks the Parents Television Council (PTC), a controversial and aggressive anti-media advocacy group, has been pushing the Federal Communications Commission and Congress to hold hearings on revamping the TV ratings. The PTC appears to be allied mainly with right-wing religious or social conservative groups in its effort, expressing concern that the current TV ratings allow for too much objectionable content. Echoing this concern, a group of 28 scholars wrote to the FCC last week to support hearings claiming that objectionable content could be harmful to minors. In order to suggest that they were not affiliated with any other organization, the authors of the letter explicitly noted that they were “writing as independent private citizens. . .” However, one recent news report has discovered that this statement may have been written at the request of the PTC. So it isn’t too surprising to learn that the statement goes on to claim that research has “. . . documented harmful effects of violent content; indeed, this is by far the most thoroughly researched area of media effects.”

Yet, these claims are patently false. There certainly are numerous studies of media effects, but whether considering violent content or other objectionable content, the evidence has always been inconsistent, controversial and often plagued by methodological flaws.
In their letter to the FCC the 28 academics engage in a dubious practice called citation bias: citing only studies supporting their personal views, and ignoring studies that inconveniently do not. In one recent essay, scholars Thomas Babor and Thomas McGovern specifically refer to this unscientific behavior as one of the “seven deadly sins” of scientific publishing. Reports of close coordination with a censorious anti-media advocacy group are also worrisome. In a report from 2013, I warned scholars against such activities, noting these represented a conflict of interest similar to working closely with media industries.
 
By reports do you mean scientific papers? I find myself less and less interested in the data-debate surrounding Psi.

I think it's important for the general debate, and it's good to see some progression in the academic setting, but unless everyone sits down and reads some statistics it seems very hard to gauge where things stand.
Well there is weak ψ and strong ψ. I was thinking more of the natural ψ phenomena and NDE's, plus other phenomena such as those induced by entheogens. Also, some of Julie Beischel's results with mediums seem way beyond statistical quibbles.

The problems with statistics seem to be not really the formulae themselves, but the endless quibbles about whether people changed their endpoints, left unwanted results in file drawers, etc.!

I think it is worth remembering that Wiseman had to concede that ESP would be accepted as real if it were not so controversial. When you think that people in OBE's or NDE's (both of which certainly exist - whatever they mean) frequently report communication by ESP, it doesn't seem so implausible that a weak remnant is present in normal life.

The biggest distortion that conventional science imposes on all this, is to absolutely refuse to see the connections - ESP in OBE's and NDE's is anecdotal, ESP that tells someone when a loved one is in danger far away, is also anecdotal, and ESP in the laboratory might, just might, perhaps be swept away, so just flush the lot down the toilet!

However, this entire thread reminds us that scientists often argue things in bad faith.

David
 
There doesn't seem much to critique. Max found the most recent paper but he felt it suffered from a lack of important detail:
http://www.skeptiko-forum.com/threads/beischel-podcast.2542/#post-77368
Thanks. I meant in addition to Max's analysis on this forum, which I had already seen. :-) I'm not sure he specifically said "lack of important detail" but rather that the blinding description was spread out over more than one paper and difficult to follow. The blinding discussed on pages 138-139 references this paper:

http://windbridge.org/papers/BeischelJP71Methods.pdf

Cheers,
Bill
 
The subtext to this video:

So this is more a critique of science reporting than of science itself, something that is a pet hate for skeptics too.

He goes into problems regarding pressures to publish, p-hacking, how even good studies can get BS results, and under-replication as well.
 
He goes into problems regarding pressures to publish, p-hacking, how even good studies can get BS results, and under-replication as well.
He also talks about publishing in less than legitimate scientific journals and ignoring consensus science.
At the end he summarizes very succinctly:
 
He also talks about publishing in less than legitimate scientific journals and ignoring consensus science.
At the end he summarizes very succinctly:

It is odd that he spends the first part of the video talking about replication issues and p-hacking and then appeals to the authority of a "consensus" at the end.

I'm neutral on the two topics that summary brings up, but I kinda doubt John Oliver could explain why people who think vaccines cause autism, or that climate change isn't real, are wrong. I wonder if he's read a single paper on either subject or has the mathematical knowledge to assess research studies.

Additionally, if you go back to earlier posts in this thread, it's noted that even supposedly top journals have published problematic data. I suspect words like "legitimate" and "consensus" shouldn't be taken seriously without a massive replication test across the scientific fields.

Would be good to rank the fields in terms of proper scientific conduct, as I'm curious where parapsychology falls in comparison.
 