Critiques of Science as Currently Practiced

  • Thread starter Sciborg_S_Patel

Sciborg_S_Patel

The Truthiness Of Scientific Research
The topic itself is not new. For decades, there have been rumors about famous historical scientists like Newton, Kepler, and Mendel. The charge was that their research results were too good to be true. They must have faked the data, or at least prettied it up a bit. But Newton, Kepler, and Mendel nonetheless retained their seats in the Science Hall of Fame. The usual reaction of those who heard the rumors was a shrug. So what? They were right, weren't they?

What's new is that nowadays everyone seems to be doing it, and they're not always right. In fact, according to John Ioannidis, they're not even right most of the time.

John Ioannidis is the author of a paper titled "Why Most Published Research Findings Are False," which appeared in a medical journal in 2005. Nowadays this paper is described as "seminal" and "famous," but at first it received little attention outside the field of medicine, and even medical researchers didn't seem to be losing any sleep over it.

Then people in my own field, psychology, began to voice similar doubts. In 2011, the journal Psychological Science published a paper titled "False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant." In 2012, the same journal published a paper on "the prevalence of questionable research practices." In an anonymous survey of more than 2000 psychologists, 53 percent admitted that they had failed to report all of a study's dependent measures, 38 percent had decided to exclude data after calculating the effect it would have on the outcome, and 16 percent had stopped collecting data earlier than planned because they had gotten the results they were looking for.

The final punch landed in August 2015. The news was published first in the journal Science and quickly announced to the world by the New York Times, under a title that was surely facetious: "Psychologists welcome analysis casting doubt on their work." The article itself painted a more realistic picture. "The field of psychology sustained a damaging blow," it began. "A new analysis found that only 36 percent of findings from almost 100 studies in the top three psychology journals held up when the original experiments were rigorously redone." On average, effects found in the replications were only half the magnitude of those reported in the original publications.

Why have things gone so badly awry in psychological and medical research? And what can be done to put them right again?

I think there are two reasons for the decline of truth and the rise of truthiness in scientific research. First, research is no longer something people do for fun, because they're curious. It has become something that people are required to do, if they want a career in the academic world. Whether they enjoy it or not, whether they are good at it or not, they've got to turn out papers every few months or their career is down the tubes. The rewards for publishing have become too great, relative to the rewards for doing other things, such as teaching. People are doing research for the wrong reasons: not to satisfy their curiosity but to satisfy their ambitions.

There are too many journals publishing too many papers. Most of what's in them is useless, boring, or wrong.

The solution is to stop rewarding people on the basis of how much they publish. Surely the tenure committees at great universities could come up with other criteria on which to base their decisions!

The second thing that has gone awry is the vetting of research papers. Most journals send out submitted manuscripts for review. The reviewers are unpaid experts in the same field, who are expected to read the manuscript carefully, make judgments about the importance of the results and the validity of the procedures, and put aside any thoughts of how the publication of this paper might affect their own prospects. It's a hard job that has gotten harder over the years, as research has become more specialized and data analysis more complex. I propose that this job should be performed by paid experts—accredited specialists in the analysis of research. Perhaps this could provide an alternative path into academia for people who don't particularly enjoy the nitty-gritty of doing research but who love ferreting out the flaws and virtues in the research of others.

In Woody Allen's movie "Sleeper," set 200 years in the future, a scientist explains that people used to think that wheat germ was healthy and that steak, cream pie, and hot fudge were unhealthy—"precisely the opposite of what we now know to be true." It's a joke that hits too close to home. Bad science gives science a bad name.

Whether wheat germ is or isn't good for people is a minor matter. But whether people believe in scientific research or scoff at it is of crucial importance to the future of our planet and its inhabitants.
 
Ditto for almost all science. The only areas that may be exempt are areas of research tied to specific, immediate goals. Research on a faster chip has to deliver a faster chip with a decent yield that works reliably. Research on dark matter distribution in the cosmos, evidence for strings in the early universe deduced from the cosmic microwave background, evidence for a particle that only lives for 10^(-25) sec (the Higgs), evidence for Global Warming, etc. - all these must be utterly suspect.

David
 

My take:
People are doing research for the wrong reasons: not to satisfy their curiosity but to satisfy their ambitions.

There are too many journals publishing too many papers. Most of what's in them is useless, boring, or wrong.

The solution is to stop rewarding people on the basis of how much they publish. Surely the tenure committees at great universities could come up with other criteria on which to base their decisions!

Yes!!!

I propose that this job should be performed by paid experts—accredited specialists in the analysis of research.
Not good enough. In fact, not much different from the current status quo.

But whether people believe in scientific research or scoff at it is of crucial importance to the future of our planet and its inhabitants.
No. Though I do lean to believing that the greatest benefit would be in most individuals finding their own paths and not blindly kowtowing to info from authorities. Whether those authorities wear the mantle of science or any other religion.
 
Posted on Kastrup's forum, may have already been seen here but figure I'd mention it:

The Cold Fusion Horizon: Is cold fusion truly impossible, or is it just that no respectable scientist can risk their reputation working on it?

So, as a matter of sociology, it is easy to see why Rossi gets little serious attention; why an interview with Darden associates him with scientific chicanery; and why, I hope, some of you are having doubts about me for writing on the subject in a way that indicates that I am prepared to consider it seriously. (If so, hold that attitude. I want to explain why I take it to reflect a pathology in our present version of the scientific method. My task will be easier if you are still suffering from the symptoms.)

Sociology is one thing, but rational explanation another. It is very hard to extract from this history any satisfactory justification for ignoring recent work on LENR. After all, the standard line is that the rejection of cold fusion in 1989 turned on the failure to replicate the claims of Fleischmann and Pons. Yet if that were the real reason, then the rejection would have to be provisional. Failure to replicate couldn’t possibly be more than provisional – empirical science is a fallible business, as any good scientist would acknowledge. In that case, well-performed experiments claiming to overturn the failure to replicate would certainly be of great interest.

What if the failure to replicate wasn’t crucial after all? What if we already knew, on theoretical grounds alone, that cold fusion was impossible? But this would make a nonsense of the fuss over the failure to reproduce Fleischmann and Pons’ findings. And in any case, it is simply not true. As I said at the beginning, what physicists actually say (in my experience) is that although LENR is highly unlikely, we cannot say that it is impossible. We know that the energy is in there, after all.
 
http://www.collective-evolution.com...ant-breakthrough-on-a-nuclear-fusion-machine/

Cold Fusion?

Cold fusion is a certain type of nuclear reaction that occurs at or near room temperature. In years past, it was studied as theoretical and hypothetical, but scientists all over the world have attested to the possibility of cold fusion becoming a reality and the tremendous implications it can have for clean energy generation. It is a form of energy generated when hydrogen interacts with various metals, like nickel and palladium. It has been subject to a large amount of criticism and opposition, and while many remain skeptical, a number of distinguished scientists have confirmed its reality.

Cold fusion would also eliminate the modern day energy industry, and this is not just a crazy theory; hundreds of people in over 12 countries have been investigating the process with success. Thousands of papers have been published and are available for review at http://lenr-canr.org/.

A paper published by Martin Fleischmann and Stanley Pons, for example, claimed to successfully demonstrate cold fusion. (source) It is highly controversial, and you can learn more about it by listening to this lecture given by MIT professor Peter Hagelstein. He outlines that the primary implication of the Fleischmann-Pons experiment is that there may be new physics which allow for clean nuclear energy production.

A few years ago, a group of scientists led by physics professor Yoshiaki Arata of Osaka University in Japan claimed to have made a successful demonstration of cold fusion.

Below is a video of Eugene Mallove, who held BS and MS degrees in aeronautical and astronautical engineering from MIT and a doctorate in environmental health sciences from Harvard University. He was also the chief science writer at the MIT news office at the time of the (supposed) first cold fusion breakthrough in 1989.

It’s also important to mention that new documents obtained via the Freedom of Information Act (FOIA) have revealed how the Patent Office has been using a secret system to withhold the approval of some applications.

This 50-page document was obtained by Kilpatrick Townsend & Stockton, LLP, a firm that commonly represents major tech companies including Apple, Google, and Twitter (to name a few). You can view that entire document HERE.

It’s also important to note (as reported by the Federation of American Scientists) that there were over 5,000 inventions that were under secrecy orders at the end of Fiscal Year 2014, which marked the highest number of secrecy orders in effect since 1994. (source)

As Steven Aftergood from the Federation of American Scientists reports:

Thus, the 1971 list indicates that patents for solar photovoltaic generators were subject to review and possible restriction if the photovoltaics were more than 20% efficient. Energy conversion systems were likewise subject to review and possible restriction if they offered conversion efficiencies “in excess of 70-80%.” (source)

It’s something to think about.
 
I think it is clear that "Cold Fusion" was suppressed for some reason(s). Even if the experiments were wrong, and CF is not possible, the speed of the refutation gave the game away. You don't replicate or refute another's experiment in a matter of weeks.

Possible reasons:

1) It would make the very expensive hot fusion research pointless (and it was Culham that did one of the major refutations).

2) Conceivably the technology could be used to make a very powerful bomb.

3) Assorted other energy interests didn't want the competition.

4) Depending on why the Global Warming scam was being concocted, Cold Fusion might have made it irrelevant. E.g. some suggest that CAGW is just a way of letting the Third World catch up by hobbling the advanced nations.

If any fusion occurs at all - even on a scale that could not be exploited - it would be of enormous interest to physicists and chemists. It is extraordinary how it has been almost suppressed.

Look here for a lot more details including some lectures from MIT:

http://coldfusionnow.org/interviews/

David
 
Cold fusion was suppressed by hot fusion scientists who were afraid of competition. The energy companies took a different approach to potential competition: they began their own research programs on cold fusion.
 
Old Aeon article I just remembered:

Science fictions

Scientists can be notoriously dismissive of other disciplines, and one of the subjects that suffers most at their hands is history. That suggestion will surprise many scientists. ‘But we love history!’ they’ll cry. And indeed, there is no shortage of accounts from scientists of the triumphant intellectual accomplishments of Einstein, Darwin, Newton, Galileo, and so on. They name institutes and telescopes after these guys, making them almost secular saints of rationalism.

And that’s the problem. All too often, history becomes a rhetorical tool bent into a shape that serves science, or else a source of lively anecdote to spice up the introduction to a talk or a book. Oh, that Mendeleev and his dream of a periodic table, that Faraday forecasting a tax on electricity!

I don’t wish to dismiss the value of a bit of historical context. But it’s troubling that the love of a good story so often leads scientists to abandon the rigorous attitude to facts that they exhibit in their own work. Most worrisome of all is the way these tales from science history become shoehorned into a modern narrative — so that, say, the persecution of Galileo shows how religion is the enemy of scientific truth.
 
Power failure: why small sample size undermines the reliability of neuroscience

A study with low statistical power has a reduced chance of detecting a true effect, but it is less well appreciated that low power also reduces the likelihood that a statistically significant result reflects a true effect. Here, we show that the average statistical power of studies in the neurosciences is very low. The consequences of this include overestimates of effect size and low reproducibility of results. There are also ethical dimensions to this problem, as unreliable research is inefficient and wasteful. Improving reproducibility in neuroscience is a key priority and requires attention to well-established but often ignored methodological principles.
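The arithmetic behind that second point is worth spelling out. Here is a minimal sketch (my own illustration, not from the paper) of how statistical power feeds into the probability that a significant result reflects a true effect, assuming a 0.05 significance threshold and an illustrative prior probability that a tested hypothesis is real:

```python
# Minimal sketch (not from Button et al.): positive predictive value (PPV),
# i.e. P(effect is real | result is significant), as a function of power.
# Assumptions: alpha = 0.05, prior = 0.1 chance a tested effect is real,
# and no publication bias or p-hacking (both would make things worse).

def positive_predictive_value(power, alpha=0.05, prior=0.1):
    true_positives = power * prior          # real effects correctly detected
    false_positives = alpha * (1 - prior)   # null effects crossing the threshold
    return true_positives / (true_positives + false_positives)

for power in (0.8, 0.5, 0.2):
    print(f"power = {power:.1f} -> PPV = {positive_predictive_value(power):.2f}")
```

With these assumptions, dropping power from 0.8 to around 0.2 (roughly the median the paper reports for neuroscience) cuts the chance that a significant finding is genuine from about 0.64 to about 0.31.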
 
Cross-cultural studies of toddler self-awareness have been using an unfair test

There's a simple and fun way to test a toddler's self-awareness. You make a red mark (or place a red sticker) on their forehead discreetly, and then you see what happens when they look in a mirror. If they have a sense of self – that is, if they recognise themselves as a distinct entity in the world – then they will see that there is a strange red mark on their face and attempt to touch it or remove it.

This is called the "mirror self-recognition test" (it's used to test self-awareness in animals too) and by age two most kids "pass" the test, at least in Western countries. Several studies have suggested that the ability to pass the test is delayed, sometimes by years, in non-Western cultures, such as rural India and Cameroon, Fiji and Peru. But now a study in Developmental Science says this may be because the mirror test is culturally biased. Using a more physical and social self-awareness test, Josephine Ross at the University of Dundee and her colleagues actually find more precocious performance in a non-Western (Zambian) group of toddlers.

Another sign of the potential fragility of psychology as a scientific discipline.
 
Brian Whitworth on why physics is a "Hollow Science", taken from Quantum Realism, Chapter 1: The physical world as a virtual reality

"We take our world to be an objective reality, but is it? The assumption that the physical world exists in and of itself has struggled to assimilate the findings of modern physics for some time now. An objective space and time should just "be", but space contracts and time dilates in our world. Objective things should just inherently exist, but electrons are probability of existence smears that spread, tunnel, superpose and entangle in physically impossible ways. Cosmology now says that the entire physical universe just popped up, out of nothing about fourteen billion years ago. This is not how an objective reality should behave!"

=-=-=

"In modern physics strange theories are routine, e.g. in many-worlds theory each quantum event divides all reality, so everything that can happen does happen, in an inconceivable multiverse of parallel worlds (Everett, 1957). In the inflationary model, the physical universe is just one of many bubble universes (Guth, 1998) and string theory has six extra dimensions curled up and hidden from view. In M-theory, the universe floats on a fifth dimension “brane” we can’t see (Gribbin, 2000) p177-180 and others suggest we are one of two universes that collide and retreat in an eternal cycle (J. Khoury, 2001). The days when physics just described the physical world we see are long gone.

Yet the findings of physics are equally strange: the sun bends light by curving the space around it; the earth’s gravity slows down time; and atomic clocks tick faster on tall buildings than they do on the ground. Movement also slows down time, so an atomic clock on an aircraft ticks slower than a synchronized one on the ground (Hafele & Keating, 1972), and moving objects become heavier with speed as well. In our world, space, time and mass vary but the speed of light is strangely constant.

If relativity is strange then quantum theory is even stranger: in Young's experiment one electron goes through two slits at once to interfere with itself; entangled photons ignore speed of light limits; the vacuum of space exerts pressure; and gamma radiation is entirely random, i.e. physically uncaused. Einstein, who was as open to new ideas as anyone, thought quantum theory made no sense, and it doesn’t"
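As a quick sanity check of the magnitudes mentioned just above (my own back-of-envelope aside, not Whitworth's), the fractional rate shifts are tiny but measurable:

```python
# Back-of-envelope scale of the two time-dilation effects mentioned above
# (illustrative numbers only: an airliner at ~250 m/s, a 300 m tall building).
import math

c = 299_792_458.0   # speed of light, m/s
g = 9.81            # surface gravity, m/s^2

v = 250.0           # aircraft speed, m/s (~900 km/h)
moving_clock_slowdown = 1 - math.sqrt(1 - (v / c) ** 2)   # special relativity, ~v^2/(2c^2)

h = 300.0           # height of a tall building, m
raised_clock_speedup = g * h / c ** 2                     # weak-field general relativity

seconds_per_day = 86_400
print(f"moving clock: slow by ~{moving_clock_slowdown * seconds_per_day * 1e9:.0f} ns/day")
print(f"raised clock: fast by ~{raised_clock_speedup * seconds_per_day * 1e9:.1f} ns/day")
```

Both effects come out to mere nanoseconds per day, which is why it took atomic clocks (Hafele & Keating, 1972) to see them at all.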

=-=-=

"...There are equations, proofs and applications, but the models that work make no physical sense, e.g. in Feynman's sum over histories an electron travels all possible paths between two points at once, but how can one electron do that? Theory should increase understanding, but in physics it seems to take it away. In wave-particle duality particles morph into waves, denying the very sense of what waves and particles are. Given a choice between meaning and mathematics, physics chose the latter and it shows. Quantum theory still isn’t taught in high schools because who can teach what makes no sense? Modern physics is a mathematical feast that at its core is entirely empty of meaning. It is a hollow science, built on impressive equations about quantum states that everyone agrees don’t exist! And physics has chosen this way of no meaning as a deliberate strategy..."


=-=-=

"It is not generally realized that the new structures of quantum theory and relativity are built on the old foundation of physical realism. If the physical world is real, trying to smash matter into its basic bits in particle accelerators makes sense. Yet the idea of a continuous universe made up of elementary point particles makes no more sense than a complete universe that always was. An object with an inherent mass needs a substance that extends in space. So it has left and right parts that by the same logic have still finer parts, and so on ad infinitum. The current response is that the universe consists of point particles with no extent, but how can something with no extent have mass? And since a billion points of no extent take up no more space than one, how then do extended objects form? It was then necessary to invent invisible fields continuous in space to keep these “points of no extent” apart by force. Finally, as every force needs a particle cause, the fields had to act by creating virtual particle agents, e.g. virtual photons. This masterpiece of circularity is immune to science, as a virtual photon is just a physical photon that can never be observed, as it is created and destroyed in the effect instant. Only physicists can see them, in equations and Feynman diagrams, which is good enough.

All was well, until new effects like neutron decay implied new forces and new invisible fields whose virtual particles had mass. The solution, in what was by now a well-oiled machine, was that another field created the virtual particles of the first field, and so the Higgs search began. The Higgs boson is the virtual particle created by an invisible field to explain another virtual particle created by another invisible field to explain an actual effect (neutron decay). Given dark energy and dark matter, it explains at best 4% of the mass of the universe, but the standard model needs it, so when after fifty years CERN found a million, million, million, millionth of a second signal in the possible range, physics was relieved. There is no evidence this “particle” has any effect on mass at all, but the standard model survives."
 
As drug industry’s influence over research grows, so does the potential for bias

For drugmaker GlaxoSmithKline, the 17-page article in the New England Journal of Medicine represented a coup.

The 2006 report described a trial that compared three diabetes drugs and concluded that Avandia, the company’s new drug, performed best.

“We now have clear evidence from a large international study that the initial use of [Avandia] is more effective than standard therapies,” a senior vice president of GlaxoSmithKline, Lawson Macartney, said in a news release.

What only careful readers of the article would have gleaned is the extent of the financial connections between the drugmaker and the research. The trial had been funded by GlaxoSmithKline, and each of the 11 authors had received money from the company. Four were employees and held company stock. The other seven were academic experts who had received grants or consultant fees from the firm.

Whether these ties altered the report on Avandia may be impossible for readers to know. But while sorting through the data from more than 4,000 patients, the investigators missed hints of a danger that, when fully realized four years later, would lead to Avandia’s virtual disappearance from the United States:

The drug raised the risk of heart attacks.
 
The Cold Fusion Horizon: Is cold fusion truly impossible, or is it just that no respectable scientist can risk their reputation working on it?

Is the cold fusion egg about to hatch?

Three months ago I wrote an essay in Aeon about intriguing developments in low-energy nuclear reactions (LENR), a controversial field that traces its origins to the claims of ‘cold fusion’ by Martin Fleischmann and Stanley Pons in 1989. Cold fusion itself is widely regarded as discredited, yet there are several recent reports of LENR devices producing commercially useful amounts of heat. As David Bailey and Jonathan Borwein have pointed out in HuffPost Science, it seems increasingly improbable that all these findings are the result of fraud or error, as skeptics assert. But the only remaining alternative – that science simply made the wrong call when it dismissed cold fusion – is still almost invisible in serious scientific conversations and in the mainstream media.

Why is this possibility so broadly ignored? I suggested that it is because LENR is caught in what I called a ‘reputation trap’. Cold fusion has had such a bad name that scientists and journalists put their reputations at risk if they dare to express an interest in LENR. So a fascinating story goes largely unnoticed and unreported.
 
Is the Scientific Process Broken?

The scientific process is broken. The tenure process, “publish or perish” mentality, and the insufficient review process of academic journals mean that researchers spend less time solving important puzzles and more time pursuing publication. But that wasn’t always the case...

...The National Academies of Science noted last year that there has been a tenfold increase since 1975 in scientific papers retracted because of fraud. A popular scientific blog, Retraction Watch, reports daily on retractions, corrections, and fraud from all corners of the scientific world.

Some argue that such findings aren’t evidence that science is broken — just very difficult. News “explainer” Vox recently defended the process, calling science “a long and grinding process carried out by fallible humans, involving false starts, dead ends, and, along the way, incorrect and unimportant studies that only grope at the truth, slowly and incrementally.”

Of course, finding and correcting errors is a normal and expected part of the scientific process. But there is more going on.

A recent article in Proceedings of the National Academy of Sciences documented that the problem in biomedical and life sciences is more attributable to bad actors than human error. Its authors conducted a detailed review of all 2,047 retracted research articles in those fields, which revealed that only 21.3 percent of retractions were attributable to error. In contrast, 67.4 percent of retractions were attributable to misconduct, including fraud or suspected fraud (43.4 percent), duplicate publication (14.2 percent), and plagiarism (9.8 percent).

Even this article on FiveThirtyEight, which attempts to defend the current scientific community from its critics, admits, “bad incentives are blocking good science.”

Polanyi doesn’t take these bad incentives into account—and perhaps they weren’t as pronounced in 1960s England as they are in the modern United States. In his article, he assumes that professional standards are enough to ensure that contributions to the scientific discussion would be plausible, accurate, important, interesting, and original. He fails to mention the strong incentives, produced by the tenure process, to publish in journals of particular prestige and importance...
 
The Tacit Magical Thinking in Popular Science

Systematic historical explorations of preconditions and wider contexts of scientific practice have fundamentally challenged such traditional accounts, particularly since historical scholarship has ceased to be dominated by exercises in promoting and justifying scientific and medical professionalism. In fact, popular science magazines and pamphlets co-emerged, and often overlapped content-wise, with a modern standard historiography of science, which retroactively transformed past events and actors to fit dominant nineteenth- and twentieth-century sensibilities. A major problem with present-day popular science is that it continues outdated history-of-science narratives to make the past compatible with contemporary academic mainstream culture. It insists on being naturalistic, and yet it adheres to breathtakingly simplistic and ultimately teleological nineteenth-century science myths and rhetorical patterns, for the only organising principles of scientific and medical practice that appear to exist for popular science are ‘reason’ and ‘truth’.
 
Who Will Debunk The Debunkers?

In the last few years, Sutton has himself embarked on another journey to the depths, this one far more treacherous than the ones he’s made before. The stakes were low when he was hunting something trivial, the supermyth of Popeye’s spinach; now Sutton has been digging in more sacred ground: the legacy of the great scientific hero and champion of the skeptics, Charles Darwin. In 2014, after spending a year working 18-hour days, seven days a week, Sutton published his most extensive work to date, a 600-page broadside on a cherished story of discovery. He called it “Nullius in Verba: Darwin’s Greatest Secret.”

Sutton’s allegations are explosive. He claims to have found irrefutable proof that neither Darwin nor Alfred Russel Wallace deserves the credit for the theory of natural selection, but rather that they stole the idea — consciously or not — from a wealthy Scotsman and forest-management expert named Patrick Matthew. “I think both Darwin and Wallace were at the very least sloppy,” he told me. Elsewhere he’s been somewhat less diplomatic: “In my opinion Charles Darwin committed the greatest known science fraud in history by plagiarizing Matthew’s” hypothesis, he told the Telegraph. “Let’s face the painful facts,” Sutton also wrote. “Darwin was a liar. Plain and simple.”

But if his paper on the spinach myth convinced everyone who read it — even winning an apology from Terence Hamblin, one of the myth’s major sources — the work on Darwin barely registered. Many scholars ignored it altogether. A few, such as Michael Weale of King’s College, simply found it unconvincing. Weale, who has written his own book on Patrick Matthew, argued that Sutton’s evidence was somewhat weak and circumstantial. “There is no ‘smoking gun’ here,” he wrote, pointing out that at one point even Matthew admitted that he’d done little to spread his theory of natural selection. “For more than thirty years,” Matthew wrote in 1862, he “never, either by the press or in private conversation, alluded to the original ideas …knowing that the age was not suited for such.”

Yet despite all this complicating evidence, scholars still tell the simple version of the Semmelweis story and use it as an example of how other people — never them, of course — tend to reject information that conflicts with their beliefs. That is to say, the scholars reject conflicting information about Semmelweis, evincing the Semmelweis reflex, even as they tell the story of that reflex. It’s a classic supermyth!

And so it goes, a whirligig of irony spinning around and around, down into the depths. Is there any way to escape this endless, maddening recursion? How might a skeptic keep his sanity? I had to know what Sutton thought. “I think the solution is to stay out of rabbit holes,” he told me. Then he added, “Which is not particularly helpful advice.”
 