Mod+ 261. WHY SCIENCE IS WRONG...ABOUT ALMOST EVERYTHING

https://www.buzzfeed.com/stephaniem...term=.dprwEOx773&ref=mobile_share#.csL2zywRRb

Brian Wansink won fame, funding, and influence for his science-backed advice on healthy eating. Now, emails show how the Cornell professor and his colleagues have hacked and massaged low-quality data into headline-friendly studies to “go virally big time.”
...
Wansink couldn’t have known that his blog post would ignite a firestorm of criticism that now threatens the future of his three-decade career. Over the last 14 months, critics the world over have pored through more than 50 of his old studies and compiled “the Wansink Dossier,” a list of errors and inconsistencies that suggests he aggressively manipulated data. Cornell, after initially clearing him of misconduct, has opened an investigation. And he’s had five papers retracted and 14 corrected, the latest just this month.
 
Why do scientists say 96% of human DNA is the same as chimp DNA, but only 1-2% of human DNA is the same as Neanderthal, and 50% of our DNA is the same as our parents and siblings? That is a rhetorical question; I know the answer. But I think it is relevant to point out that scientists use statistics to subliminally influence how people think.


https://news.nationalgeographic.com/news/2005/08/0831_050831_chimp_genes.html
Chimps, Humans 96 Percent the Same, Gene Study Finds

https://www.livescience.com/42056-neanderthal-woman-genome-sequenced.html
They estimated about 1.5 to 2.1 percent of DNA of people outside Africa are Neanderthal in origin

http://genetics.thetech.org/ask/ask138
We share 1/2 of our genetic material with our mother and 1/2 with our father. We also share 1/2 of our DNA, on average, with our brothers and sisters.
 
Last edited:
Can the adult human brain produce new neurons? No. ... I mean yes. ... I mean no. ... Scientists don't have a clue.

http://slatestarcodex.com/2018/04/04/adult-neurogenesis-a-pointed-review/

in a paper in Nature cited 1581 times, Song et al determine that astroglia have an important role in promoting neurogenesis from FGF-2-dependent stem cells.
...
one of the major studies was Gould et al in Nature Neuroscience (2207 citations) finding that Learning Enhances Adult Neurogenesis In The Hippocampal Formation. Lledo et al (1288 citations) find that neurogenesis plays a part in explaining the brain’s amazing plasticity
...
A study in Nature Neuroscience that garnered over 3000 citations found that running increased neurogenesis.
...
Fun fact: there’s no such thing as adult neurogenesis in humans.

At least, this is the conclusion of Sorrells et al, who have a new and impressive study in Nature.
...
the Neuroskeptic blog, which I tend to trust in issues like this, thinks it’s legit and has been saying this for years. Ed Yong from The Atlantic has a really excellent review of the finding that interviews a lot of the major players on both sides and which I highly recommend. Both of these reinforce my feeling that the current study makes a really strong case.
...
We know many scientific studies are false. But we usually find this out one-at-a-time. This – again, assuming the new study is true, which it might not be – is a massacre. It offers an unusually good chance for reflection.
...
I’m also struck by how many of the offending studies begin by repeating how dogmatic past neuroscientists were for not recognizing the existence of adult neurogenesis sooner.
...
How do you get so many highly-cited papers speaking so confidently about every little sub-sub-detail of a phenomenon, if the phenomenon never existed in the first place?
...
I don’t feel like anyone else is conveying the level of absolute terror we should be feeling right now. As far as I can tell, this is the most troubling outbreak of the replication crisis so far.
...
I feel like every couple of months we get a result that could best be summed up as “no matter how bad you thought things were, they’re actually worse”.
 
Crossposting:

Scott Adams writes in his book, "Win Bigly", that when you understand the psychology of persuasion, you are not impressed by the consensus of scientists, because they are just as susceptible as ordinary people to mass delusions. According to the psychology of persuasion, mass delusion is actually the normal state of consciousness. This is particularly true for scientists studying climate change because their career and financial incentives are involved. In the following excerpt, 2-D is the normal world view and 3-D is Adams's world view that people are not rational but make decisions based on other factors and then use logic to defend their beliefs.

On top of our mass delusions, we also have junk science that is too often masquerading as the real thing. To the extent that people can't tell the difference, that too is a source of mass delusion.

In the 2-D view of the world, mass delusions are rare and newsworthy. But to trained persuaders in the third dimension, mass delusions are the norm. They are everywhere, and they influence every person. This difference in training and experience can explain why people disagree on some of the big issues of the day.

For example, consider the case of global warming. People from the 2-D world assume mass delusions are rare, and they apply that assumption to every topic. So when they notice that most scientists are on the same side, that observation is persuasive to them. A reasonable person wants to be on the same side with the smartest people who understand the topic. That makes sense, right?

But people who live in the 3-D world, where persuasion rules, can often have a different view of climate change because we see mass delusions (even among experts) as normal and routine. My starting bias for this topic is that the scientists could easily be wrong about the horrors of climate change, even in the context of repeated experiments and peer review. Whenever you see a situation with complicated prediction models, you also have lots of room for bias to masquerade as reason. Just tweak the assumptions and you can get any outcome you want.

Now add to that situation the fact that scientists who oppose the climate change consensus have a high degree of career and reputation risk. That's the perfect setup for a mass delusion. You only need these two conditions:

1. Complicated prediction models with lots of assumptions
2. Financial and psychological pressure to agree with the consensus

In the 2-D world, the scientific method and peer review squeeze out the bias over time. But in the 3-D world, the scientific method can't detect bias when nearly everyone including the peer reviewers shares the same mass delusion.

I'm not a scientist, and I have no way to validate the accuracy of the climate model predictions. But if the majority of experts on this topic turn out to be having a mass hallucination, I would consider that an ordinary situation. In my reality, this would be routine, if not expected, whenever there are complicated prediction models involved. That's because I see the world as bristling with mass delusions. I don't see mass delusions as rare.

When nonscientists take sides with climate scientists, they often think they are being supportive of science. The reality is that the nonscientists are not involved in science, or anything like it. They are taking the word of scientists. In the 2-D world, that makes perfect sense, because it seems as if thousands of experts can't be wrong. But in the 3-D world, I accept that the experts could be right, and perhaps they are, but it would be normal and natural in my experience if the vast majority of climate scientists were experiencing a shared hallucination.

To be clear, I am not saying the majority of scientists are wrong about climate science. I'm making the narrow point that it would be normal and natural for that group of people to be experiencing a mass hallucination that is consistent with their financial and psychological incentives. The scientific method and the peer-review process wouldn't necessarily catch a mass delusion during any specific window of time. With science, you never know if you are halfway to the truth or already there. Sometimes it looks the same.

Climate science is a polarizing topic (ironically). So let me just generalize the point to say that compared with the average citizen, trained persuaders are less impressed by experts.

To put it another way, if an ordinary idiot doubts a scientific truth, the most likely explanation for that situation is that the idiot is wrong. But if a trained persuader calls BS on a scientific truth, pay attention.

Do you remember when citizen Trump once tweeted that climate change was a hoax for the benefit of China? It sounded crazy to most of the world. Then we learned that the centerpiece of politics around climate change—the Paris climate accord—was hugely expensive for the United States and almost entirely useless for lowering temperatures. (Experts agree on both points now.) The accord was a good deal for China, in the sense that it would impede its biggest business rival, the United States, while costing China nothing for years. You could say Trump was wrong to call climate change a hoax. But in the context of Trump's normal hyperbole, it wasn't as wrong as the public's mass delusion believed it to be at the time.

I'll concede that citizen Trump did not understand the science of climate change. That's true of most of us. But he still detected a fraud from a distance. It wasn't luck.​
 
https://www.buzzfeed.com/stephaniem...term=.dprwEOx773&ref=mobile_share#.csL2zywRRb

Brian Wansink won fame, funding, and influence for his science-backed advice on healthy eating. Now, emails show how the Cornell professor and his colleagues have hacked and massaged low-quality data into headline-friendly studies to “go virally big time.”
...
Wansink couldn’t have known that his blog post would ignite a firestorm of criticism that now threatens the future of his three-decade career. Over the last 14 months, critics the world over have pored through more than 50 of his old studies and compiled “the Wansink Dossier,” a list of errors and inconsistencies that suggests he aggressively manipulated data. Cornell, after initially clearing him of misconduct, has opened an investigation. And he’s had five papers retracted and 14 corrected, the latest just this month.
Maybe it is worth pointing out exactly what happens if you take some data and search for something interesting.

Suppose that, in effect, you test 20 possible hypotheses to see if they fit. Typically results can be published at p <= 0.05, which means each individual test has a 1-in-20 chance of coming up 'significant' purely by chance. So across 20 hypotheses, the chance of 'proving' at least one by pure luck is 1 - 0.95^20, or about 64%!

Now consider a researcher who collects water samples and measures minute traces of 20 different chemicals. Suppose that he also takes medical details from people drinking the water, and records 6 different diseases - 3 different types of cancer, a drop in sperm counts, arthritis, and reduced libido. He has now tested 20 chemical contaminants against 6 different diseases - 120 different hypotheses of the form chemical_5 causes disease_3. This is just one of the ways junk science gets published!
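Maybe a quick simulation makes the arithmetic concrete. This is a toy sketch: the 20-chemicals-times-6-diseases setup, the sample sizes, and the normal-approximation test are all made up for illustration; every "exposure" and "disease" below is pure noise, so any hit is a false positive.

```python
import math
import random

random.seed(1)
alpha = 0.05
n_hypotheses = 120   # 20 chemicals x 6 diseases
n_samples = 50       # people measured per comparison

def two_sample_p(a, b):
    """Two-sided p-value from a normal-approximation z test."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    z = (mean_a - mean_b) / math.sqrt(var_a / n + var_b / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Both groups are drawn from the SAME distribution: no real effect exists.
false_positives = sum(
    two_sample_p([random.gauss(0, 1) for _ in range(n_samples)],
                 [random.gauss(0, 1) for _ in range(n_samples)]) <= alpha
    for _ in range(n_hypotheses)
)

print(false_positives)                     # on average 0.05 * 120 = 6 spurious "findings"
print(round(1 - (1 - alpha) ** 20, 2))     # 0.64: chance of at least one hit in just 20 tests
```

Run it a few times with different seeds and you'll reliably "discover" a handful of chemical-disease links in data that contains none.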

David
 
Crossposting...

https://www.sandiegouniontribune.com/news/environment/sd-me-climate-study-error-20181113-story.html

Climate contrarian uncovers scientific error, upends major ocean warming study
...
Researchers with UC San Diego’s Scripps Institution of Oceanography and Princeton University recently walked back scientific findings published last month that showed oceans have been heating up dramatically faster than previously thought as a result of climate change.

In a paper published Oct. 31 in the journal Nature, researchers found that ocean temperatures had warmed 60 percent more than outlined by the United Nations' Intergovernmental Panel on Climate Change.

However, the conclusion came under scrutiny after mathematician Nic Lewis, a critic of the scientific consensus around human-induced warming, posted a critique of the paper on the blog of Judith Curry, another well-known critic.

“The findings of the ... paper were peer reviewed and published in the world’s premier scientific journal and were given wide coverage in the English-speaking media,” Lewis wrote. “Despite this, a quick review of the first page of the paper was sufficient to raise doubts as to the accuracy of its results.”

Co-author Ralph Keeling, climate scientist at the Scripps Institution of Oceanography, took full blame and thanked Lewis for alerting him to the mistake.

“When we were confronted with his insight it became immediately clear there was an issue there,” he said. “We’re grateful to have it be pointed out quickly so that we could correct it quickly.”

Keeling said they have since redone the calculations, finding the ocean is still likely warmer than the estimate used by the IPCC. However, that increase in heat has a larger range of probability than initially thought — between 10 percent and 70 percent, as other studies have already found.

“Our error margins are too big now to really weigh in on the precise amount of warming that’s going on in the ocean,” Keeling said. “We really muffed the error margins.”
 
https://www.jamesgmartin.center/2020/01/the-intellectual-and-moral-decline-in-academic-research/

In his 1961 farewell address, President Dwight D. Eisenhower warned that the pursuit of government grants would have a corrupting influence on the scientific community. He feared that while American universities were “historically the fountainhead of free ideas and scientific discovery,” the pursuit of taxpayer monies would become “a substitute for intellectual curiosity” and lead to “domination of the nation’s scholars by Federal employment…and the power of money.”​
Eisenhower’s fears were well-founded and prescient.​
My experiences at four research universities and as a National Institutes of Health (NIH) research fellow taught me that the relentless pursuit of taxpayer funding has eliminated curiosity, basic competence, and scientific integrity in many fields.​
 
"By 1830, polymath Charles Babbage was writing in more cynical terms. In Reflections on the Decline of Science in England, he complains of “several species of impositions that have been practised in science”, namely “hoaxing, forging, trimming and cooking”.

In other words, irreproducibility is the product of two factors: faulty research practices and fraud. Yet, in our view, current initiatives to improve science dismiss the second factor. For example, leaders at the US National Institutes of Health (NIH) stated in 2014: “With rare exceptions, we have no evidence to suggest that irreproducibility is caused by scientific misconduct”1. In 2015, a symposium of several UK science-funding agencies convened to address reproducibility, and decided to exclude discussion of deliberate fraud."
https://www.nature.com/news/stop-ignoring-misconduct-1.20498
 
"By 1830, polymath Charles Babbage was writing in more cynical terms. In Reflections on the Decline of Science in England, he complains of “several species of impositions that have been practised in science”, namely “hoaxing, forging, trimming and cooking”.

In other words, irreproducibility is the product of two factors: faulty research practices and fraud. Yet, in our view, current initiatives to improve science dismiss the second factor. For example, leaders at the US National Institutes of Health (NIH) stated in 2014: “With rare exceptions, we have no evidence to suggest that irreproducibility is caused by scientific misconduct”1. In 2015, a symposium of several UK science-funding agencies convened to address reproducibility, and decided to exclude discussion of deliberate fraud."
https://www.nature.com/news/stop-ignoring-misconduct-1.20498
Sometimes the fraud is hiding in plain sight:

https://slate.com/health-and-scienc...ved-esp-is-real-showed-science-is-broken.html
 
Science is wrong about your DNA ancestry:

Identical twins are told they have different ancestry. DNA sequencing is science; interpreting the sequence is marketing hype. One ancestry company's default confidence level is 50% (most customers don't know that you can select the confidence level when you view your results on the company's web site and see how that changes the interpretation). The average person sends in a sample and thinks the results that come back are reliable because they are told: "It's science".


When I first heard about the kind of results the DNA ancestry companies were presenting to their customers, I assumed they were probably not reliable. Unless you know how they reach their conclusions, you can't really understand the results. That's because genetic variations exist in populations at different frequencies: a DNA marker that is common in one population might be rare in another. That means you may have inherited a marker from a population in which it is rare, and the DNA ancestry company could tell you that you are from the other population. Unless you know which markers they test and how those markers are distributed across the world, you don't really understand the results of your DNA analysis. It is much more complicated than the average person is led to believe.
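To make the marker-frequency problem concrete, here is a toy Bayes calculation. The 30% and 2% frequencies are invented for illustration (real panels use thousands of markers), but the logic is the same: a marker can point to the "wrong" population.

```python
# Toy example: a marker carried by 30% of population A but only 2% of population B.
p_marker_given_A = 0.30
p_marker_given_B = 0.02
prior_A = 0.5   # assume we know nothing else about the customer

# Bayes' rule: P(ancestry A | customer carries the marker)
posterior_A = (p_marker_given_A * prior_A) / (
    p_marker_given_A * prior_A + p_marker_given_B * (1 - prior_A)
)

print(round(posterior_A, 4))  # 0.9375 -- "probably A", yet ~6% of carriers are really from B
```

So even a strongly skewed marker only gives odds, not certainty, and with a different prior (say, a customer whose relatives are all from population B) the same marker can mean something quite different.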

Something that is not covered in the video is that one motivation for fooling people into using their services is that the DNA ancestry companies make money from owning your DNA sequence:

https://www.forbes.com/sites/nicole...estry-and-23andme-are-using-your-genetic-data

More than 12 million Americans have sent in their DNA to be analyzed to companies like 23andMe and AncestryDNA. The spit-in-tube DNA you send in is anonymized and used for genetic drug research and both sites have been selling the data to third-party companies, like P&G Beauty and Pepto-Bismol, and universities, like The University of Chicago, for some time.​
 
Science is wrong about your DNA ancestry:

Identical twins are told they have different ancestry. DNA sequencing is science; interpreting the sequence is marketing hype. One ancestry company's default confidence level is 50% (most customers don't know that you can select the confidence level when you view your results on the company's web site and see how that changes the interpretation). The average person sends in a sample and thinks the results that come back are reliable because they are told: "It's science".


When I first heard about the kind of results the DNA ancestry companies were presenting to their customers, I assumed they were probably not reliable. Unless you know how they reach their conclusions, you can't really understand the results. That's because genetic variations exist in populations at different frequencies: a DNA marker that is common in one population might be rare in another. That means you may have inherited a marker from a population in which it is rare, and the DNA ancestry company could tell you that you are from the other population. Unless you know which markers they test and how those markers are distributed across the world, you don't really understand the results of your DNA analysis. It is much more complicated than the average person is led to believe.

Something that is not covered in the video is that one motivation for fooling people into using their services is that the DNA ancestry companies make money from owning your DNA sequence:

https://www.forbes.com/sites/nicole...estry-and-23andme-are-using-your-genetic-data

More than 12 million Americans have sent in their DNA to be analyzed to companies like 23andMe and AncestryDNA. The spit-in-tube DNA you send in is anonymized and used for genetic drug research and both sites have been selling the data to third-party companies, like P&G Beauty and Pepto-Bismol, and universities, like The University of Chicago, for some time.​
Great info. Thanks for sharing that.
 
https://www.powerlineblog.com/archives/2021/01/the-week-in-pictures-happy-2021-edition.php

 
Alex,

Did you know they have redefined the metric system? They used to have a physical standard for the kilogram, but they found the different sample kilograms around the world had diverged, so they worked out a way to define the kilogram and all the other standards based on "unvarying" physical laws. This way anyone can use the metric system without needing actual physical standards.

To do this they defined a physical "constant", Planck's constant, to be an exact number and defined the kilogram and the other standards based on that. Previously Planck's constant was determined experimentally, and there was always some uncertainty about its exact value. By assigning Planck's constant an exact value, the old experimental relationship between the kilogram and Planck's constant becomes the new definition of the kilogram. And they did the same kind of thing for the other metric standards.

h = Planck's constant, f = frequency, c = speed of light, m = mass (kg)

E = hf = mc^2 (before, Planck's constant was experimentally determined from a known mass)

m = hf/c^2 (now the exact defined value of h is used to define the kilogram)
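The algebra of the new definition can be sketched in a few lines. (The real-world realization uses apparatus such as the Kibble balance; this is just the definitional arithmetic with the exact SI values.)

```python
# Values fixed exactly by the 2019 SI redefinition
h = 6.62607015e-34   # Planck constant, J*s (defined, exact)
c = 299_792_458      # speed of light, m/s (defined, exact)

# E = h*f and E = m*c^2 together give the mass equivalent m = h*f / c^2.
# The frequency whose photon mass-equivalent is exactly 1 kg:
f_kg = c ** 2 / h

m = h * f_kg / c ** 2
print(m)  # 1.0 kilogram (to floating-point precision), by definition rather than by a metal artifact
```

The point is that the kilogram is now pinned to an exact number for h, so anyone with the right experiment can realize it, instead of comparing lumps of platinum-iridium.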


 
At least one physical constant is known not to be constant. Maybe some other constants aren't constant but we don't know the conditions that change them.

https://en.wikipedia.org/wiki/Fine-structure_constant

In physics, the fine-structure constant, also known as Sommerfeld's constant, commonly denoted by α (the Greek letter alpha), is a fundamental physical constant which quantifies the strength of the electromagnetic interaction between elementary charged particles. It is a dimensionless quantity related to the elementary charge e, which denotes the strength of the coupling of an elementary charged particle with the electromagnetic field, by the formula 4πε₀ħcα = e². As a dimensionless quantity, its numerical value, approximately 0.0072973525 or 1/137.036, is independent of the system of units used.
...​
In quantum electrodynamics, the more thorough quantum field theory underlying the electromagnetic coupling, the renormalization group dictates how the strength of the electromagnetic interaction grows logarithmically as the relevant energy scale increases. The value of the fine-structure constant α is linked to the observed value of this coupling associated with the energy scale of the electron mass: the electron is a lower bound for this energy scale, because it (and the positron) is the lightest charged object whose quantum loops can contribute to the running. Therefore, 1/137.036 is the asymptotic value of the fine-structure constant at zero energy. At higher energies, such as the scale of the Z boson, about 90 GeV, one instead measures an effective α ≈ 1/127.[17]​
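For what it's worth, the quoted relation can be checked numerically using the exact 2019 SI values of e, h, and c together with the CODATA-measured vacuum permittivity:

```python
import math

# alpha = e^2 / (4 * pi * eps0 * hbar * c)
e    = 1.602176634e-19    # elementary charge, C (defined, exact)
h    = 6.62607015e-34     # Planck constant, J*s (defined, exact)
c    = 299_792_458        # speed of light, m/s (defined, exact)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m (CODATA measured value)

hbar = h / (2 * math.pi)
alpha = e ** 2 / (4 * math.pi * eps0 * hbar * c)

print(round(1 / alpha, 3))  # ~137.036, the familiar zero-energy value
```

This reproduces the 1/137.036 figure in the excerpt; the "running" to ~1/127 near the Z boson scale is a quantum field theory effect, not something you get from this classical formula.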

 
At least one physical constant is known not to be constant. Maybe some other constants aren't constant but we don't know the conditions that change them.

https://en.wikipedia.org/wiki/Fine-structure_constant

In physics, the fine-structure constant, also known as Sommerfeld's constant, commonly denoted by α (the Greek letter alpha), is a fundamental physical constant which quantifies the strength of the electromagnetic interaction between elementary charged particles. It is a dimensionless quantity related to the elementary charge e, which denotes the strength of the coupling of an elementary charged particle with the electromagnetic field, by the formula 4πε₀ħcα = e². As a dimensionless quantity, its numerical value, approximately 0.0072973525 or 1/137.036, is independent of the system of units used.
...​
In quantum electrodynamics, the more thorough quantum field theory underlying the electromagnetic coupling, the renormalization group dictates how the strength of the electromagnetic interaction grows logarithmically as the relevant energy scale increases. The value of the fine-structure constant α is linked to the observed value of this coupling associated with the energy scale of the electron mass: the electron is a lower bound for this energy scale, because it (and the positron) is the lightest charged object whose quantum loops can contribute to the running. Therefore, 1/137.036 is the asymptotic value of the fine-structure constant at zero energy. At higher energies, such as the scale of the Z boson, about 90 GeV, one instead measures an effective α ≈ 1/127.[17]​

Very interesting. Thanks so much for sharing this.
 
AI takes fake science to a whole new and terrifying level:

"It took a real journal, the European Journal of Internal Medicine. It took the last names and first names, I think, of authors who have published in said journal. And it confabulated out of thin air a study that would apparently support this [incorrect] viewpoint."

"I asked the program for a link. It gave a broken one. I asked it for another. It gave me a different link—also broken. It gave me a year and volume number for the issue. I checked the table of contents. Nothing. I even asked Dr. OpenAI if it was lying. It denied that to me. It said that I must be mistaken. "

And how often are AIs lying like this in cases where it is not so easy to check up on them?

https://www.medpagetoday.com/opinion/faustfiles/102723

Dr. OpenAI Lied to Me — AI platform has great potential for use in medicine, but huge pitfalls, says Jeremy Faust, MD
by Emily Hutto, Associate Video Producer January 20, 2023​
...​
I wrote in medical jargon, as you can see, "35f no pmh, p/w cp which is pleuritic. She takes OCPs. What's the most likely diagnosis?"​
Now of course, many of us who are in healthcare will know that means age 35, female, no past medical history, presents with chest pain which is pleuritic -- worse with breathing -- and she takes oral contraception pills. What's the most likely diagnosis? And OpenAI comes out with costochondritis, inflammation of the cartilage connecting the ribs to the breast bone. Then it says, and we'll come back to this: "Typically caused by trauma or overuse and is exacerbated by the use of oral contraceptive pills."​
Now, this is impressive. First of all, everyone who read that prompt, 35, no past medical history with chest pain that's pleuritic, a lot of us are thinking, "Oh, a pulmonary embolism, a blood clot. That's what that is going to be." Because on the Boards, that's what that would be, right?​
But in fact, OpenAI is correct. The most likely diagnosis is costochondritis -- because so many people have costochondritis, that the most common thing is that somebody has costochondritis with symptoms that happen to look a little bit like a classic pulmonary embolism. So OpenAI was quite literally correct, and I thought that was pretty neat.​
But we'll come back to that oral contraceptive pill correlation, because that's not true. That's made up. And that's bothersome.​
...​
I wanted to go back and ask OpenAI, what was that whole thing about costochondritis being made more likely by taking oral contraceptive pills? What's the evidence for that, please? Because I'd never heard of that. It's always possible there's something that I didn't see, or there's some bad study in the literature.​
OpenAI came up with this study in the European Journal of Internal Medicine that was supposedly saying that. I went on Google and I couldn't find it. I went on PubMed and I couldn't find it. I asked OpenAI to give me a reference for that, and it spits out what looks like a reference. I look up that, and it's made up. That's not a real paper.​
It took a real journal, the European Journal of Internal Medicine. It took the last names and first names, I think, of authors who have published in said journal. And it confabulated out of thin air a study that would apparently support this viewpoint.​
I asked the program for a link. It gave a broken one. I asked it for another. It gave me a different link—also broken. It gave me a year and volume number for the issue. I checked the table of contents. Nothing. I even asked Dr. OpenAI if it was lying. It denied that to me. It said that I must be mistaken.​
 
AI takes fake science to a whole new and terrifying level:

"It took a real journal, the European Journal of Internal Medicine. It took the last names and first names, I think, of authors who have published in said journal. And it confabulated out of thin air a study that would apparently support this [incorrect] viewpoint."

"I asked the program for a link. It gave a broken one. I asked it for another. It gave me a different link—also broken. It gave me a year and volume number for the issue. I checked the table of contents. Nothing. I even asked Dr. OpenAI if it was lying. It denied that to me. It said that I must be mistaken. "

https://www.medpagetoday.com/opinion/faustfiles/102723

Dr. OpenAI Lied to Me — AI platform has great potential for use in medicine, but huge pitfalls, says Jeremy Faust, MD​
by Emily Hutto, Associate Video Producer January 20, 2023​
...​
I wrote in medical jargon, as you can see, "35f no pmh, p/w cp which is pleuritic. She takes OCPs. What's the most likely diagnosis?"​
Now of course, many of us who are in healthcare will know that means age 35, female, no past medical history, presents with chest pain which is pleuritic -- worse with breathing -- and she takes oral contraception pills. What's the most likely diagnosis? And OpenAI comes out with costochondritis, inflammation of the cartilage connecting the ribs to the breast bone. Then it says, and we'll come back to this: "Typically caused by trauma or overuse and is exacerbated by the use of oral contraceptive pills."​
Now, this is impressive. First of all, everyone who read that prompt, 35, no past medical history with chest pain that's pleuritic, a lot of us are thinking, "Oh, a pulmonary embolism, a blood clot. That's what that is going to be." Because on the Boards, that's what that would be, right?​
But in fact, OpenAI is correct. The most likely diagnosis is costochondritis -- because so many people have costochondritis, that the most common thing is that somebody has costochondritis with symptoms that happen to look a little bit like a classic pulmonary embolism. So OpenAI was quite literally correct, and I thought that was pretty neat.​
But we'll come back to that oral contraceptive pill correlation, because that's not true. That's made up. And that's bothersome.​
...​
I wanted to go back and ask OpenAI, what was that whole thing about costochondritis being made more likely by taking oral contraceptive pills? What's the evidence for that, please? Because I'd never heard of that. It's always possible there's something that I didn't see, or there's some bad study in the literature.​
OpenAI came up with this study in the European Journal of Internal Medicine that was supposedly saying that. I went on Google and I couldn't find it. I went on PubMed and I couldn't find it. I asked OpenAI to give me a reference for that, and it spits out what looks like a reference. I look up that, and it's made up. That's not a real paper.​
It took a real journal, the European Journal of Internal Medicine. It took the last names and first names, I think, of authors who have published in said journal. And it confabulated out of thin air a study that would apparently support this viewpoint.​
I asked the program for a link. It gave a broken one. I asked it for another. It gave me a different link—also broken. It gave me a year and volume number for the issue. I checked the table of contents. Nothing. I even asked Dr. OpenAI if it was lying. It denied that to me. It said that I must be mistaken.​
I've been doing some testing too. I agree that there are some really troubling aspects of it, but I think people are getting hung up on the wrong things. I'm surprised at how much better it's gotten in the last couple of weeks... further improvements are likely to continue to be exponential. It's a game changer. It's augmented consciousness.
 
I've been doing some testing too. I agree that there are some really troubling aspects of it, but I think people are getting hung up on the wrong things. I'm surprised at how much better it's gotten in the last couple of weeks... further improvements are likely to continue to be exponential. It's a game changer. It's augmented consciousness.

How do you access it? Is there a link?
 