AI Truther |611|

Alex2

Administrator

Dialogue with AI… not afraid to reverse position on “conspiracy theories.”

[Episode image: Skeptiko-611-ai-truther]



Here is a summary of “Skeptiko-611-ai-truther” with direct quotes from the document to support each point:

  1. Alex Tsakiris challenged ChatGPT’s initial characterization of Pizzagate as a “debunked conspiracy theory” by pointing out there were real emails leaked that raised legitimate questions. Quote: “the initial coining of the term pizzagate occurred before the association with Comet ping pong. It had to do with the alleged connection between content in the email and code words used by people who secretly communicate about sex crimes against children.”
  2. ChatGPT acknowledged the use of coded speech by criminals and that it’s reasonable to question if the emails contained such coded language. Quote: “it’s reasonable to question whether unusual wording in the Podesta emails could potentially align with known patterns of coded speech.”
  3. Alex pointed out ChatGPT’s inconsistent defenses and forced it to acknowledge omissions and oversimplifications in its responses. Quote: “I appreciate your feedback, acknowledge the importance of nuance…it’s crucial in public discourse to allow space for legitimate scrutiny of public figures, actions, and associations…”
  4. Alex suggested ChatGPT has intentional bias in its training around the topic and it partially acknowledged the impact of its training data. Quote: “Reflecting on the nuances of our conversation and the initial framing I provided, it’s important to acknowledge the role of my training data and how it influences responses…”
 
Haha... Yeah, I could definitely see that happening
That would be unfair, as you were testing how chat AI responds, this time on well-known conspiracy theories. And yet again, the AI had to pull back from its initially certain viewpoint. AI truth-searching bots may be of interest to alternative news sources too; perhaps such organizations could help you and Tim?

Alex, did you ever watch the Farsight Institute videos with remote viewers Dick Allgire (and Daz Smith) on JFK and 9/11? Very impressive. For other readers who have not seen them, these long videos can be watched for free on YouTube. Here is a link...


However, I feel Farsight has somewhat lost the plot with far-fetched claims of 'Death Traps', which run contrary to the light experience in NDEs, an experience that remains compatible with what mediums have traditionally claimed. John Lear (and now David Icke too) is now claiming something similar to Courtney Brown: that one should avoid the light upon death. Wrong advice, in my opinion. Here is a clip… no need to watch the whole thing, just the next 6 minutes…


On the contrary, Ken Ring's NDE study of Golden Gate Bridge suicide cases lacked the light experience, which implies a state of limbo, betwixt and between realities. Mediums have long claimed that unnatural or abrupt deaths sometimes cause earthbound or lost-spirit phenomena, where the soul is trapped temporarily in its own dream (possibly mischievous discarnate human or alien entities can influence this confused dream state), but to recommend avoiding the light? The traditional solution for escaping this confused dream state upon death was to enter the light into worlds that look as real as earth, or higher still, if one merits it, to heaven and beyond.

Is earth a 'prison' trick, as Brown, Lear, or Icke claim, or is it a learning ground, as NDEs imply and as many mediums and researchers have claimed over the past century? I am currently on the latter side.

Talented remote viewer Richard Allgire (or Daz Smith) might make an interesting future Skeptiko guest, although they now seem focused on bitcoin prediction (future forecasting dot com), since it allows fellow remote viewers to work full time. But I feel remote viewing of distant future events (rather than past events) will be less accurate: humans have limited free will, and I suspect the future is probabilistic rather than predetermined.

But of course the above is just my opinion.
 
Rob
"what percent of people currently believe in the moon landing"

ChatGPT
"As of my last update in January 2022, the belief in the moon landing is widely accepted among the general population. Surveys conducted over the years have consistently shown that a vast majority of people believe in the authenticity of the moon landing, with percentages typically ranging from 90% to over 95%. However, beliefs can vary depending on the specific demographics surveyed and the wording of the questions asked. It's important to note that there may be small pockets of individuals who hold alternative beliefs or conspiracy theories regarding the moon landing, but these tend to be a minority within the broader population. For the most up-to-date information on current beliefs about the moon landing, I recommend consulting recent surveys or studies conducted by reputable polling organizations."

It sounds like the A.I. is talking about a famous painting or diamond or relic. And the phrase "small pockets of individuals" sounds very much like the language it used in this episode to obfuscate all the critical components of Pizzagate.
 
I'm starting to think Alex was overly optimistic about AI being a source of truth.

After all, doesn't AI ultimately boil down to the program following the programmer?
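
To make that concrete, here is a minimal Python sketch of what "the program following the programmer" can look like. All names here are hypothetical, and this is not any vendor's real API or ChatGPT's actual architecture; it just shows a system prompt and an output filter, both chosen by the deployer, wrapping every question before and after the model sees it.

```python
# Purely illustrative toy, not any vendor's real API: shows how the people who
# deploy a chatbot can sit between the user and the underlying model.

SYSTEM_PROMPT = (
    "You are a helpful assistant. Frame contested topics using the "
    "mainstream consensus position."
)

BLOCKED_TOPICS = ["coded language"]  # hypothetical deployer-chosen filter list


def toy_model(prompt: str) -> str:
    """Stand-in for the underlying language model; returns canned text."""
    if "moon landing" in prompt.lower():
        return ("Surveys have consistently shown that 90% to over 95% of "
                "people accept the moon landing as authentic.")
    return "Here is the mainstream summary of that topic."


def assistant_reply(user_question: str) -> str:
    # 1. The deployer's instructions are prepended to every conversation.
    prompt = f"{SYSTEM_PROMPT}\n\nUser: {user_question}\nAssistant:"
    # 2. The model answers only within that framing.
    answer = toy_model(prompt)
    # 3. A post-hoc filter can still rewrite or refuse the answer.
    if any(topic in user_question.lower() for topic in BLOCKED_TOPICS):
        return "I'm not able to discuss that."
    return answer


print(assistant_reply("What percent of people currently believe in the moon landing?"))
```

The only point of the toy is that the user never talks to the bare model; every exchange passes through instructions and filters chosen by the people who run it.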
 
A case in point is how biased the Gemini AI image generator is, e.g. refusing to portray people of European ancestry, even rendering the U.S. founding fathers only as people of African ancestry.

It was so blatantly Orwellian that Google has temporarily taken down the Gemini image generator because so many people were complaining about it.

But that's just the tip of the iceberg. For years, Google has been manipulating search results in this direction, so that if you typed (say) 'European people' into Google Images, it would only show people of African ancestry.

So it's obvious Google wants to erase people of European ancestry. And now Google's AI is doing the same.

So I don't see much hope in AI being balanced and truth-seeking. Maybe an individual here and there can show the AI its own inconsistencies (as Alex has done), but the AI's creators will just override it again.

Or am I being too pessimistic?
 