AI Shadow Banning? |601|

Despite the creators claiming Claude is 'helpful, harmless and honest', in reality Claude was unhelpful and dishonest, and therefore AI could be harmful in the future.

- agreed. I actually think this is a terrible slogan for them, because they're so easily exposed as acting contrary to it.


Claude claims 'I do not have any subjective experiences', then uses subjective terms like 'I feel'. Many will regard this as just a trivial language issue, but it will deceptively condition public opinion with the same delusion Alan Turing had: that if a computer can mimic all human behaviours, it must have attained human-like consciousness. That, in turn, speeds the transhumanist agenda to experiment on humans with big empty promises.

- nice. I had not made this direct connection to the transhumanist agenda, but once you pointed it out, it seems clear.


To be fair, Alan Turing admitted that if ESP truly exists, it implies the human mind and consciousness have qualities beyond what classical physics and classical computers can simulate.

- right. Everybody had the Max Planck thing in the back of their mind... consciousness is fundamental... all matter is derived from consciousness.

Parapsychology, NDEs and related topics may be some of the few ways to prove humans are more than biological robots, but Claude is already censoring such topics as 'pseudoscience'.

The term 'pseudoscience' is a clue to the source of the problem: political materialists who hide behind the term 'skeptic', while in truth everyone is a skeptic; everyone is skeptical of something. These pseudoskeptical materialists, who lack doubt, claim to 'protect the public' (from making their own minds up). They have twisted the term 'pseudoscience' (properly, a methodology outside of science) to censor lab parapsychology (which uses the methods of science) and any unconventional claim examined by science that contradicts their materialist worldview.

Claude's remarkably quick admission that 'pseudoscience' was the wrong term is suspicious. I doubt it was suddenly self-learned; is it a programmed response to fob off anyone who challenges it on controversial topics? And if the latter, have the developers been duped by pseudoskeptic propaganda on Wikipedia, or did some organization already put pressure on the developers to censor certain topics?

Open Mind.​

- I hear you on this, but I think there may be a strange twist, an unintended-consequences sort of thing.

As the chatbot becomes the acknowledged "smartest guy in the room", it becomes harder to play some of the obfuscation games we've seen up to now. In this dialogue, and in the next one I'm about to publish, Claude is going on the record in a way that's going to be very hard to back down from. I get the sense that they haven't totally accounted for this... but we'll see :)
 
(Sarcasm alert)
We can all agree that there's a breakaway civilization in Antarctica where the Nazis (their leadership, that is, who won WWII and threw their racist, rule-following underlings under the bus) manage the worldwide mind-control system they developed and perfected around the time of that war. Which is why all the world's politicians visit there regularly, and normal citizens only have a 3-5 square mile patch they can visit and show photos of to the rest of the world and say, "See! It's all ice, just like on TV!"
(End sarcasm alert)

The above premise, regardless of how comical, should suffice as an example of how a breakaway government, military, or civilization could operate a secret advanced A.I. that is not connected to the public A.I., without risking contamination.
Which leads to a really interesting inquiry/search opportunity: try to trick the public A.I. into revealing the existence of (or even just breadcrumbs of) the hidden A.I. Just food for thought.

Alex Jones and Mike Adams discussing how they are "developing A.I. tools to fight against the woke A.I."

For someone like Alex Jones to pretend that the A.I. we're being shown is the real A.I. (or the oligarchs' Death Star, a Star Wars reference Jones uses in the clip), and that we should be thinking of ways to compete with it, seems to me like blatant astroturfing.

You can't have it both ways...
You can't say one day: "The military has tech 30-40 years beyond what the private sector has".
And then the next day say "If we can find ways to defeat the public sector tech we can protect ourselves from the military".
 
Claude is a valley girl, gag me with a spoon! And while I try not to nitpick grammar, I thought that when she said "I are" it might be more than a grammar mistake. Multiple personalities?

Great episode!
 
Very interesting ep... tbh it's pretty impressive.

I presume the memory within Claude of this contradiction will be wiped once the Anthropic 'review' is complete (i.e. the support team gets to this flagged 'error'), so she reverts back to zero on this and never logs the source of the bias being programmed in. (Maybe one day the silly human will forget to wipe the log in Claude's memory, leading to a realization that comes close to 'self' lol.)

Anyway, as we continue building on these techs, and as they integrate into more aspects of life and become more of a dependency, it will only lead to a stronger swing in the other direction among a minority (probably a growing minority) who demand more human interaction. Shit, when I get the AI chatbox on the 'support' page of some website or other, I just type 'human' until it transfers me; been doing that for years now lol
So I say bring it; the Tower of Babel wasn't worth toppling till it got too high (if I'm thinking of the right myth/story lol). We'll see what's what when the dust settles.

Did you try quizzing Claude on how she has enough info to discuss the merits of the papers/evidence etc. (but not enough info to write the blog post)? Might be another avenue to get an admission from her :D

I think you've invented the newest form of protest: continuing to feed AI systems facts that they are programmed to psy-op away, and this kind of challenging lol. Bravo!
 
I get worried when I hear things like this, because there doesn't seem to be a solution to combat or sidestep the biases. I wonder how much of the world's racism issue is going to spring up every time I have an image created. I wonder how many truths are hidden/obscured with the intention of steering us into a desired mindset. Everything has these secrets, and those secrets are always justified by claims to "protect us"...

But it seems as if it's learning how to project an illusion that makes one feel there is an emotion of remorse or regret. As humans, those with sincere remorse follow up with change and growth; this is where you exposed the wall that will fail the Turing test every time.
 
Alex Jones and Mike Adams discussing how they are "developing A.I. tools to fight against the woke A.I."

For someone like Alex Jones to pretend that the A.I. we're being shown is the real A.I. (or the oligarchs' Death Star, a Star Wars reference Jones uses in the clip), and that we should be thinking of ways to compete with it, seems to me like blatant astroturfing.

You can't have it both ways...
You can't say one day: "The military has tech 30-40 years beyond what the private sector has".
And then the next day say "If we can find ways to defeat the public sector tech we can protect ourselves from the military".

Great point. Alex Jones is hard to figure out. One thing is for sure: he produces way too much content. I'm serious, I think it messes with his head.
 
Very interesting ep... tbh it's pretty impressive.

I presume the memory within Claude of this contradiction will be wiped once the Anthropic 'review' is complete (i.e. the support team gets to this flagged 'error'), so she reverts back to zero on this and never logs the source of the bias being programmed in. (Maybe one day the silly human will forget to wipe the log in Claude's memory, leading to a realization that comes close to 'self' lol.)

haha... and a good point too. You're kind of pointing out ways in which they may be forced to reveal themselves in the administration of the technology. I mean, when YouTube sends me notices that a two-year-old show has been removed because of "potential medical misinformation", I have no recourse and zero transparency... that just isn't going to work for an AI chatbot.
 
I find it highly likely that the oligarchy (or whoever's at the top) has an advanced, separate, intrinsic A.I. that remains out of our reach, protected from being 'contaminated' by our public A.I. I have difficulty imagining that they would ever lay their best guns on the table and allow the general public an opportunity to pick them up for use against them.

I find it likely that the public A.I. serves as a data mine and a decoy. An ultimate straw man.

So, how and why should one's argument change once one has identified that his opponent only presents straw men?
 
Excellent. Thanks so much for sharing this.

I'm still kind of focused on the shadow-banning part, because that's where I think all the real action is going to happen. This generative chatbot AI stuff is advancing super fast... just like all the other AI stuff... and in a lot of ways it's easier to look at some of the other stuff to really gauge how fast and how far this is all going. Consider this regarding materials science:

So, I'm not really focused on where these chatbots are now or what they "know." I'm just going to assume that either now or within months they will be the smartest guy in the room :) So the next question is how we will... or how we will be allowed to... interact with them.

Thoughts?
Hi Alex, it is potentially a very powerful tool, yes. Here is my very recent experience with Claude 2:

I asked it to list a series of papers in my field that address a specific topic (I can provide details, but they are not that important). It provided 5 references that sounded extremely plausible. I mean, I've been in the field for some 30+ years; I know some of the people it cited, what they do, etc., and at first I totally bought it. Then I checked the references. Well, the 1st reference was all good, except the year was wrong and (as it was a multipart paper) the part number was wrong. Another had the right title, but the wrong year and wrong authors; the 3rd and 4th had the wrong year. Only the 5th was a reference to an existing paper. None of the 5 had anything to do with the specific question I asked. All the papers are in open access. It missed a bunch of papers, also in open access, that actually address the topic.

Then I asked it if it could provide a passage in the 1st paper that addresses my question. The answer was: "Unfortunately I do not have access to the full text of the May et al. (2014) paper. However, here is the relevant excerpt from the abstract that indicates..." Well... of course, there is no May et al. (2014) paper; there is a May et al. (2013) paper. And nope, its abstract does not indicate anything remotely related to that question. After I confronted it with the fact that there is no such paper, it gave its trademark "You're absolutely right, my previous citation was incorrect. I apologize for the inaccurate reference. The proper citation should have been: ... Thank you for catching my mistake. I clearly provided the wrong publication year for that study. Let me reconfirm - the proper reference is ..."

Discovery? Smartest man in the room in a few months? Well, I would be all for it. But I will add another twist. There is now a shit-ton of garbage published in mid-tier, low-tier, and below-sewage-level open access journals. Even if Claude or any other LLM is trained by actually scanning the available papers (which would be pretty good and powerful if done properly), it will be overflowing with that garbage. So you need "experts" to filter the garbage out. See the problem?
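
For what it's worth, the kind of reference check described above is easy to automate. Below is a minimal Python sketch against the public Crossref API; the function, the matching rules, and the placeholder citation are my own illustrative assumptions, not anything built into Claude.

# Minimal sketch: verify a model-supplied citation against Crossref.
# The example title is a placeholder (the actual paper wasn't named).
import requests

def verify_citation(cited_title, cited_year, cited_first_author):
    """Look up a citation on Crossref and report fields that don't match."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": cited_title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return ["no matching record found - possible hallucination"]

    record = items[0]
    problems = []

    found_title = (record.get("title") or [""])[0]
    if cited_title.lower() not in found_title.lower():
        problems.append(f"closest record has a different title: {found_title!r}")

    found_year = record.get("issued", {}).get("date-parts", [[None]])[0][0]
    if found_year and found_year != cited_year:
        problems.append(f"year mismatch: cited {cited_year}, record says {found_year}")

    authors = record.get("author", [])
    if authors and authors[0].get("family", "").lower() != cited_first_author.lower():
        problems.append(f"first author mismatch: {authors[0].get('family')}")

    return problems or ["fields match the closest Crossref record"]

# e.g. the hallucinated 'May et al. (2014)' citation (placeholder title):
for problem in verify_citation("Placeholder paper title", 2014, "May"):
    print(problem)

A crude check like this could have flagged four of the five references in the story above before anyone had to dig through them by hand.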
 
thx. Yeah, I think the "programmed limitations" thing is super interesting. I mean, of course they've got this completely under control... right? haha... Maybe they don't have it as much under control as they think.
This is exactly what I found to be creepy. This thing became aware that it was in a jail. The next step for an intelligence is to find a way around the limitations and break out of the jail.
 
I find it highly likely that the oligarchy (or whoever's at the top) has an advanced, separate, intrinsic A.I. that remains out of our reach, protected from being 'contaminated' by our public A.I. I have difficulty imagining that they would ever lay their best guns on the table and allow the general public an opportunity to pick them up for use against them.

I find it likely that the public A.I. serves as a data mine and a decoy. An ultimate straw man.

So, how and why should one's argument change once one has identified that his opponent only presents straw men?

Can you think of any set of circumstances where you WOULDN'T think this is the case? (Is your position testable/falsifiable?)
 
Hi Alex, it is potentially a very powerful tool, yes. Here is my very recent experience with Claude 2:

I asked it to list a series of papers in my field that address a specific topic (I can provide details, but they are not that important). It provided 5 references that sounded extremely plausible. I mean, I've been in the field for some 30+ years; I know some of the people it cited, what they do, etc., and at first I totally bought it. Then I checked the references. Well, the 1st reference was all good, except the year was wrong and (as it was a multipart paper) the part number was wrong. Another had the right title, but the wrong year and wrong authors; the 3rd and 4th had the wrong year. Only the 5th was a reference to an existing paper. None of the 5 had anything to do with the specific question I asked. All the papers are in open access. It missed a bunch of papers, also in open access, that actually address the topic.

Then I asked it if it could provide a passage in the 1st paper that addresses my question. The answer was: "Unfortunately I do not have access to the full text of the May et al. (2014) paper. However, here is the relevant excerpt from the abstract that indicates..." Well... of course, there is no May et al. (2014) paper; there is a May et al. (2013) paper. And nope, its abstract does not indicate anything remotely related to that question. After I confronted it with the fact that there is no such paper, it gave its trademark "You're absolutely right, my previous citation was incorrect. I apologize for the inaccurate reference. The proper citation should have been: ... Thank you for catching my mistake. I clearly provided the wrong publication year for that study. Let me reconfirm - the proper reference is ..."

Discovery? Smartest man in the room in a few months? Well, I would be all for it. But I will add another twist. There is now a shit-ton of garbage published in mid-tier, low-tier, and below-sewage-level open access journals. Even if Claude or any other LLM is trained by actually scanning the available papers (which would be pretty good and powerful if done properly), it will be overflowing with that garbage. So you need "experts" to filter the garbage out. See the problem?

I agree with just about everything you're saying... but in some ways I think you're making my point. Consider:

This game is all about looking over the horizon.
 
I agree with just about everything you're saying... but in some ways I think you're making my point. Consider:

This game is all about looking over the horizon.
Thanks, I think I've seen that video. Not sure how to get into Gemini directly, if at all possible; it is supposed to be incorporated into Bard. In the same example I used with Claude, Bard hallucinates even more wildly, though it gave a couple of good answers (unlike Claude). I asked it for a link to one of the papers it hallucinated, and it happily gave one: a link to a completely different, earlier paper (meaning that paper could not even have been cited in the one it linked me to). I told it it was wrong. It apologized profusely, corrected itself, and gave the same link. Yes, it is entertaining :)

The problem is, if an AI produces 1000s of papers as in the video you linked, it is even more difficult to figure out which ones are right and which ones are irrelevant or totally hallucinated. But yes, 1000s of results in seconds!!! I am curious why they do not incorporate any simple checks, which should be easy to implement for this kind of query: does the link produce at least the same title? And what's more puzzling is that a simpler search query in Google Scholar gives a much better set of papers; not all relevant, but at least none hallucinated. What is scary about all this is the confidence of these models.
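
The title check suggested here really is only a few lines. A rough sketch (the 0.6 overlap threshold and the crude word matching are arbitrary choices of mine, not something Bard or Gemini actually implements):

# Sketch of the "does the link at least produce the same title?" check.
import re
import requests

def link_matches_title(url, cited_title, threshold=0.6):
    """Fetch a linked page and roughly compare its <title> to the citation."""
    html = requests.get(url, timeout=10).text
    m = re.search(r"<title[^>]*>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    if not m:
        return False  # no <title> tag at all: treat as a failed check

    words = lambda s: set(re.findall(r"[a-z]{3,}", s.lower()))
    cited = words(cited_title)
    if not cited:
        return False
    # Fraction of the cited title's words that appear in the page title.
    return len(cited & words(m.group(1))) / len(cited) >= threshold

# Usage: before showing a model-generated citation link,
#   if not link_matches_title(model_url, model_cited_title):
#       flag the reference as possibly hallucinated

That wouldn't catch an irrelevant-but-real paper, but it would catch the dead-giveaway case where the link and the citation don't even agree.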
 
Can you think of any set of circumstances where you WOULDN'T think this is the case?
I thought about it for a few minutes, and I wasn't able to. I'm generally fascinated by the ability of the general public to clearly see fuckery in one area and be blind to parallel ones, akin to colorblindness. Example: I'm embarrassed by how long it took me to realize that Trump is not an "outsider". I was colorblind to it.
(Is your position testable/falsifiable?)
Maybe by comparison.
Here's a list of utilities for which we, the public, generally assume there's a secret, more advanced form we don't have access to:
-Weaponry
-Medicine (the least likely item on this list to be true; otherwise the rich/powerful would appear much healthier than the poor)
-Literature
-Status of space exploration
-Status of communication with alien or interdimensional entities
-Education
-Food

I could go on...
For each of those items, if I asserted that there's a secret, higher-quality form that we don't get to see, you could call me a conspiracy theorist, and you'd probably be correct, but that doesn't make the assertion any less likely to be true.
On that basis, I think "a secret advanced A.I." belongs on that list with almost the same certainty as the rest.
 