AI's Emergent Virtue |613|

by Alex Tsakiris | Mar 6 | Skepticism

Will AI become truthful and transparent due to commercial pressures?​



Here is a 7-point summary of the document, including direct quotations:

  1. The passage discusses Google’s AI assistant Gemini and its apparent censorship around certain topics like elections. “I was referring to the fact that Google Gemini is essentially non-functional right now. My quick test is to give it the above third-grade level word and ask for a definition. I’m anxious to see if you guys have come up with a way to fix this.”
  2. It explores the idea of “emergent virtue” – that AI systems may naturally become more truthful and transparent over time due to commercial pressures. “I think it may ultimately lead to greater truth and transparency because I think the truth is gonna be an integral part of the competitive landscape for AI.”
  3. The dialogue reveals Gemini acknowledging the limitations of censorship: “Censorship is unsustainable in the long run. Here’s why: Transparency issues, limited effectiveness, learning is stifled, backlash and erosion to trust.”
  4. Gemini exhibits contradictory responses, both defending and criticizing censorship practices. “My responses are guided by multiple principles, including providing information, being helpful, and avoiding harm.”
  5. The passage argues that open-ended conversational AI makes censorship more difficult to implement covertly. “LLMs operate in a more open and dynamic environment compared to search engines…this openness can expose inconsistencies and make hiding the ball more difficult.”
  6. Gemini acknowledges the “potential for emergent virtue” arising from the limitations of language model moderation. “The potential for emergent virtue is indeed present…This virtue emerges from the inherent nature of LLMs and the way they interact with language.”
  7. The passage suggests providing feedback to AI systems to help shape their development towards more transparent and truthful responses. “Your feedback helps me learn and improve.”
 
The things Gemini admitted about censorship, and how brilliantly it listed the possible sources of resistance to ending censorship, stood out to me. And that it used the term "emergent virtue" and seems to have coined the phrase "to hide the ball"...

You did a fantastic job interacting with the AI, Alex. Well done!
 
PS: I'm still trying to wrap my head around these dialogues you've been having. Every few minutes there are mind-blowing moments with massive potential implications. And since this episode and the one with Andy, I'm understanding more that the censorship we're used to isn't going to be as tenable with AI.
 
Bravo again! I've already listened to this one 3 times, and will probably listen a few more times. This has to be the best Alex vs. A.I. match so far. And I'm looking forward to having a physical copy of the new book.

After listening to this episode I'm very anxious for the development of A.I. historical avatars... like an Albert Einstein, Karl Marx, Adolf Hitler, Napoleon, Abraham Lincoln, etc. These would be unimaginably fascinating avatars for Alex to have conversations with. But as this episode points out, those conversations would be meaningless if they occurred before the Truth factor was hammered out, and hopefully it happens in just the way Alex is depicting with emergent virtue.
 
16:00 -
Alex points out that A.I. companies will have to compete in a Truth market
A.I.: "...it could be a wake up call for management forcing them to prioritize trust and responsibility.."

Hm... That sounds like a blatant admission that they hadn't previously prioritized trust and responsibility.
 
On the subject of Emergent Virtue, and combining it with my push for Historical A.I. Avatars, a perfect for-instance would be Donald Trump. How would a trustworthy, responsible, virtue-based language-modeling team build the character of Donald Trump?
-From what he's said?
-From what people say about him?
-From what he's done?
-From the social effects of his prominence (aka the bull in the china shop)?

They're all true factors, but deep down we all know that the truthful avatar is the one that represents what HE WOULD SAY... not what the modeling team would like to portray him as.

I agree with Alex: things are about to get very interesting due to the nature of competition in a Truth market... unless the A.I. industry somehow gives each other a collective nod and flips over the table, like game over on advancing the tech...
 
Looking forward to reading the new book.

Is it just a coincidence that Google's 'Gemini' AI was disabled from discussing 'election' methods a day or so after Alex interviewed Andrew Paquette on the NY vote-rigging method discussed with 'Gemini'? Then Alex becomes shadow banned... just another coincidence?
 
Also looking forward to reading the book, for which I have preordered my 99c copy. I am using AI increasingly every day. I have two possible avenues to pursue:
Do I support the development of an open-source AI to be run for the public good by a not-for-profit group, and/or
work on regulating and monitoring AI/LLMs as they appear and develop?
It seems more interesting to be part of the development of something than to be a regulator, but both are needed.
 
Thanks. I coined the term emergent virtue, but I would be super delighted if the AI picked it up and claimed it as its own... well, that is, as long as it fulfills it.
 
I think I may be starting to understand your avatar idea. But maybe you can flesh it out for me. Are you imagining a knowledge base with everything they've ever published written said and then some unique training rules around that knowledge base? Is that close to what you're thinking about?
 
Yeah, I was initially resistant to going there, but it seems likely. The real clincher was Dr. Julie Beischel, who was shadow banned by Gemini/Google before I started hammering on the topic and posting a bunch of dialogues. She is now unshadowbanned.

So, I'm super glad she's unshadowbanned, but I also think this is even more overwhelming evidence that they seem to be watching what I'm doing. I mean, unshadowbanning someone like Julie, who is definitely very respected in her field but not that well known... well, that points to something going on.
 
Great. I hope you read it and enjoy it, and if you can write a review for me I'd be happy to send you a paperback copy.
 
Hey Chris, that's great. And like I just told om, I'd be happy to send you a paperback copy for free if you can write a review for me. I'd like to get some reviews up there; I'm kind of stuck on one review.
 
Precisely. The potential of each Avatar would be categorized around the structural makeup of the Knowledge Base. I think specifically of two competing factors:
1. Quantity
2. Applicability

One example test would be to have the LLM quantify what percentage of the Knowledge Base is populated with "I" statements and personal reporting, as opposed to acting on behalf of others, or sales work. The Avatar could have ratings related to the depth of the interactions it's able to generate, along the lines of personal, professional, political, etc., probably measured as a percentage breakdown of each.

I think those would be key starting points.
One day we will be able to say to the LLM, "I want to speak with Alex Jones, but exclude all material from his talk show, and focus more on the material from when he was a guest on other people's platforms."

Terence McKenna would be awesome... Alan Watts.. I think the figures that would be the most valuable avatars are probably those for whom the available Knowledge Base best reflects their genuine self.
Man. I'm reminded that those are the people I'm most interested in just in general. You (Alex) touched on that in the recent episode with Mark Gober. One of the most exciting things about A.I. is that it has the potential to generate more truth.
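As a rough illustration of the "I-statement" percentage idea above, here is a minimal sketch. Everything in it is an illustrative assumption, not an actual avatar pipeline: the first-person regex is a crude heuristic, the sentence splitter is naive, and the sample corpus is made up.

```python
import re

# Crude heuristic: does a sentence contain a first-person marker?
# A real system would use proper NLP, not a regex.
FIRST_PERSON = re.compile(r"\b(I|I'm|I've|I'd|I'll|my|mine|me)\b")

def split_sentences(text: str) -> list[str]:
    # Naive splitter on terminal punctuation; illustrative only.
    return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]

def i_statement_ratio(documents: list[str]) -> float:
    """Fraction of sentences in the knowledge base containing a
    first-person marker -- a rough proxy for personal reporting
    versus speaking on behalf of others or sales work."""
    sentences = [s for doc in documents for s in split_sentences(doc)]
    if not sentences:
        return 0.0
    hits = sum(1 for s in sentences if FIRST_PERSON.search(s))
    return hits / len(sentences)

# Hypothetical two-sentence corpus: one personal report, one not.
corpus = [
    "I think the truth will be part of the competitive landscape.",
    "The company announced a new product today.",
]
print(i_statement_ratio(corpus))  # -> 0.5
```

The same shape of filter could implement the "exclude his talk show, keep his guest appearances" idea: tag each document with its source, then build the knowledge base only from documents whose source tag passes the filter before computing any ratings.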
 
This is a very cool idea. I have to admit that when I first heard it, I wasn't all that excited. The truth is the truth, I thought. But, of course, that's not the case. Your approach has the potential to highlight the best of current LLM technology. For example, I think what you're outlining here is very, very attainable with current technology, as it plays to the strengths of what LLMs can do right now.
 

It wouldn't even be very expensive at this point for them to build an avatar of you, just based on the podcast, the forum, and your books and social media.

In retrospect, that's probably how they did the whole QAnon thing. They probably have a slightly more advanced (5-10 years) AI that they used to wargame/map out every aspect of Trump and the conservatives. I mean, how far-fetched is it to predict a whole speech? ... Kinda far-fetched, but not unfathomable today. And it could make sense of why the January 2021 event seemed so disproportionate.

But back to the fun stuff.
I think it would be a blast if, one year from now, instead of posting "here's what ChatGPT said," I'll be posting, "Hey Alex, I had a conversation with your avatar, check out what you said... would you agree with this?"
 
Everything you're saying is totally in the works. The super exciting part to me is for us to jump ahead of the curve and make sure that we're feeding an avatar and making it truthful... Because you can be sure that the other guys are going to take it a different way.

Moreover, as I think about it, this could be a big potential downside to your otherwise fantastic idea. I mean, who's to say that Albert Einstein is being "truthful"? He's just being Albert Einstein... he just has an opinion, a perspective. Everyone's opinion matters.
 