Joscha Bach on AI Podcast with Lex Fridman

#3
That is a 3-hour discussion! Can you give me a few ideas of what I would learn from spending that amount of time?

David
OUTLINE:
0:00 - Introduction
3:14 - Reverse engineering Joscha Bach
10:38 - Nature of truth
18:47 - Original thinking
23:14 - Sentience vs intelligence
31:45 - Mind vs Reality
46:51 - Hard problem of consciousness
51:09 - Connection between the mind and the universe
56:29 - What is consciousness
1:02:32 - Language and concepts
1:09:02 - Meta-learning
1:16:35 - Spirit
1:18:10 - Our civilization may not exist for long
1:37:48 - Twitter and social media
1:44:52 - What systems of government might work well?
1:47:12 - The way out of self-destruction with AI
1:55:18 - AI simulating humans to understand its own nature
2:04:32 - Reinforcement learning
2:09:12 - Commonsense reasoning
2:15:47 - Would AGI need to have a body?
2:22:34 - Neuralink
2:27:01 - Reasoning at the scale of neurons and societies
2:37:16 - Role of emotion
2:48:03 - Happiness is a cookie that your brain bakes for itself
 
#5
I think he dodged the Hard Problem. He seemed to change the question into why we feel sensation rather than how.

YouTube has chopped it up and inserted ads into it, so it isn't possible to use your timings - sorry.

He is obviously very intelligent, but I don't really buy his simulation argument.

Why does the interviewer, Fridman have to talk with such a slurry voice?

David
 
#6
When he talks about sound and colour being like oscillators that get filtered in the brain, he is obviously wrong in the case of colour - nothing of the frequency of light is sent to the brain, and I don't think even sound is sent to the brain as a waveform.

David
 
#7
I think he dodged the Hard Problem. He seemed to change the question into why we feel sensation rather than how.
I agree; however, I don't think there's any other way to deal with the hard problem than to re-frame it.

He re-frames it to essentially say it is simulations all the way down.

It seems likely to me that reality is in some fundamental way "nested"... so I think the answer to some fundamental questions will be: it is ____ all the way down.

YouTube has chopped it up and inserted ads into it, so it isn't possible to use your timings - sorry.
Oh, weird... those were copied from the YouTube description, but I have the paid YouTube subscription, so I don't see ads.

He is obviously very intelligent, but I don't really buy his simulation argument.
I think it is useful to hear original or semi-original philosophical ontological metaphors from intelligent people even if their worldview doesn't match up 100% with yours.

He is obviously not coming at consciousness with all the data (PSI, NDE, etc), but I think we can still make use of the simulation metaphor.

Why does the interviewer, Fridman have to talk with such a slurry voice?
Fridman has commented before how he hates the sound of his own voice and has gotten lots of negative feedback in comments about his voice, but he stuck with the podcast anyway.

I don't know... he's a native Russian speaker who grew up in Boston, and he likes Vodka?

Also, the chipper East German accent provides a lot of contrast...
 
#8
I agree; however, I don't think there's any other way to deal with the hard problem than to re-frame it.

He re-frames it to essentially say it is simulations all the way down.

It seems likely to me that reality is in some fundamental way "nested"... so I think the answer to some fundamental questions will be: it is ____ all the way down.
The trouble is, of course, that there is no reason to think a simulation, or even a simulation of a simulation, would experience anything.
I think it is useful to hear original or semi-original philosophical ontological metaphors from intelligent people even if their worldview doesn't match up 100% with yours.

He is obviously not coming at consciousness with all the data (PSI, NDE, etc), but I think we can still make use of the simulation metaphor.
Agreed. It is interesting to see what he says - and he seems to accept that AI is rather limited right now - but I think it is also worth countering his points.
Fridman has commented before how he hates the sound of his own voice and has gotten lots of negative feedback in comments about his voice, but he stuck with the podcast anyway.
Maybe he could get a bit of voice coaching? I mean, he is clearly interested in the subject - did you notice that he brought in Donald Hoffman's ideas at one point?

David
 
#9
The trouble is, of course, that there is no reason to think a simulation, or even a simulation of a simulation, would experience anything.
There's no reason to think anything would experience anything except that we experience stuff.

I think the hard problem is a dead end. There is a fundamental duality: experience/computation. Everything that is not an experience is a computer. Joscha snuck the duality in there when he said something about being "inside" the feedback loop. There is an inside and an outside... the fundamental duality.

Feedback loops obviously play a fundamental role in consciousness. And we can even see that in NDEs: what else is a life review but a grand-scale feedback loop? The life review is an opportunity to juxtapose goals with outcomes to see where you succeeded and where you could have done better. What other purpose could there be for such a feedback loop but to improve your controls so that you can do better next time - more efficient achievement of your goals?

Maybe he could go a bit of voice coaching? I mean, he is clearly interested in the subject - did you notice that he brought in Donald Hoffman's ideas at one point?
I think Lex's podcast is one of the best out there despite some people's complaints about his voice. Definitely in my top 5 podcasts. Most of his guests are not as open to Psi as we are, but it's still a great trove of info and insights, from the practical to the philosophical.
 
#12
I thought the typical hype from the media was way overdone for sure, but you don't think we'll ever see self-driving vehicles? I'm no expert, but I generally think it's still a question of when and not if.
I strongly suspect we will never see them, because to be any use they have to cope with more or less everything we manage: road works, accidents involving other cars, assorted animals doing things in the road, children walking at the side but looking a bit distracted, a ball whizzing across the road (which may be followed by a child), drunks, deep potholes, a BLM march, a policeman with his hand up or waving the car on, etc.

The idea that a self-drive car could be useful to assist drivers, who would be called on to resolve awkward situations, was soon squashed (at the cost of some loss of life), because people simply can't concentrate on a supervisory task like that - it would be far easier to simply drive the car!

Short of giving the software a comprehensive understanding of modern life, a programmer has to anticipate all these situations, with all their variants, and provide a solution!

Naturally, I don't think our minds are analogous to a piece of software - however ingenious. The theoretical physicist Roger Penrose wrote a number of books arguing that consciousness is not algorithmic, basing his conclusion on Gödel's incompleteness theorems.

The fascinating thing is that AI generated a similarly vast bubble of hype in the 1980s. It lasted about 10 years and fizzled!

https://en.wikipedia.org/wiki/History_of_artificial_intelligence

Unbelievably, there was also a period in which some prominent scientists thought that clockwork mechanisms were conscious.

David
 
#13
Interesting. I guess I don't see autonomous vehicles as a priori a non-starter. It feels like something that ingenuity and human creativity can ultimately overcome. I myself would love to see it, presuming I last to a ripe old age: the autonomy this would afford to less able groups (elderly, impaired, etc.) would be remarkable.
 
#14
Interesting. I guess I don't see autonomous vehicles as a priori a non-starter. It feels like something that ingenuity and human creativity can ultimately overcome. I myself would love to see it, presuming I last to a ripe old age: the autonomy this would afford to less able groups (elderly, impaired, etc.) would be remarkable.
Oh there is no question that such things would be useful - you could send the car to fetch Grandma over, deliver pizzas, take you safely back from a boozy party, get dropped off at the start of a walk or bike ride, and picked up somewhere else, etc.

Ideas like that mesmerise people, but do listen to Jessica Riskin, or read her book. Her work is full of philosophical meaning and low on technical facts - though it is clear from that video that she is aware of quite a lot of science.

BTW, J. Scott Turner recommended that book to me after we had discussed his book "Purpose and Desire: What Makes Something Alive and Why Modern Darwinism Has Failed to Explain It" (he is supportive of Intelligent Design) as part of an email discussion.

David
 
#15
This is the start of what I have repeatedly predicted - self-drive cars are a pipe dream:

https://www.vox.com/future-perfect/...ng-cars-autonomous-vehicles-waymo-cruise-uber



David
It makes more sense to me that machine learning and ad hoc networks will enable self-driving cars to exceed human capability in the near future. Driving is a machine-like task that humans are not optimized for. Ad hoc networks will make it so a smart vehicle has the world mapped out from a collection of vehicles and sensors, rather than merely two pairs of eyes and ears from a single POV.
 
#16
It makes more sense to me that machine learning and ad hoc networks will enable self-driving cars to exceed human capability in the near future. Driving is a machine-like task that humans are not optimized for. Ad hoc networks will make it so a smart vehicle has the world mapped out from a collection of vehicles and sensors, rather than merely two pairs of eyes and ears from a single POV.
Yes, but while a bit of the task is as you describe, other parts require a lot of general knowledge about the world. If you drove in a way that expected the worst at any moment, you would drive hopelessly slowly; instead, you are constantly modelling what you see ahead. Let's be honest, perhaps the task also requires some presentiment.

David
 
#17
Yes, but while a bit of the task is as you describe, other parts require a lot of general knowledge about the world. If you drove in a way that expected the worst at any moment, you would drive hopelessly slowly; instead, you are constantly modelling what you see ahead. Let's be honest, perhaps the task also requires some presentiment.

David
Well I’m not an expert in it for sure... but I am a believer.

I’m less concerned about the capability of AI to handle the tasks required and more concerned about the drastic reduction in freedom this will result in. This will tie in seamlessly to the social credit scoring and global ID / banking, and pretty soon it will be very easy to prevent people from traveling to certain places or to confine them to certain areas. And with back doors built in, might also be very easy to take out high profile dissenters in a fiery crash.
 
#18
Well I’m not an expert in it for sure... but I am a believer.

I’m less concerned about the capability of AI to handle the tasks required and more concerned about the drastic reduction in freedom this will result in. This will tie in seamlessly to the social credit scoring and global ID / banking, and pretty soon it will be very easy to prevent people from traveling to certain places or to confine them to certain areas. And with back doors built in, might also be very easy to take out high profile dissenters in a fiery crash.
Well, all I can say is that I think AI is overblown - so your fears are that bit less worrying. AI hyped itself to the moon and back in the 1980s, and it seems to have learned nothing from that. The problem, I think, is that the concept of a computer imitating human thought seems to offer limitless possibilities - just thinking about it leads one into wild speculation.

The self-drive car project is exactly the sort of thing that results. The problem can't be reduced to one well-defined program; to do it well enough, you have to explore all sorts of issues - such as recognising horses, and preferably also recognising their skittish behaviour when they are getting upset.

I do agree that it would be best if such things did not exist, but even so, do you need such sophistication to track dissenters and the like?

David
 
#19
Well, all I can say is that I think AI is overblown - so your fears are that bit less worrying. AI hyped itself to the moon and back in the 1980s, and it seems to have learned nothing from that. The problem, I think, is that the concept of a computer imitating human thought seems to offer limitless possibilities - just thinking about it leads one into wild speculation.

The self-drive car project is exactly the sort of thing that results. The problem can't be reduced to one well-defined program; to do it well enough, you have to explore all sorts of issues - such as recognising horses, and preferably also recognising their skittish behaviour when they are getting upset.

I do agree that it would be best if such things did not exist, but even so, do you need such sophistication to track dissenters and the like?

David
Have you ever taken a Tesla for a test drive? I highly recommend it!
 