Discussion - Can we produce synthetic, non-organic sentience?

  • Thread starter Sciborg_S_Patel
  • Start date

Sciborg_S_Patel

So this likely breaks down into (at least) two questions:

1) Can programs be sentient/conscious entities?

2) Can androids be sentient/conscious entities?

I'd reply No to 1), because I don't think programs alone can fix meanings. I think any stand-alone piece of a mechanistic system (like a Turing machine) has infinite possible meanings, and I think trying to fix that piece's meaning in terms of other pieces only leads to infinite regress.
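The "infinite possible meanings" point can be illustrated with a toy Python sketch. The symbols and interpretations below are invented for illustration: the same transition table, read under two different interpretations, computes two different things, and nothing in the table itself picks one reading over the other.

```python
# One-step state machine: maps an input symbol to an output symbol.
# The table itself is just symbol-shuffling; it carries no meaning.
TABLE = {"a": "b", "b": "a"}

def run(symbol):
    return TABLE[symbol]

# Interpretation 1: 'a' means False, 'b' means True.
# Under this reading the machine computes logical NOT.
as_bool = {"a": False, "b": True}

# Interpretation 2: 'a' means 1, 'b' means 0.
# The same physical step now means something different.
as_bit = {"a": 1, "b": 0}

print(as_bool[run("a")])  # True under reading 1: NOT(False)
print(as_bit[run("a")])   # 0 under reading 2: the identical step, reinterpreted
```

Nothing internal to `TABLE` or `run` fixes which interpretation is "the" meaning; that choice is supplied from outside, which is the regress worry in miniature.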

I'd reply Maybe to 2), as I think a synthetic brain could at least solve the easy part of the Hard Problem, namely the structures necessary for the presence of consciousness. It might be that these structures suffice to evoke consciousness from the proto-consciousness in matter, to evoke an alter from Mind@Large, etc. I don't think you need to figure out which metaphysical "-ism" is correct to produce a sentient life form. I mean, we can have kids, after all?
 
So this likely breaks down into (at least) two questions:

1) Can programs be sentient/conscious entities?

2) Can androids be sentient/conscious entities?

I'd reply No to 1), because I don't think programs alone can fix meanings. I think any stand-alone piece of a mechanistic system (like a Turing machine) has infinite possible meanings, and I think trying to fix that piece's meaning in terms of other pieces only leads to infinite regress.

I'd reply Maybe to 2), as I think a synthetic brain could at least solve the easy part of the Hard Problem, namely the structures necessary for the presence of consciousness. It might be that these structures suffice to evoke consciousness from the proto-consciousness in matter, to evoke an alter from Mind@Large, etc. I don't think you need to figure out which metaphysical "-ism" is correct to produce a sentient life form. I mean, we can have kids, after all?
You realise that someone will want to pin you down on definitions here ;) "Sentient" and "conscious" for starters. I think we need to set aside "human-ness" for reasons I've sketched out in the Ex Machina thread.

http://www.skeptiko-forum.com/threa...m-of-consciousness-300.2951/page-2#post-83199
 
Heh, yeah, as soon as I wrote it I realized I would get called out on the terms.

When you say "set aside", do you mean separate sentience from humanity? Or that it might actually take "raising" such an entity to give it sentience?
 
I don't think consciousness is a result of deterministic computation. A switch is not conscious, and adding together a lot of them will not produce subjective experiences of, for example, seeing what blue looks like, or feeling what happy feels like. Consciousness is not physical; it is not limited by time or distance. Matter depends on consciousness, not vice versa. Consciousness is not just memory, it is knowing that you are aware of a memory. The brain might be able to produce a representation of a thought, just as ink and paper can represent thoughts, but the paper is not aware of what is written on it, and the subjective experience of knowing a thought cannot be produced by the brain.
http://ncu9nc.blogspot.com/2012/08/the-materialist-explanation-of.html


However the human body is made out of matter, and the brain filters consciousness, and it might be possible to build vehicles (androids?) that can contain consciousness the way the human body does but that use a different technology.


" ... consciousness is fundamental and all matter is dependent on consciousness for its existence. ... Double slit experiments, quantum entanglement, and the quantum Zeno effect demonstrate this role ... [there are] many other independent forms of evidence demonstrating that consciousness is not produced by the brain. Consciousness cannot be produced by any physical process. How could the changing concentration of ions across the membranes of brain cells produce what the color blue looks like to you? The brain might store data about the wavelength of light falling on the retina, or it might perform calculations on that data, but how could a computational device produce the subjective experience of what a color looks like? Consciousness is fundamentally different from any physical property or process and therefore cannot be produced by the brain.
 
It might be that these structures suffice to evoke consciousness from the proto-consciousness in matter, to evoke an alter from Mind@Large, etc. I don't think you need to figure out which metaphysical "-ism" is correct to produce a sentient life form. I mean, we can have kids, after all?

I'm not sure what you mean by proto-consciousness, so I might not disagree with you, but... I don't believe all consciousness arises from proto-consciousness in this physical universe, because the universe seems to be fine-tuned to support life by a super (not proto) intelligence that is not part of the physical universe. (The multiverse theory doesn't explain the fine-tuning.) I don't think the idea that you need matter to develop consciousness is right, because you would have a chicken-and-egg problem. In reality you need a developed consciousness first before you can have a physical universe with matter in it.

More here
http://ncu9nc.blogspot.com/2015/04/video-guillermo-gonzalez-on-fine-tuning.html
and here:
http://ncu9nc.blogspot.com/p/62014-...-afterlife.html#articles_by_subject_cosmology
 
I just meant that just as we have kids we accept as conscious entities even if we can't figure out consciousness, it's also conceivable we could emulate the human (or animal brain) well enough using non-organic materials that it would arguably be unfair to not give such entities some level of personhood. (And thus at least some of the rights we give people.)

I was just listing the possible ways we might explain such an android under various metaphysics. In panpsychism we could argue the android successfully integrates information if we're IIT-ers (assuming we associate IIT with panpsychism, which IIRC Koch does?), or we might say the microtubule structure suffices as a lattice. (In fact Hameroff thinks that even if Orch-OR is true we might upload minds this way.)

Under Idealism, if we're agreeing with Kastrup that our bodies are projections of a conscious process, we'd have to disagree with his conclusion that an entity must be biological in order to be considered a genuine alter of Mind@Large... but it still seems to me that one could argue things that way if we're so inclined.

In fact under Neutral Monism (at least a dual-aspect version) we might say something similar to the Idealist picture: that the brain's structure is what consciousness "looks like" from a third-person perspective.

Admittedly not every metaphysics could incorporate androids as conscious entities, but it seems enough can that accepting such a being as sentient doesn't force one to abandon immaterialist metaphysics altogether.
 
Heh, yeah, as soon as I wrote it I realized I would get called out on the terms.

When you say "set aside", do you mean separate sentience from humanity? Or that it might actually take "raising" such an entity to give it sentience?

We all bring our preconceptions and biases to this question. Perhaps if I were being provocative I would say any sentience can only emerge through input experiences.

Given that we are born with little, if any, sentience (depending on your definition), human sentience certainly appears to emerge with human experience. Take away our experience, take away our history... really, what else is left?
 
So this likely breaks down into (at least) two questions:

1) Can programs be sentient/conscious entities?

2) Can androids be sentient/conscious entities?

I'd reply No to 1), because I don't think programs alone can fix meanings. I think any stand-alone piece of a mechanistic system (like a Turing machine) has infinite possible meanings, and I think trying to fix that piece's meaning in terms of other pieces only leads to infinite regress.

I'd reply Maybe to 2), as I think a synthetic brain could at least solve the easy part of the Hard Problem, namely the structures necessary for the presence of consciousness. It might be that these structures suffice to evoke consciousness from the proto-consciousness in matter, to evoke an alter from Mind@Large, etc. I don't think you need to figure out which metaphysical "-ism" is correct to produce a sentient life form. I mean, we can have kids, after all?

Can? Yes. And yes. Because primary consciousness is open-ended and infinitely expressive.

And if by android you mean an inorganic physical entity - those are in existence in other "places" in what we view as our current time-frame. And no, I'm not looking for anyone to accept that on my say-so.

For me then the question is - will humans be part of the emergence of other forms of #2 within the next fifty years?
 
So this likely breaks down into (at least) two questions:

1) Can programs be sentient/conscious entities?

2) Can androids be sentient/conscious entities?
I don't understand the difference between 1 and 2.

As I see it, the term android simply means things are packaged into a container with a particular shape. Using a box of a particular shape doesn't alter the underlying mechanisms, whatever they may be.
 
I don't understand the difference between 1 and 2.

As I see it, the term android simply means things are packaged into a container with a particular shape. Using a box of a particular shape doesn't alter the underlying mechanisms, whatever they may be.
I would say a huge part of human sentience is the use of our body and senses to interact with our environment. An android may be better equipped than a program to reach that state.
 
I don't understand the difference between 1 and 2.

As I see it, the term android simply means things are packaged into a container with a particular shape. Using a box of a particular shape doesn't alter the underlying mechanisms, whatever they may be.

That's a fair argument. For my own part I see the first as akin to a book or abacus, obviously not considered sentient.

The latter I think would fall under the same dismissal, save for the possibility that in a way unknown to us the structures of the brain evoke consciousness. Part of my intuition on this is that I expect quantum effects to be involved in some way in the evocation of consciousness, for example via microtubules, or McFadden's combination of quantum effects with the brain's own EM field.
 
That's a fair argument. For my own part I see the first as akin to a book or abacus, obviously not considered sentient.

The latter I think would fall under the same dismissal, save for the possibility that in a way unknown to us the structures of the brain evoke consciousness. Part of my intuition on this is that I expect quantum effects to be involved in some way in the evocation of consciousness, for example via microtubules, or McFadden's combination of quantum effects with the brain's own EM field.
Thanks for the clarification. I wasn't so much offering an argument as seeking clarification, I actually didn't know what was intended. Clearly there are underlying assumptions but we may not all be starting from the same set of assumptions. I've no real quibble either way.
 
We all bring our preconceptions and biases to this question. Perhaps if I were being provocative I would say any sentience can only emerge through input experiences.

Given that we are born with little, if any, sentience (depending on your definition), human sentience certainly appears to emerge with human experience. Take away our experience, take away our history... really, what else is left?

How would you explain instinctive knowledge? Humans and, more notably, many animals are born with sophisticated yet unlearned abilities and knowledge. There is no experience to draw upon. I've watched a lamb get up and walk within seconds of birth. I'd guess that anyone familiar with robotics would tell us that the mere mechanics of standing, balancing and walking are quite a challenge to the best engineers.

Saying that these skills are "hard wired" is just not good enough, in my opinion. In your terms, Malf, skills would come from practice and experience. What process in embryo development "programs" this, and where do the programs reside? A computer's bootstrap function is just an analogy - where is the ROM chip in the animal? This interesting article in Scientific American discusses instinct and also "acquired savants" under the heading of "genetic memory", but does not describe the process of coding and storing those memories. It seems to be taken for granted that DNA encodes the memories somehow, but I have yet to find a description of how that happens. Again, the article states the following as if it is all the answer we need:

Genetic memory, simply put, is complex abilities and actual sophisticated knowledge inherited along with other more typical and commonly accepted physical and behavioral characteristics. In savants the music, art or mathematical “chip” comes factory installed.

This, of course, leads to yet another ID vs. NS (intelligent design vs. natural selection) debate, which is surely not what Sciborg intended when he started the thread. I'm just responding to Malf's point. It seems that instinct is still quite a contentious issue in the mainstream and falls into the category of "we don't really know yet".
 
How would you explain instinctive knowledge? Humans and, more notably, many animals are born with sophisticated yet unlearned abilities and knowledge. There is no experience to draw upon. I've watched a lamb get up and walk within seconds of birth. I'd guess that anyone familiar with robotics would tell us that the mere mechanics of standing, balancing and walking are quite a challenge to the best engineers.

Saying that these skills are "hard wired" is just not good enough, in my opinion. In your terms, Malf, skills would come from practice and experience. What process in embryo development "programs" this, and where do the programs reside? A computer's bootstrap function is just an analogy - where is the ROM chip in the animal? This interesting article in Scientific American discusses instinct and also "acquired savants" under the heading of "genetic memory", but does not describe the process of coding and storing those memories. It seems to be taken for granted that DNA encodes the memories somehow, but I have yet to find a description of how that happens. Again, the article states the following as if it is all the answer we need:



This, of course, leads to yet another ID vs. NS (intelligent design vs. natural selection) debate, which is surely not what Sciborg intended when he started the thread. I'm just responding to Malf's point. It seems that instinct is still quite a contentious issue in the mainstream and falls into the category of "we don't really know yet".
Hi Karmaling. Did I read somewhere that you have made a permanent move to Godzone? I hope you are enjoying a warm welcome. Haere Mai!

Innate animal behaviour is a fascinating topic, but probably not a good example of sentience. In fact the very idea of something innate and instinctive implies stimulus/response - the removal of free will, and almost the opposite of what we mean by sentience!

Worth exploring in its own thread though perhaps?
 
I'm in your green and pleasant land on holiday right now, Malf (last 2 days of a six week stay, sadly). But I have applied for residence and my application has been forwarded to higher powers with a positive recommendation so I should know for sure in a couple of weeks. If I'm in, I'll be back for good towards the end of this year.

Yes, I do think instinct and inherited behaviour is a fascinating subject in its own right. It was the subject of my first post on the old forum but didn't get much traction. I remember being told by Paul & Linda to go off and read some Dawkins.

I was probably thinking more about innate knowledge than sentience but I get your point. Incidentally, I googled the definition of sentient (Merriam-Webster):
Full Definition of sentient

1: responsive to or conscious of sense impressions <sentient beings>
2: aware
3: finely sensitive in perception or feeling

It is #2 that is the problem for programmed automatons - can they be aware in the sense that we are aware? Isn't this the old philosophical zombie argument?
 
So this likely breaks down into (at least) two questions:

1) Can programs be sentient/conscious entities?

2) Can androids be sentient/conscious entities?

I'd reply No to 1), because I don't think programs alone can fix meanings. I think any stand-alone piece of a mechanistic system (like a Turing machine) has infinite possible meanings, and I think trying to fix that piece's meaning in terms of other pieces only leads to infinite regress.

I'd reply Maybe to 2), as I think a synthetic brain could at least solve the easy part of the Hard Problem, namely the structures necessary for the presence of consciousness. It might be that these structures suffice to evoke consciousness from the proto-consciousness in matter, to evoke an alter from Mind@Large, etc. I don't think you need to figure out which metaphysical "-ism" is correct to produce a sentient life form. I mean, we can have kids, after all?
I agree with your reaction to Q1, but I feel quite differently about Q2, because I don't think there is such a thing as the easy problem. These should probably be dubbed "Not obviously provably hard problems". I think all our normal thought is saturated with reasoning about qualia, and that the 'easy' problems are really just as hard.

For a machine to be conscious, it would have to do what I think the brain does - some peripheral processing and some form of communication. Computerised thought is simply nonsense.

David
 
Is a self driving car aware of its environment?

Could it react to the qualia of a traffic light?
 
Is a self driving car aware of its environment?

Could it react to the qualia of a traffic light?
I would take a bet that if these things are ever permitted on ordinary roads, they will soon be abandoned. There are so many different situations you have to deal with when driving. For example:

Horse crosses road

Horse with rider, but the horse is very twitchy.

Wagon with loose top - will it shed some of its load?

Children on the pavement (sidewalk) that seem too engrossed in something to pay attention to the traffic.

Child on skate board.

Child about to get on skateboard.

The list goes on and on, and the software inside such a vehicle would have to recognise all of these in order to be reasonably safe.

It is a great example of the problems with AI.
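The enumeration problem above can be sketched as a toy rule table in Python. The hazard names and responses are hypothetical, not from any real driving system; the point is that a rule-based recogniser only handles what someone thought to list, and everything else falls through to a default.

```python
# Hypothetical hazard -> response table. Every entry had to be
# anticipated by a human in advance.
KNOWN_HAZARDS = {
    "horse crossing": "slow and stop",
    "twitchy horse with rider": "slow and give wide berth",
    "loose load on wagon": "increase following distance",
    "distracted children on pavement": "slow down",
    "child on skateboard": "slow down",
}

def respond(observation: str) -> str:
    # Anything not explicitly enumerated falls through to a default -
    # which is exactly where the brittleness lives.
    return KNOWN_HAZARDS.get(observation, "no special action")

print(respond("child on skateboard"))               # "slow down"
print(respond("child about to get on skateboard"))  # "no special action" - missed
```

A human driver generalises from "child" and "skateboard" to the about-to-happen case without a new rule; the table cannot, and the list of needed rules grows without obvious bound.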

David
 
Is a self driving car aware of its environment?

Could it react to the qualia of a traffic light?

I'm not sure what you mean by the qualia of the traffic light. Surely there is a physical (measurable) property to the light (red/green) and there is a subjective quality that a human would "feel" but a program and associated mechanicals would not? So the car might detect and react to the change of light but does the red feel like anything? Is the red it detects accompanied by a mental phenomenon in the same way that red is in your mind? I suspect that for the machine it is no more than a trigger to switch a logic gate from one state to another.
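That distinction can be put in code. Here is a minimal sketch, with an invented wavelength threshold standing in for "red" (roughly the 620-750 nm end of the visible spectrum): the machine's "detection" is just a comparison flipping a boolean, and nothing in the program implies any accompanying experience.

```python
def detects_red(peak_wavelength_nm: float) -> bool:
    # "Seeing red" here is nothing more than a numeric comparison.
    # The threshold values are illustrative, not from any real sensor spec.
    return 620.0 <= peak_wavelength_nm <= 750.0

# A red traffic light (~680 nm) flips the state; a green one (~530 nm) does not.
print(detects_red(680.0))  # True
print(detects_red(530.0))  # False
```

The car can react correctly to the light while the "red" remains, for it, a trigger switching a logic state, which is karmaling's point about the gap between detection and subjective quality.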
 
I would take a bet that if these things are ever permitted on ordinary roads, they will soon be abandoned. There are so many different situations you have to deal with when driving. For example:

Horse crosses road

Horse with rider, but the horse is very twitchy.

Wagon with loose top - will it shed some of its load?

Children on the pavement (sidewalk) that seem too engrossed in something to pay attention to the traffic.

Child on skate board.

Child about to get on skateboard.

The list goes on and on, and the software inside such a vehicle would have to recognise all of these in order to be reasonably safe.

It is a great example of the problems with AI.

David

So, to take the example of the child - the car's program might be sophisticated enough to recognise a child/skateboard and anticipate movement into the road, but would it get a fright the way a human might? Would it have concern for the child?

A human driver might spot a large balloon floating into the road and drive right at it, having no concern for the safety of the plastic object. Not so with a child, who would provoke an emotional response resulting in evasive action.
 