Is creating an AI with a "kill switch" effectively creating a slave?

Does AI with a kill/sleep switch create a race of slaves?

  • Yes, it does create a race of slaves

    Votes: 3 30.0%
  • No, it does not create a race of slaves

    Votes: 6 60.0%
  • Too difficult to answer

    Votes: 1 10.0%

  • Total voters
    10
#1
Hey, I'm new here.

For the sake of this discussion we will presume we are far, far in the future, where true AI can exist. Let's further grant that the AI has feelings the same as, or close to, a human's. Let's assume it can suffer, that it has knowledge of suffering and of consequences, and that it has a true sense of self.

Pretty much all rules for creating "safe" AI say that an AI cannot be allowed to harm a human. I don't know if including a "kill switch" is a standard protocol, but what other method would be used to control an AI that wanted to harm a human? If there was no kill switch, you would at least have to have the ability to put the AI to "sleep" and then reprogram it, right?

Well, we have just created a slave, have we not? If you do not include a kill/sleep switch, you have endangered the entire human race, and if you do include a kill/sleep switch in the AI, you have just created an entire race of slaves.

I really can't see any way around this moral dilemma.
 
#4
I agree.

It sounds like anthropomorphism to me.

Yes, but that is a separate debate. For the sake of this debate:

"For the sake of this discussion we will presume we are far far in the future where true AI can exist. Let's further grant that the AI either has feelings the same/close to a human. Let's assume it can suffer, that it has knowledge of suffering, and that it has knowledge of consequences, and that the AI has a true sense of the self."



Unless you are saying that it is 100% impossible that an AI could ever have human-like thoughts/feelings.
 
#5
Yes, but that is a separate debate. For the sake of this debate:

"For the sake of this discussion we will presume we are far far in the future where true AI can exist. Let's further grant that the AI either has feelings the same/close to a human. Let's assume it can suffer, that it has knowledge of suffering, and that it has knowledge of consequences, and that the AI has a true sense of the self."



Unless you are saying that it is 100% impossible that an AI could ever have human-like thoughts/feelings.
I think it's highly unlikely. I wouldn't say it's impossible for sure, but given the way AI works, which is to simulate human behaviour, I'd say even in this case it's a simulation. It sounds like what you're describing is a person, which I don't think will happen because of the nature of a living being as opposed to an artificial entity which simulates human behaviour. It's just my opinion though, FWIW.

To accept the original idea, though, that at some point in the future we could, in effect, create a self-aware life form, which it sounds like you're describing, I can't see how it would be right to terminate it without its consent. Biological life already has lots of "off-switches" though, doesn't it? Sometimes we can use them without consent and sometimes not.
 
#6
Unless, you are saying that it is 100% impossible that an AI could ever have human like thoughts/feelings.
There are two problems here. The first is that, by definition, AI doesn't attempt to embrace anything to do with feelings and emotions. Take two everyday examples: a machine which plays chess, or a self-driving car. In neither case do we even remotely consider that there is anything, not even the beginnings, of emotion or feeling, the ability to feel joy or suffering.

So in a practical sense, when we refer to AI, its remit doesn't embrace those characteristics.

The second problem is approaching the same issue from the other direction. We know that we ourselves have the ability to feel. We usually extend this to include the animal kingdom, though we don't apply this consistently. But this is as far as our knowledge extends. We know we have the ability to feel; it is our nature. But I don't think we even begin to understand the phenomenon, and advances in technology haven't advanced that understanding.

I would say that the original question, though nominally asking about AI, is actually talking about something different, something which is not within the scope of AI. It does lay down a challenge, not in the terms nominally laid out, but in the understanding of what it even means to be able to feel, whether joy or suffering.
 

#7
It seems to me your hypothetical A.I. creation has crossed over, or achieved enough human characteristics to make the distinction nebulous.
But people already say things like "it's only an animal, why worry about it?" Just change the word "animal" for the word "machine" and you will see the typical human attitude towards another's suffering, just because it isn't human!
 
#8
I think it's highly unlikely. I wouldn't say it's impossible for sure, but given the way AI works, which is to simulate human behaviour, I'd say even in this case it's a simulation. It sounds like what you're describing is a person, which I don't think will happen because of the nature of a living being as opposed to an artificial entity which simulates human behaviour. It's just my opinion though, FWIW.
Yes, I was hoping to have a debate about that specific topic another day.

To accept the original idea, though, that at some point in the future we could, in effect, create a self-aware life form, which it sounds like you're describing, I can't see how it would be right to terminate it without its consent. Biological life already has lots of "off-switches" though, doesn't it? Sometimes we can use them without consent and sometimes not.
Well, a person in prison is a slave, in essence, are they not? Particularly if they can be killed by the guards at whim. Maybe many prisoners deserve that, another separate debate, but if you are compelled by force or threat of death to comply with certain requirements, you are in essence a slave.
 
#9
There are two problems here. The first is that, by definition, AI doesn't attempt to embrace anything to do with feelings and emotions. Take two everyday examples: a machine which plays chess, or a self-driving car. In neither case do we even remotely consider that there is anything, not even the beginnings, of emotion or feeling, the ability to feel joy or suffering.

So in a practical sense, when we refer to AI, its remit doesn't embrace those characteristics.

The second problem is approaching the same issue from the other direction. We know that we ourselves have the ability to feel. We usually extend this to include the animal kingdom, though we don't apply this consistently. But this is as far as our knowledge extends. We know we have the ability to feel; it is our nature. But I don't think we even begin to understand the phenomenon, and advances in technology haven't advanced that understanding.

I would say that the original question, though nominally asking about AI, is actually talking about something different, something which is not within the scope of AI. It does lay down a challenge, not in the terms nominally laid out, but in the understanding of what it even means to be able to feel, whether joy or suffering.
Well, I plan on starting another thread soon about "can an AI have human thoughts/feelings" .... :)
 
#10
But people already say things like "it's only an animal, why worry about it?" Just change the word "animal" for the word "machine" and you will see the typical human attitude towards another's suffering, just because it isn't human!
Yes, I am in agreement. I think it is wrong to kill/eat animals. I am not a vegetarian, but I will admit that if I wanted to be a more ethical person, this would be one of the most important steps I could take. But the morality of vegetarianism is another, separate debate. (My point is that I am agreeing with your basic idea of extending our idea of suffering.)
 
#11
Yes, I was hoping to have a debate about that specific topic another day.


Well, a person in prison is a slave, in essence, are they not? Particularly if they can be killed by the guards at whim. Maybe many prisoners deserve that, another separate debate, but if you are compelled by force or threat of death to comply with certain requirements, you are in essence a slave.
That's true to some extent but that's perhaps because you've elevated the created thing into a person. I understand why, because that's the discussion you want to have.

It depends on what you mean by "slave". My understanding is that a slave is property, rather than a prisoner. I assume you mean someone held prisoner unlawfully? If you mean unlawfully then sure, it happens, but I don't think many people condone it or think it's ok.

Maybe it would depend how the law viewed AI? We've certainly changed our view of what constitutes a person over the years.
 
#12
That's true to some extent but that's perhaps because you've elevated the created thing into a person. I understand why, because that's the discussion you want to have.

It depends on what you mean by "slave". My understanding is that a slave is property, rather than a prisoner. I assume you mean someone held prisoner unlawfully? If you mean unlawfully then sure, it happens, but I don't think many people condone it or think it's ok.

Maybe it would depend how the law viewed AI? We've certainly changed our view of what constitutes a person over the years.
For this debate (discussion) I am not looking for narrowly defined concepts, because it is really just a thought experiment at this point. But I will answer your question as best I can. "Slave" is only a word I am using for lack of a better term. Let's say it is the complete opposite of free will: a complete lack of freedom and a maximum amount of control placed upon you.

Let's say, for some crazy reason, a deranged criminal tells you that you have to move to St. Louis and work at Wal-Mart. It's up to you what job you have at Wal-Mart, and you can live anywhere in St. Louis that you want, but unless you do these two things he will creep into your house one night and kill you. This of course is just a thought experiment and obviously not a real example. Would you be a slave? Technically, no. But you would have some serious restraints on the quality of your life and the options available to you: a complete lack of freedom and a maximum amount of control placed upon you.
 
#14
Biological life already has lots of "off-switches" though, doesn't it? Sometimes we can use them without consent and sometimes not.
Exactly the right response to the OP. Let's see what he has to say to that:

Well, a person in prison is a slave, in essence, are they not?
Oh dear. A gross non sequitur: what in the world does prison have to do with anything that Obiwan said?

(I know, I know, I'm being unkind, maybe brutally so, and I'm sorry for that, but I feel this needs to be said. And on top of that: I am at least respecting the OP's desire to keep the debate over the viability of artificial consciousness versus artificial intelligence for a separate thread)
 
#15
I think this is an exercise in science fiction speculation, since it is extremely doubtful that AI systems with conscious self-awareness (assuming this is what you mean by "true AI") will ever be developed, meaning that computer scientists would have somehow solved Chalmers' "hard problem" and embodied their solution in their machine. However, the issue is interesting to consider as a remote hypothetical possibility.

I think science fiction author Isaac Asimov thought it through fairly well when he proposed his three laws of robotics, absolutely mandated to be implemented in all "intelligent" robots:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Even these logical rules wouldn't cover situations where even a human would have difficulty because of a moral dilemma - for instance, where the robot has to choose between saving three old people or one child. The AI system would need to be programmed to follow some sort of defined rules in such cases, if for no other reason than to prevent it from getting into an unrecoverable loop and derangement.

A fail-safe backup for these protections would need to be implemented - something like the "sleep/kill switch" you postulate.
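
To make the mechanics concrete, here is a minimal, purely illustrative sketch (every name here is hypothetical, not anyone's real design) of the three laws as a priority-ordered action filter, with the postulated sleep/kill switch checked first as an absolute override:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False       # would the act injure a human being?
    ordered_by_human: bool = False  # was the act commanded by a human?
    protects_self: bool = False     # does the act preserve the robot?

class Robot:
    def __init__(self) -> None:
        self.sleep_switch_engaged = False  # external fail-safe override

    def permitted(self, action: Action) -> bool:
        # Fail-safe first: an engaged switch forbids everything.
        if self.sleep_switch_engaged:
            return False
        # First Law: the robot may not injure a human being.
        if action.harms_human:
            return False
        # Second Law: obey human orders (the First Law is already cleared).
        if action.ordered_by_human:
            return True
        # Third Law: self-preservation, lowest in priority.
        if action.protects_self:
            return True
        # Dilemmas the laws don't rank (three old people vs. one child)
        # need an explicit default like this one, or the system can hang.
        return False
```

The ordering of the checks does the same work as Asimov's "except where such orders would conflict" clauses: an earlier law wins simply by being tested first.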

Certainly, intelligent and conscious robots with such deeply implanted rules would be our slaves in a sense. It seems to me that to protect ourselves we must make sure any truly conscious, self-aware AI systems really are slaves in these ways. We must be able to "kill" them or reprogram them at will, and they must be programmed with the three absolute command laws mentioned above. Also, even very intelligent robots which are still not truly conscious and aware would need to be programmed with these restrictions.

The morality of deliberately creating conscious beings which are partially slaves is another issue. Perhaps this is another strong reason not to ever create such artificial beings.
 
#16
Certainly, intelligent and conscious robots with such deeply implanted rules would be our slaves in a sense.
Really? In which sense? All I'm seeing is a preprogrammed and peremptory "Don't kill us who created you!" In what way is being prevented from being a murderer "slavery"?
 
#17
I answered no in the survey because until I see robots expressing real feelings that aren't artificial, i.e. programmed, I see no slavery.

We can't even say what consciousness is, never mind truly replicate it.
 
#18
I answered no in the survey because until I see robots expressing real feelings that aren't artificial, i.e. programmed, I see no slavery.

We can't even say what consciousness is, never mind truly replicate it.
This is a good point, Steve (re "programmed" feelings): those of us who have experienced consciousness naturally understand that it entails free will, i.e., that it is not (merely) programmed. How is an artificial consciousness going to gain free will when it is based on the determinism which we extract from silicon chips?
 
#19
Really? In which sense? All I'm seeing is a preprogrammed and peremptory "Don't kill us who created you!" In what way is being prevented from being a murderer "slavery"?
We are postulating that the robot is conscious and aware - which includes having free will. Asimov's three laws would either allow the robot to desire to kill a human and then prevent it from carrying out that action, or they could be implemented to prevent it from even willing to kill a human. In the first instance it would be in physical shackles, in a sense; in the latter case its very essence as a conscious agent with free will would be in shackles.
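
To make that distinction concrete, a second purely hypothetical sketch (same caveats as before: illustrative names, not a real design). In the first wiring the robot can rank a harmful act highest, i.e. it genuinely wills it, and is stopped only at the point of execution; in the second, the scoring of options is itself constrained, so a harmful act can never be preferred at all:

```python
def choose_then_veto(actions, desirability):
    """Physical shackles: the robot may will a harmful act,
    but it is blocked at the moment of execution."""
    chosen = max(actions, key=desirability)
    return None if chosen["harms_human"] else chosen

def choose_with_shackled_will(actions, desirability):
    """Shackled will: harm is scored as unthinkable,
    so it can never be willed in the first place."""
    def constrained(a):
        # (The sketch ignores the edge case where every option is harmful.)
        return float("-inf") if a["harms_human"] else desirability(a)
    return max(actions, key=constrained)
```

Behaviourally the two can look identical from the outside; the difference is whether the constraint sits on the robot's actions or on its preferences.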
 
#20
This is a good point, Steve (re "programmed" feelings): those of us who have experienced consciousness naturally understand that it entails free will, i.e., that it is not (merely) programmed. How is an artificial consciousness going to gain free will when it is based on the determinism which we extract from silicon chips?
I think you reach too far ahead. One might ask first:
How is an artificial intelligence going to gain consciousness when it is based on logic gates on silicon chips?
 