Hey, I'm new here.
For the sake of this discussion, let's presume we're far in the future, where true AI can exist. Let's further grant that the AI has feelings the same as, or close to, a human's. Let's assume it can suffer, that it has knowledge of suffering and of consequences, and that it has a true sense of self.
Pretty much all rules for creating "safe" AI say that an AI cannot be allowed to harm a human. I don't know whether including a "kill switch" is standard protocol, but what other method could be used to control an AI that wanted to harm a human? Even without a kill switch, you would at least need the ability to put the AI to "sleep" and then reprogram it, right?
Well, we have just created a slave, have we not? If you do not include a kill/sleep switch, you have endangered the entire human race; if you do include one, you have just created an entire race of slaves.
I really can't see any way around this moral dilemma.