Well, the problem is that we don't really want this to degenerate into all the usual arguments about Turing tests. We want to decide whether materialism implies that moving the logic from wetware to a computer should affect consciousness.
I get that, but that is not what I am trying to do. I simply do not think that a static state can represent consciousness if consciousness is a dynamic process.
The brain is actually never completely inactive; the closest we come to your static brain state is probably anesthesia.
Well, I am repeating myself here, but the problem is that each time the program runs, it presumably causes the simulant to experience those emotions again.
That is the logical conclusion, but I do not see what the problem is with that.
The big problem with that is that the concept of executing the program isn't really well defined. Since the program takes no input, the compiler can pre-compute as much or as little of it as it likes, even down to the point where all the program does is output the result.
Furthermore, the program simply runs to verify a pre-existing fact: P=>O - so does the program even need to run at all?
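To make the pre-computation point concrete, here is a minimal sketch (the function names and the toy arithmetic are my own stand-ins, not anything from the discussion): because an input-free program is deterministic, an optimizer is free to evaluate it once at build time and replace "running the simulation" with emitting a precomputed constant.

```python
# Hedged sketch: P() is a stand-in for the input-free simulation that
# takes initial state P to final state O. The loop is arbitrary
# deterministic work; any pure, input-free computation behaves the same.

def P():
    state = 1
    for _ in range(10):
        state = (state * 3 + 1) % 97
    return state  # the final state O

# "Pre-compiling": evaluate the whole program once at build time...
O = P()

# ...after which "executing the program" is just returning a constant,
# with none of the intermediate states ever being recomputed.
def precompiled():
    return O

assert P() == precompiled()  # same O, whether or not the work re-runs
```

The question in the dialogue is exactly this: if `precompiled()` and `P()` are interchangeable to every outside observer, is there any fact of the matter about whether the intermediate states "happened" on a given run?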
I do not agree with that at all.
However, the logical conclusion stays the same whether I agree or not. We may have that discussion later.
We now need to assume that no matter how we get to O, at point O the subject will have the memory of having experienced everything that happened from point P to point O. If we assume a physical mind, we also have to assume she has the feeling of time having gone by.
Yet at no point in that timespan, looking at any state at any time, are we going to detect consciousness. That means consciousness is not immediate.
We could say that consciousness only exists as memory.
My speculation would be that we put everything into some sort of narrative. Not that we tell this narrative to our 'self'; it is more that the narrative is the self.
Also, computer simulations are not normally supposed to do anything extra. A simulation of an oil well isn't supposed to gush oil, etc. So why precisely should this simulation actually cause emotion to be felt?
I thought the idea of this thought experiment was to see if we could find a good reason why the simulant would not feel emotion.
I simply want to follow through the materialist concept of the brain. That seems to say that any process logically equivalent to what goes on in the brain would also be conscious.
Exactly.
So if I assume materialism, I think it is reasonable to assume the simulant is conscious (if not, why not).
Nicely put.
So I don't need to test this, but to explore what this implies.
I am not proposing any test; I am claiming that at the arbitrarily chosen static state O no consciousness is present. If we were to run the simulation past point O three days later, the subject would not experience any interruption in time.