No. Not if you believe the brain does not produce consciousness. Neither does the machine in this case. All it needs to do is channel it, as in the Brain/TV analogy. Though I agree that mimicking something in order to understand it better is an interesting approach.
Well, obviously I am inclined to believe that the Mars rover analogy is a better model of consciousness (I say Mars rover because we really need bi-directional information transfer, of course). However, these AI efforts aren't designed to create some form of consciousness filter; they aim to create consciousness as such. You can't assume the machine is explicable by either model - I don't think it is!
That is not to say that someone might not find a way to make a machine that did act as a consciousness filter - a fascinating, if somewhat scary, prospect - but building a machine implies putting parts together to correspond with some theory; if it works, it validates that theory, and it can't work any other way.
Real AI would, I think, disprove the filter model of the brain.
I think my concern nowadays is that we can bring such vast amounts of computation and communication to bear on problems that we can end up with pseudo-AI - which might confuse the issue - something that looks intelligent just because you don't see how it is achieved. A sat-nav would be an example.
Take chess. People like to play this game because it is competitive, it seems to stretch their cognitive abilities, you can join chess clubs, and so on. Now think about a chess program. It has been built out of the experience of players who have studied the game deeply, and it ploughs through the "if he does that, then I can do this" scenarios with enormous speed, but it sure as hell doesn't enjoy the experience, or feel satisfaction at beating an opponent, or enjoy the social aspects of the game, or write books on the subject!
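To make "ploughing through scenarios" concrete, here is a minimal sketch of that kind of look-ahead (Python, purely illustrative, shown on a toy counters game rather than chess so it stays short and self-contained):

```python
# Illustrative only: the brute-force "if he does that, then I can do this"
# search a chess program performs, shown on a toy game so it stays short.
# Toy game: players alternately take 1-3 counters; whoever takes the last
# counter wins.  A real chess engine does the same thing over board
# positions, steered by an evaluation function built from human knowledge.

def minimax(counters, my_turn):
    """Return +1 if the player whose result we score can force a win, else -1."""
    if counters == 0:
        # The previous player took the last counter and has already won.
        return -1 if my_turn else +1
    outcomes = [
        minimax(counters - take, not my_turn)
        for take in (1, 2, 3) if take <= counters
    ]
    # Each side picks the reply that is best for it at every level.
    return max(outcomes) if my_turn else min(outcomes)

if __name__ == "__main__":
    for n in range(1, 9):
        print(n, "counters:", "win" if minimax(n, True) == 1 else "lose")
```

The speed and depth of such a search can be impressive, but there is nothing in that loop that wants to win.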
Some people like going to the gym, but building a machine that could lift the weights for us would be pointless, and wouldn't tell us anything about humans!
I have written software for a living for many years since I gave up science as such, and I am struck by the incredible difference between contrived mental tasks like chess and real mental tasks like software production - or indeed fixing a car, or building a house. We apply consciousness to everything we do. If someone changed the rules of chess, we could at least have a go at playing the new game - but try that with a chess program!
Let me take another example from elementary calculus. Suppose we learn to perform differentiation:
Differentiate with respect to x:
d/dx (a x^2 + b x) = 2a x + b
You can definitely devise a set of rules to perform differentiation, and create a program out of them. The results can be quite spectacular, because you can use such a program to differentiate horrendously complicated expressions. However, does that constitute AI?
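To be concrete, here is a minimal sketch of the sort of rule set I mean (Python, illustrative only; expressions are just nested tuples, and only a handful of rules are included):

```python
# Illustrative only: differentiation as a few mechanical rewrite rules.
# Expressions are nested tuples, e.g. a*x**2 + b*x is
# ('+', ('*', 'a', ('^', 'x', 2)), ('*', 'b', 'x')).
# The program applies the rules; it has no idea what a derivative means.

def diff(expr, var='x'):
    if isinstance(expr, (int, float)) or (isinstance(expr, str) and expr != var):
        return 0                                   # d/dx of a constant
    if expr == var:
        return 1                                   # d/dx of x
    op, u, v = expr
    if op == '+':
        return ('+', diff(u, var), diff(v, var))   # sum rule
    if op == '*':
        return ('+', ('*', diff(u, var), v),
                     ('*', u, diff(v, var)))       # product rule
    if op == '^':                                  # power rule, constant exponent
        return ('*', v, ('*', ('^', u, v - 1), diff(u, var)))
    raise ValueError(f"no rule for {op!r}")

# d/dx (a*x^2 + b*x): the answer comes out unsimplified, but it is correct.
print(diff(('+', ('*', 'a', ('^', 'x', 2)), ('*', 'b', 'x'))))
```

Note what the rules don't contain: nothing in there captures what a rate of change is, or why anyone would want one.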
Unlike chess, this isn't an arbitrary set of rules; people devised them for a purpose, and most students (maybe not all!) do glimpse the reason for those rules, and the purpose of the whole concept. Try teaching the concept of differentiation to a computer - that is a vastly different matter!
In other words, it is possible to cherry-pick the part of a problem that can be mechanised, while forgetting that the humans who use that skill use it in a context that is probably impossible to mechanise. It is awfully easy to overlook this and imagine the computer as having a magnified version of the human skill.
David