I'm sorry I seem to have dismissed your comments on Gothier's paper. I'm afraid I may have come across as intellectually arrogant or snobbish, which is not what I meant. It's just that there's so much contradictory stuff about consciousness out there that I'm reluctant to add my own pieces of irrelevance. In any case, I'd like to comment on this, as it may serve as a basis for further dialog:
Consider the issue of "pain", a favorite example put forth by those who deny that machines can think. Once again one must ask how an entity (human or machine) acquires the meaning of that word. Is it not an input which cannot be ignored until some "causal event" is handled? If that is the case, then certainly machines can feel pain: there are a number of phenomena completely analogous to that situation! Why wouldn't a computer, if it were looking for words to describe its situation, use such a phrase?
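Just to make that "unignorable input" picture concrete, here is a toy Python sketch. The `Agent` class and every name in it are my own invention, a caricature of the idea rather than any real architecture:

```python
# Toy model: "pain" as an input that cannot be ignored until its
# causal event is handled. All names here are invented for illustration.

class Agent:
    def __init__(self):
        self.pain_sources = []   # unhandled "causal events"

    def sense(self, stimulus, damaging=False):
        # Damaging stimuli register as pain sources.
        if damaging:
            self.pain_sources.append(stimulus)

    def in_pain(self):
        return bool(self.pain_sources)

    def handle(self, source):
        pass  # withdraw, repair, call for help, etc.

    def step(self, task):
        # Pain preempts everything: ordinary processing cannot
        # resume until every causal event has been dealt with.
        while self.pain_sources:
            self.handle(self.pain_sources.pop(0))
        return task()

a = Agent()
a.sense("hot surface", damaging=True)
assert a.in_pain()              # the input cannot be ignored...
a.step(lambda: "resume work")   # ...until the cause is handled
assert not a.in_pain()
```

If an agent built like this were searching for a word for the state where `pain_sources` is non-empty, "pain" seems as good a candidate as any.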
I suppose a more interesting question would be: if you could cut the causal link between the perception of the source of pain and the perception of the feeling called 'pain', which is what anesthetics do, would the supposedly conscious machine experience the absence of pain?
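If one models pain as an unignorable input, the anesthetic question can be caricatured too. In this toy Python sketch (again, every name is my invention), an `anesthetized` flag severs the causal link between perceiving the damage and registering the pain:

```python
# Toy model: an "anesthetic" cuts the link between the perception
# of damage and the registration of pain. Invented for illustration.

class Agent:
    def __init__(self):
        self.pain_sources = []
        self.perceived = []
        self.anesthetized = False

    def sense(self, stimulus, damaging=False):
        self.perceived.append(stimulus)  # the damage is still perceived
        if damaging and not self.anesthetized:
            # With the causal link cut, damage never becomes pain.
            self.pain_sources.append(stimulus)

    def in_pain(self):
        return bool(self.pain_sources)

a = Agent()
a.anesthetized = True
a.sense("scalpel", damaging=True)
assert "scalpel" in a.perceived   # it knows about the damage...
assert not a.in_pain()            # ...but feels no pain
```

Whether such an agent would "experience the absence of pain", or simply fail to have any pain state at all, is of course exactly the question.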
It's interesting to note that people who are hurt often don't feel any pain until they realize they have been injured; it happens quite often in sudden accidents. A friend of mine didn't realize the electric saw had cut off three of his fingers until, several seconds later, he saw all the blood; once he found where the blood was coming from, it started hurting like hell. If it can be established as fact that people need to know they're supposed to be in pain before they start feeling it, then your causal-link point becomes moot, which is what I think it is.
I'm very skeptical that we can cause a computer to behave in non-causal ways. But we can always build a machine that behaves non-causally, only we'll have no clue how it works. In fact, we already have the technology to build such machines: it's called 'sex'.
There they are, my very humble two cents...