Re: Chinese Rooms And Turing Tests

Posted by Harvey on May 15, 2003 00:16:38 UTC

Hi Kyle,

Good to see you back once in a while.

***The scenario that he presented raises the central question in the current debate amongst philosophers, neurophysiologists, cognitive psychologists and computer programmers: namely whether or not it is possible for humankind to create a machine-based intelligence that is able to achieve some level of conscious self-awareness. The first step in what may or may not be an impossible task is to have the artificial intelligence pass a Turing Test... in this initial step, the end justifies the means, in other words the functional mimicry of consciousness is what’s important at this early stage, not whether or not the AI is actually conscious and self-aware.***

Ironically, this is exactly what Searle's Chinese Room thought experiment was designed to argue against. It is anti-AI. The point of the Chinese Room is that whatever the computer does, at no time does it engage in understanding what it is doing. Everything can be broken down into electronic/biological bits and bytes, and of course bits and bytes don't actually understand anything; it is all electrons moving about. Humans, however, do understand. There's a moment when we say, "aha, now I understand what you mean, and here's the Chinese or English word that means that very thing." In the case of a computer translating Chinese into English, the computer doesn't actually understand Chinese; it is just doing table look-ups of Chinese words and reciting the English word or rule associated with each one.
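To make the look-up picture concrete, here is a minimal sketch in Python (the table entries and names are illustrative, not from Searle); the point is that nothing in this process could count as "understanding" either language:

    # A toy "Chinese Room": map Chinese symbols to English output by rote.
    RULE_TABLE = {          # hypothetical entries; a real system would need vastly more
        "你好": "hello",
        "谢谢": "thank you",
        "再见": "goodbye",
    }

    def chinese_room(symbol):
        """Return the English word paired with the Chinese symbol, or a shrug."""
        return RULE_TABLE.get(symbol, "?")   # pure table look-up; no semantics involved

    print(chinese_room("你好"))   # "hello" -- produced without understanding either word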

One of the faults of Searle's thought experiment, from my perspective anyway, is that we are not necessarily aware of what is going on beneath the surface of our own thoughts. Who's to say that 'understanding' is itself a program of detailed bits and bytes that gets 'run' at the moment a solution is achieved? For example, suppose that when the computer looks up a Chinese word in its table, finding a match gives it a reference to the English word, and as part of that look-up it also runs an 'understand' program. In us, that program might be nothing more than a release of chemicals that relaxes our muscles, gives us a 'high', and so on, and throughout our early lives we have simply come to associate that chemically caused feeling with the mood of understanding. At least, this seems plausible to me.
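A sketch of that counter-idea, under the same illustrative assumptions (the function names here are invented, and the 'understand' routine merely stands in for whatever the chemical release amounts to):

    # The same look-up, but a successful match also triggers an "understand" routine.
    RULE_TABLE = {"你好": "hello", "谢谢": "thank you"}   # hypothetical entries

    def understand(chinese, english):
        # Placeholder for whatever "understanding" amounts to physically;
        # here it just records that a match fired.
        print(f"aha: {chinese} means {english}")

    def chinese_room_with_feeling(symbol):
        english = RULE_TABLE.get(symbol)
        if english is not None:
            understand(symbol, english)   # the 'understand' program runs on a match
        return english or "?"

    chinese_room_with_feeling("谢谢")     # prints the "aha" and returns "thank you"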

So, I think the jury is still out on whether AI is possible, both in the sense that 1) it is theoretically possible and 2) it is practically possible with some machine and software yet to be constructed. I tend to think such AI programs are possible, but we are arguing about things we do not yet know.

***How do I know that you are actually conscious, Harvey (or anyone else on this forum)? The only pieces of evidence that I have are your words and the 'sense' that they make... and this evidence implies that the person responsible for those words is reasonable and intelligent, like me. I then assume that with reason and intelligence comes conscious self-awareness.***

The trick in that is defining consciousness and deciding what level of proof is necessary to reach a conclusion. If we define consciousness as something with definite outward signs (i.e., a pragmatic definition), then we know a computer is conscious if it shows those outward signs. If we define consciousness as something that we feel, then we might be able to find the cause of the feeling (e.g., some neurotransmitter), and in that case you would have a plausible reason for suspecting consciousness in me if my body chemistry has that neurotransmitter and my brain structure is similar to yours. On the other hand, if we get into some kind of deep concern about what we actually know as a result of science (i.e., epistemological issues), then more than just the question of other people's consciousness comes into doubt.

There are, of course, limitations of knowledge; this is what Gödel and Turing both showed. However, I think the question of consciousness does not fall into those categories. I think of consciousness as a scientific question with a standard of proof far less stringent than settling whether Goldbach's conjecture is true (i.e., the conjecture that every even integer greater than 2 is the sum of two primes - no counterexamples so far after billions of calculations, but still not proved!). In the case of a scientific question, we look for reasonable evidence to confirm or reject a hypothesis based on the predictions of the scientific model. That's how I see the dilemma of consciousness in machines, and I see it as answerable in the scientific sense. But I'm not telling you anything new, am I?
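For what it's worth, the kind of brute-force check alluded to above is easy to sketch (the bound of 10,000 is arbitrary; finding no counterexample in any finite range is evidence, not proof):

    # Confirm that every even integer from 4 up to a bound is a sum of two primes.
    def is_prime(n):
        if n < 2:
            return False
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return False
        return True

    def goldbach_holds(n):
        """True if the even number n can be written as a sum of two primes."""
        return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

    assert all(goldbach_holds(n) for n in range(4, 10_001, 2))
    print("No counterexample to Goldbach below 10,000")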
