Alex,
The Chinese Room thought experiment can be stated as follows:
"The Chinese Room Thought Experiment
Against "strong AI," Searle (1980a) asks you to imagine yourself a monolingual English speaker "locked in a room, and given a large batch of Chinese writing" plus "a second batch of Chinese script" and "a set of rules" in English "for correlating the second batch with the first batch." The rules "correlate one set of formal symbols with another set of formal symbols"; "formal" (or "syntactic") meaning you "can identify the symbols entirely by their shapes." A third batch of Chinese symbols and more instructions in English enable you "to correlate elements of this third batch with elements of the first two batches" and instruct you, thereby, "to give back certain sorts of Chinese symbols with certain sorts of shapes in response." Those giving you the symbols "call the first batch 'a script' [a data structure with natural language processing applications], "they call the second batch 'a story', and they call the third batch 'questions'; the symbols you give back "they call . . . 'answers to the questions'"; "the set of rules in English . . . they call 'the program'": you yourself know none of this. Nevertheless, you "get so good at following the instructions" that "from the point of view of someone outside the room" your responses are "absolutely indistinguishable from those of Chinese speakers." Just by looking at your answers, nobody can tell you "don't speak a word of Chinese." Producing answers "by manipulating uninterpreted formal symbols," it seems "[a]s far as the Chinese is concerned," you "simply behave like a computer"; specifically, like a computer running Schank and Abelson's (1977) "Script Applier Mechanism" story understanding program (SAM), which Searle's takes for his example. But in imagining himself to be the person in the room, Searle thinks it's "quite obvious . . . I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing." "For the same reasons," Searle concludes, "Schank's computer understands nothing of any stories" since "the computer has nothing more than I have in the case where I understand nothing" (1980a, p. 418). Furthermore, since in the thought experiment "nothing . . . depends on the details of Schank's programs," the same "would apply to any [computer] simulation" of any "human mental phenomenon" (1980a, p. 417); that's all it would be, simulation. Contrary to "strong AI", then, no matter how intelligent-seeming a computer behaves and no matter what programming makes it behave that way, since the symbols it processes are meaningless (lack semantics) to it, it's not really intelligent. It's not actually thinking. Its internal states and processes, being purely syntactic, lack semantics (meaning); so, it doesn't really have intentional (i.e., meaningful) mental states."
- Quoted from the Internet Encyclopedia of Philosophy
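A side note from me, not from the encyclopedia: the "uninterpreted formal symbols" point is easy to picture as code. Below is a minimal, purely illustrative sketch in Python (my own toy example, not Schank and Abelson's actual SAM program; the particular rules and symbols are invented). A rule book pairs input strings with output strings by shape alone, and the "person in the room" just looks them up. The English glosses in the comments are for us; nothing in the program uses them.

    # Toy illustration of purely syntactic symbol manipulation.
    # The rule book correlates "question" symbols with "answer" symbols
    # by their shapes alone; the program never represents what they mean.
    # (Invented rules for illustration; this is not SAM.)
    RULE_BOOK = {
        "你喜欢这个故事吗？": "是的，我很喜欢。",   # gloss: "Do you like the story?" -> "Yes, very much."
        "故事里有几个人？": "故事里有两个人。",     # gloss: "How many people are in the story?" -> "Two."
    }

    def person_in_the_room(question: str) -> str:
        """Return whatever symbols the rule book pairs with the input.

        The lookup matches character shapes only; the function has no
        access to, and no need for, the symbols' meanings.
        """
        return RULE_BOOK.get(question, "请再说一遍。")  # fallback symbols, gloss: "Please say that again."

    if __name__ == "__main__":
        print(person_in_the_room("故事里有几个人？"))

The lookup succeeds or fails purely on the shapes of the characters; as I read Searle, that is all the person in the room has, and all he thinks any program has.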
Warm regards,
Harv