The Chinese Room Thought Experiment: Far Out?

Bob Damper (University of Southampton)

Few publications in artificial intelligence, cognitive science and the philosophy of mind have stirred up as much controversy as the celebrated Chinese Room argument (CRA) of John Searle (1980). To some extent, Searle set his own agenda for the subsequent debate by outlining and answering now-famous objections to the CRA (the systems reply, the robot reply, the brain simulator reply, ...), yet remarkably little attention has been paid to assessing the CRA as a thought experiment per se. Two notable exceptions are Cole (1984, 1991) and Brown (1991).

Brown credits Wilkes (1988) with the view that thought experiments are useful in physical science but not in the philosophy of mind. The problem is that the latter “take us too far from reality”. Thought experiments work by evoking intuitions which, although they can be extremely useful in many cases, can also be misleading. Wilkes's concern is that imaginary cases in domains such as personal identity or moral philosophy which are wildly implausible and/or lack sufficient background definition can call into play unreliable if not erroneous intuitions. These can be recognized as the “far out antifallacy” and the “fallacy of undersupposing”, respectively, of Sorensen (1992). Brown argues mildly against Wilkes in the specific instance of the CRA, saying “... there is enough background information to legitimize ... Searle’s Chinese room thought experiment”. However, he presents no arguments to support this position, and many would feel that Searle signally does fail to give enough background.

Contrary to Brown, Cole (1984) attacks the CRA as a thought experiment on two grounds: (1) it is not clear (in spite of Searle’s denials) that the human in the Chinese room does not understand; (2) there is an important disanalogy between the machine simulation of human performance and the human simulation of machine performance. He argues that “a fallacy of composition is at work here” and is likely to occur “whenever one takes the perspective of the subsystem or constituent”. Cole has subsequently (1991) refined this into his multi-personality reply to the CRA: a mind realised by running a computer program as Searle envisages would be a new entity, logically distinct from the person or computer executing the instructions. This is strongly reminiscent of the well-known arguments of Dennett and Hofstadter (e.g., Hofstadter and Dennett 1981).

In his introduction to Views into the Chinese Room, a recent review of the CRA and its two-decade history, Preston (2002) attempts to dismiss certain “misunderstandings ... which should be quashed from the start”. One is that “Searle’s scenario is unrealistic ...”; Preston believes that it is in principle irrelevant that the human simulator would have to work at unimaginable speed or might be unable to memorise the programs in question (cf. the far out antifallacy). But the CRA turns on Searle’s claim to know what it would be like to be the human simulator and, further, his claim that all readers know it too, by virtue of being human. Yet he cannot know this; it is too far beyond our experience. Preston also considers briefly the fact that the CRA is (rather obviously) a thought experiment. He sees nothing remarkable in this; apparently, to do so is another ‘misunderstanding’.
According to Preston, a thought experiment reflects on what would follow in some counterfactual situation and “in this respect it does not differ from Einstein's request for us to imagine what we would observe if, per impossibile, we were riding on the front of a beam of light”. But is this really the case? Are thought experiments in physics, where we have a sound body of extant theory to guide us, the same sort of enterprise as thought experiments in the philosophy of mind and cognitive science, where no such theoretical underpinnings exist? According to Sorensen (1992), an antifallacy is a good inference rule that looks like a bad one. He specifically deals with objections to a thought experiment on the grounds that it is “too unrealistic” or “too bizarre” under the name of the “far out antifallacy”. This is at the heart of many AI scientists' rejections of the CRA (e.g., French 2000; Brooks 2002).

So is it really an antifallacy, in and of itself? Sorensen certainly believes so; pointing out the popularity of this objection, he calls it the “master antifallacy ... the rich man's version”. He avers that a demonstration that the supposition of a thought experiment suffers from the ‘right’ kind of impossibility constitutes a legitimate and successful attack, but it is not easy to see precisely what he means by this. He seems to have in mind that there are different kinds of impossibility (logical, physical, practical, ...) and only logical impossibility is the right kind. Well, if so, there are certainly many who believe that Searle describes a logical impossibility. Although this may seem to give Searle an easy out (“There, I told you strong AI was impossible!”), it is only Searle’s conception of strong AI that is refuted, and many believe this to be a straw man (e.g., Harnad 2002).

In what Sorensen calls the fallacy of undersupposing, the designer of the thought experiment fails to be specific enough, with the consequence that the audience “... unwittingly read in extraneous details. If their creativity leads them to supply diverging details, they become embroiled in a dispute or seduced into a consensus that is merely verbal”. This seems to me to be a very fair characterisation of the CRA debate over the last 20-odd years! So what specific details has Searle left out? A reasonable answer to this question is: everything! Searle prides himself on what he sees as the conciseness of his CRA, yet it is concise just because it remains silent on the internal workings of the AI program, its underlying assumptions, how it handles world knowledge in such a way as to cope with the frame problem, how it is able to answer context-dependent questions (like “what was the question that I asked just before the last one?”), and so on. And, of course, Searle cannot supply these details because he has no idea how to construct an AI program capable of passing the Turing test; no one does.

References

Brooks, R. A. (2002). Robot: The Future of Flesh and Machines. London: Penguin.

Brown, J. R. (1991). The Laboratory of the Mind: Thought Experiments in the Natural Sciences (1993 paperback ed.). London and New York: Routledge.

Cole, D. (1984). Thought and thought experiments. Philosophical Studies 45, 431-444.

Cole, D. (1991). Artificial intelligence and personal identity. Synthese 88, 399-417.

French, R. M. (2000). The Chinese room: Just say “no”! In Proceedings of the 22nd Annual Cognitive Science Society Conference, Philadelphia, PA, pp. 657-662.

Harnad, S. (2002). Minds, machines and Searle 2: What's wrong and right about the Chinese room argument. See Preston and Bishop (2002), pp. 294-307.

Hofstadter, D. R. and D. C. Dennett (1981). The Mind's I: Fantasies and Reflections on Self and Soul. Brighton: Harvester Press.

Preston, J. (2002). Introduction. See Preston and Bishop (2002), pp. 1-50.

Preston, J. and M. Bishop (Eds.) (2002). Views into the Chinese Room: Essays on Searle and Artificial Intelligence. Oxford: Clarendon Press.

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences 3 (3), 417-457. (Including peer commentary.)

Sorensen, R. A. (1992). Thought Experiments. New York: Oxford University Press.

Wilkes, K. V. (1988). Real People: Personal Identity without Thought Experiments. Oxford: Clarendon Press.