Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.

6.1c. The Chinese Room

Adam, Alison (2003). Cyborgs in the chinese room: Boundaries transgressed and boundaries blurred. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Google)
Aleksander, Igor L. (2003). Neural depictions of "world" and "self": Bringing computational understanding into the chinese room. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Google)
Anderson, David (1987). Is the chinese room the real thing? Philosophy 62 (July):389-93.   (Cited by 9 | Google)
Andrews, Kristin (online). On predicting behavior.   (Google)
Abstract: I argue that the behavior of other agents is insufficiently described in current debates as a dichotomy between tacit theory (attributing beliefs and desires to predict behavior) and simulation theory (imagining what one would do in similar circumstances in order to predict behavior). I introduce two questions about the foundation and development of our ability both to attribute belief and to simulate it. I then propose that there is one additional method used to predict behavior, namely, an inductive strategy
Atlas, Jay David, What is it like to be a chinese room?   (Google | More links)
Abstract: When philosophers think about mental phenomena, they focus on several features of human experience: (1) the existence of consciousness, (2) the intentionality of mental states, that property by which beliefs, desires, anger, etc. are directed at, are about, or refer to objects and states of affairs, (3) subjectivity, characterized by my feeling my pains but not yours, by my experiencing the world and myself from my point of view and not yours, (4) mental causation, that thoughts and feelings have physical effects on the world: I decide to raise my arm and my arm rises. In a world described by theories of physics and chemistry, what place in that physical description do descriptions of the mental have?
Ben-Yami, Hanoch (1993). A note on the chinese room. Synthese 95 (2):169-72.   (Cited by 3 | Annotation | Google | More links)
Abstract:   Searle's Chinese Room was supposed to prove that computers can't understand: the man in the room, following, like a computer, syntactical rules alone, though indistinguishable from a genuine Chinese speaker, doesn't understand a word. But such a room is impossible: the man won't be able to respond correctly to questions like 'What is the time?', even though such an ability is indispensable for a genuine Chinese speaker. Several ways to provide the room with the required ability are considered, and it is concluded that for each of these the room will have understanding. Hence, Searle's argument is invalid
Block, Ned (2003). Searle's arguments against cognitive science. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 2 | Google)
Boden, Margaret A. (1988). Escaping from the chinese room. In Computer Models of Mind. Cambridge University Press.   (Cited by 21 | Annotation | Google)
Bringsjord, Selmer & Noel, Ron (2003). Real robots and the missing thought-experiment in the chinese room dialectic. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Google | More links)
Brown, Steven Ravett (2000). Peirce and formalization of thought: The chinese room argument. Journal of Mind and Behavior.   (Google | More links)
Abstract: Whether human thinking can be formalized and whether machines can think in a human sense are questions that have been addressed by both Peirce and Searle. Peirce came to roughly the same conclusion as Searle, that the digital computer would not be able to perform human thinking or possess human understanding. However, his rationale and Searle's differ on several important points. Searle approaches the problem from the standpoint of traditional analytic philosophy, where the strict separation of syntax and semantics renders understanding impossible for a purely syntactical device. Peirce disagreed with that analysis, but argued that the computer would only be able to achieve algorithmic thinking, which he considered the simplest type. Although their approaches were radically dissimilar, their conclusions were not. I will compare and analyze the arguments of both Peirce and Searle on this issue, and outline some implications of their conclusions for the field of Artificial Intelligence
Button, Graham; Coulter, Jeff & Lee, John R. E. (2000). Re-entering the chinese room: A reply to Gottfried and Traiger. Minds and Machines 10 (1):145-148.   (Google | More links)
Bynum, Terrell Ward (1985). Artificial intelligence, biology, and intentional states. Metaphilosophy 16 (October):355-77.   (Cited by 9 | Annotation | Google | More links)
Cam, Philip (1990). Searle on strong AI. Australasian Journal of Philosophy 68 (1):103-8.   (Cited by 2 | Annotation | Google | More links)
Carleton, Lawrence Richard (1984). Programs, language understanding, and Searle. Synthese 59 (May):219-30.   (Cited by 8 | Annotation | Google | More links)
Chalmers, David J. (1992). Subsymbolic computation and the chinese room. In J. Dinsmore (ed.), The Symbolic and Connectionist Paradigms: Closing the Gap. Lawrence Erlbaum.   (Cited by 29 | Annotation | Google | More links)
Abstract: More than a decade ago, philosopher John Searle started a long-running controversy with his paper “Minds, Brains, and Programs” (Searle, 1980a), an attack on the ambitious claims of artificial intelligence (AI). With his now famous _Chinese Room_ argument, Searle claimed to show that despite the best efforts of AI researchers, a computer could never recreate such vital properties of human mentality as intentionality, subjectivity, and understanding. The AI research program is based on the underlying assumption that all important aspects of human cognition may in principle be captured in a computational model. This assumption stems from the belief that beyond a certain level, implementational details are irrelevant to cognition. According to this belief, neurons, and biological wetware in general, have no preferred status as the substrate for a mind. As it happens, the best examples of minds we have at present have arisen from a carbon-based substrate, but this is due to constraints of evolution and possibly historical accidents, rather than to an absolute metaphysical necessity. As a result of this belief, many cognitive scientists have chosen to focus not on the biological substrate of the mind, but instead on the _abstract causal structure_ that the mind embodies (at an appropriate level of abstraction). The view that it is abstract causal structure that is essential to mentality has been an implicit assumption of the AI research program since Turing (1950), but was first articulated explicitly, in various forms, by Putnam (1960), Armstrong (1970) and Lewis (1970), and has become known as _functionalism_. From here, it is a very short step to _computationalism_, the view that computational structure is what is important in capturing the essence of mentality. This step follows from a belief that any abstract causal structure can be captured computationally: a belief made plausible by the Church–Turing Thesis, which articulates the power
Churchland, Paul M. & Churchland, Patricia S. (1990). Could a machine think? Scientific American 262 (1):32-37.   (Cited by 102 | Annotation | Google | More links)
Cohen, L. Jonathan (1986). What sorts of machines can understand the symbols they use? Proceedings of the Aristotelian Society 60:81-96.   (Google)
Cole, David J. (1991). Artificial intelligence and personal identity. Synthese 88 (September):399-417.   (Cited by 18 | Annotation | Google | More links)
Abstract:   Considerations of personal identity bear on John Searle's Chinese Room argument, and on the opposed position that a computer itself could really understand a natural language. In this paper I develop the notion of a virtual person, modelled on the concept of virtual machines familiar in computer science. I show how Searle's argument, and J. Maloney's attempt to defend it, fail. I conclude that Searle is correct in holding that no digital machine could understand language, but wrong in holding that artificial minds are impossible: minds and persons are not the same as the machines, biological or electronic, that realize them
Cole, David J. (1991). Artificial minds: Cam on Searle. Australasian Journal of Philosophy 69 (September):329-33.   (Cited by 3 | Google | More links)
Cole, David J. (1984). Thought and thought experiments. Philosophical Studies 45 (May):431-44.   (Cited by 15 | Annotation | Google | More links)
Cole, David J. (1994). The causal powers of CPUs. In Eric Dietrich (ed.), Thinking Computers and Virtual Persons. Academic Press.   (Cited by 2 | Google)
Cole, David (online). The chinese room argument. Stanford Encyclopedia of Philosophy.   (Google)
Copeland, B. Jack (1993). The curious case of the chinese gym. Synthese 95 (2):173-86.   (Cited by 12 | Annotation | Google | More links)
Abstract:   Searle has recently used two adaptations of his Chinese room argument in an attack on connectionism. I show that these new forms of the argument are fallacious. First I give an exposition of and rebuttal to the original Chinese room argument, and then a brief introduction to the essentials of connectionism
Copeland, B. Jack (2003). The chinese room from a logical point of view. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 5 | Google)
Coulter, Jeff & Sharrock, S. (2003). The hinterland of the chinese room. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Google)
Cutrona, Jr (ms). Zombies in Searle's chinese room: Putting the Turing test to bed.   (Google | More links)
Abstract: Searle’s discussions over the years 1980-2004 of the implications of his “Chinese Room” Gedanken experiment are frustrating because they proceed from a correct assertion: (1) “Instantiating a computer program is never by itself a sufficient condition of intentionality;” and an incorrect assertion: (2) “The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program.” In this article, I describe how to construct a Gedanken zombie Chinese Room program that will pass the Turing test and at the same time unambiguously demonstrates the correctness of (1). I then describe how to construct a Gedanken Chinese brain program that will pass the Turing test, has a mind, and understands Chinese, thus demonstrating that (2) is incorrect. Searle’s instantiation of this program can and does produce intentionality. Searle’s longstanding ignorance of Chinese is simply irrelevant and always has been. I propose a truce and a plan for further exploration
Damper, Robert I. (2004). The chinese room argument--dead but not yet buried. Journal of Consciousness Studies 11 (5-6):159-169.   (Cited by 2 | Google | More links)
Damper, Robert I. (2006). The logic of Searle's chinese room argument. Minds and Machines 16 (2):163-183.   (Google | More links)
Abstract: John Searle’s Chinese room argument (CRA) is a celebrated thought experiment designed to refute the hypothesis, popular among artificial intelligence (AI) scientists and philosophers of mind, that “the appropriately programmed computer really is a mind”. Since its publication in 1980, the CRA has evoked an enormous amount of debate about its implications for machine intelligence, the functionalist philosophy of mind, theories of consciousness, etc. Although the general consensus among commentators is that the CRA is flawed, and notwithstanding the popularity of the systems reply in some quarters, there is remarkably little agreement on exactly how and why it is flawed. A newcomer to the controversy could be forgiven for thinking that the bewildering collection of diverse replies to Searle betrays a tendency to unprincipled, ad hoc argumentation and, thereby, a weakness in the opposition’s case. In this paper, treating the CRA as a prototypical example of a ‘destructive’ thought experiment, I attempt to set it in a logical framework (due to Sorensen), which allows us to systematise and classify the various objections. Since thought experiments are always posed in narrative form, formal logic by itself cannot fully capture the controversy. On the contrary, much also hinges on how one translates between the informal everyday language in which the CRA was initially framed and formal logic and, in particular, on the specific conception(s) of possibility that one reads into the logical formalism
Dennett, Daniel C. (1987). Fast thinking. In The Intentional Stance. MIT Press.   (Cited by 12 | Annotation | Google)
Double, Richard (1984). Reply to C.A. Fields' "Double on Searle's chinese room". Nature and System 6 (March):55-58.   (Google)
Double, Richard (1983). Searle, programs and functionalism. Nature and System 5 (March-June):107-14.   (Cited by 3 | Annotation | Google)
Dyer, Michael G. (1990). Finding lost minds. Journal of Experimental and Theoretical Artificial Intelligence 2:329-39.   (Cited by 3 | Annotation | Google | More links)
Dyer, Michael G. (1990). Intentionality and computationalism: Minds, machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2:303-19.   (Cited by 23 | Annotation | Google | More links)
Fields, Christopher A. (1984). Double on Searle's chinese room. Nature and System 6 (March):51-54.   (Annotation | Google)
Fisher, Justin C. (1988). The wrong stuff: Chinese rooms and the nature of understanding. Philosophical Investigations 11 (October):279-99.   (Cited by 2 | Google)
Fodor, Jerry A. (1991). Yin and Yang in the chinese room. In D. Rosenthal (ed.), The Nature of Mind. Oxford University Press.   (Cited by 5 | Annotation | Google)
Fulda, Joseph S. (2006). A plea for automated language-to-logical-form converters. RASK: Internationalt tidsskrift for sprog og kommunikation 24:87-102.   (Google)
Millikan, Ruth G. (2005). Some reflections on the theory theory - simulation theory discussion. In Susan Hurley & Nick Chater (eds.), Perspectives on Imitation: From Mirror Neurons to Memes, Vol II. MIT Press.   (Google)
Globus, Gordon G. (1991). Deconstructing the chinese room. Journal of Mind and Behavior 12 (3):377-91.   (Cited by 4 | Google)
Gozzano, Simone (1995). Consciousness and understanding in the chinese room. Informatica 19:653-56.   (Cited by 1 | Google)
Abstract: In this paper I submit that the “Chinese room” argument rests on the assumption that understanding a sentence necessarily implies being conscious of its content. However, this assumption can be challenged by showing that two notions of consciousness come into play, one to be found in AI, the other in Searle’s argument, and that the former is an essential condition for the notion used by Searle. If Searle discards the first, he not only has trouble explaining how we can learn a language but finds the validity of his own argument in jeopardy
Gozzano, Simone (1997). The chinese room argument: Consciousness and understanding. In Matjaz Gams, M. Paprzycki & X. Wu (eds.), Mind Versus Computer: Were Dreyfus and Winograd Right? Amsterdam: IOS Press.   (Google | More links)
Hanna, Patricia (1985). Causal powers and cognition. Mind 94 (373):53-63.   (Cited by 2 | Annotation | Google | More links)
Harrison, David (1997). Connectionism hits the chinese gym. Connexions 1.   (Google)
Harnad, Stevan (1990). Lost in the hermeneutic hall of mirrors. Journal of Experimental and Theoretical Artificial Intelligence 2:321-27.   (Annotation | Google | More links)
Abstract: Critique of Computationalism as merely projecting hermeneutics (i.e., meaning originating from the mind of an external interpreter) onto otherwise intrinsically meaningless symbols. Projecting an interpretation onto a symbol system results in its being reflected back, in a spuriously self-confirming way
Harnad, Stevan (1989). Minds, machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence 1 (4):5-25.   (Cited by 113 | Annotation | Google | More links)
Abstract: Searle's celebrated Chinese Room Argument has shaken the foundations of Artificial Intelligence. Many refutations have been attempted, but none seem convincing. This paper is an attempt to sort out explicitly the assumptions and the logical, methodological and empirical points of disagreement. Searle is shown to have underestimated some features of computer modeling, but the heart of the issue turns out to be an empirical question about the scope and limits of the purely symbolic (computational) model of the mind. Nonsymbolic modeling turns out to be immune to the Chinese Room Argument. The issues discussed include the Total Turing Test, modularity, neural modeling, robotics, causality and the symbol-grounding problem
Harnad, Stevan (2003). Minds, machines, and Searle 2: What's right and wrong about the chinese room argument. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 4 | Google | More links)
Abstract: When in 1979 Zenon Pylyshyn, associate editor of Behavioral and Brain Sciences (BBS, a peer commentary journal which I edit) informed me that he had secured a paper by John Searle with the unprepossessing title of [XXXX], I cannot say that I was especially impressed; nor did a quick reading of the brief manuscript -- which seemed to be yet another tedious "Granny Objection"[1] about why/how we are not computers -- do anything to upgrade that impression
Harnad, Stevan (2001). Rights and wrongs of Searle's chinese room argument. In M. Bishop & J. Preston (eds.), Essays on Searle's Chinese Room Argument. Oxford University Press.   (Google | More links)
Abstract: "in an academic generation a little overaddicted to "politesse," it may be worth saying that violent destruction is not necessarily worthless and futile. Even though it leaves doubt about the right road for London, it helps if someone rips up, however violently, a
Harnad, Stevan, Searle's chinese room argument.   (Google)
Abstract: Computationalism. According to computationalism, to explain how the mind works, cognitive science needs to find out what the right computations are -- the same ones that the brain performs in order to generate the mind and its capacities. Once we know that, then every system that performs those computations will have those mental states: Every computer that runs the mind's program will have a mind, because computation is hardware independent : Any hardware that is running the right program has the right computational states
Harnad, Stevan (2001). What's wrong and right about Searle's chinese room argument? In Michael A. Bishop & John M. Preston (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 1 | Google | More links)
Abstract: Searle's Chinese Room Argument showed a fatal flaw in computationalism (the idea that mental states are just computational states) and helped usher in the era of situated robotics and symbol grounding (although Searle himself thought neuroscience was the only correct way to understand the mind)
Hauser, Larry (online). Chinese room argument. Internet Encyclopedia of Philosophy.   (Google)
Hauser, Larry (2003). Nixin' goes to china. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 3 | Google)
Abstract: The intelligent-seeming deeds of computers are what occasion philosophical debate about artificial intelligence (AI) in the first place. Since evidence of AI is not bad, arguments against seem called for. John Searle's Chinese Room Argument (1980a, 1984, 1990, 1994) is among the most famous and long-running would-be answers to the call. Surprisingly, both the original thought experiment (1980a) and Searle's later would-be formalizations of the embedding argument (1984, 1990) are quite unavailing against AI proper (claims that computers do or someday will think ). Searle lately even styles it a "misunderstanding" (1994, p. 547) to think the argument was ever so directed! The Chinese room is now advertised to target Computationalism (claims that computation is what thought essentially is ) exclusively. Despite its renown, the Chinese Room Argument is totally ineffective even against this target
Hauser, Larry (1993). Searle's Chinese Box: The Chinese Room Argument and Artificial Intelligence. Dissertation, University of Michigan   (Cited by 11 | Google)
Hauser, Larry (1997). Searle's chinese box: Debunking the chinese room argument. Minds and Machines 7 (2):199-226.   (Cited by 17 | Google | More links)
Abstract: John Searle's Chinese room argument is perhaps the most influential and widely cited argument against artificial intelligence (AI). Understood as targeting AI proper -- claims that computers can think or do think -- Searle's argument, despite its rhetorical flash, is logically and scientifically a dud. Advertised as effective against AI proper, the argument, in its main outlines, is an ignoratio elenchi. It musters persuasive force fallaciously by indirection fostered by equivocal deployment of the phrase "strong AI" and reinforced by equivocation on the phrase "causal powers (at least) equal to those of brains." On a more carefully crafted understanding -- understood just to target metaphysical identification of thought with computation ("Functionalism" or "Computationalism") and not AI proper -- the argument is still unsound, though more interestingly so. It's unsound in ways difficult for high church -- "someday my prince of an AI program will come" -- believers in AI to acknowledge without undermining their high church beliefs. The ad hominem bite of Searle's argument against the high church persuasions of so many cognitive scientists, I suggest, largely explains the undeserved repute this really quite disreputable argument enjoys among them
Hauser, Larry (online). Searle's chinese room argument. Field Guide to the Philosophy of Mind.   (Google)
Abstract: John Searle's (1980a) thought experiment and associated (1984a) argument is one of the best known and widely credited counters to claims of artificial intelligence (AI), i.e., to claims that computers _do_ or at least _can_ (roughly, someday will) think. According to Searle's original presentation, the argument is based on two truths: _brains cause minds_, and _syntax doesn't suffice for semantics_. Its target, Searle dubs "strong AI": "according to strong AI," according to Searle, "the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really _is_ a mind in the sense that computers given the right programs can be literally said to _understand_ and have other cognitive states" (1980a, p. 417). Searle contrasts "strong AI" to "weak AI". According to weak AI, according to Searle, computers just
Hauser, Larry (online). The chinese room argument.   (Cited by 6 | Google)
Abstract: _The Chinese room argument_ - John Searle's (1980a) thought experiment and associated (1984) derivation - is one of the best known and widely credited counters to claims of artificial intelligence (AI), i.e., to claims that computers _do_ or at least _can_ (someday might) think. According to Searle's original presentation, the argument is based on two truths: _brains cause minds_, and _syntax doesn't suffice for semantics_. Its target, Searle dubs "strong AI": "according to strong AI," according to Searle, "the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really _is_ a mind in the sense that computers given the right programs can be literally said to _understand_ and have other cognitive states" (1980a, p. 417). Searle contrasts "strong AI" to "weak AI". According to weak AI, according to Searle, computers just
Hayes, Patrick; Harnad, Stevan; Perlis, Donald R. & Block, Ned (1992). Virtual symposium on virtual mind. [Journal (Paginated)] 2 (3):217-238.   (Cited by 21 | Annotation | Google | More links)
Abstract: When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called "virtual" systems. If such a virtual system is interpretable as if it had a mind, is such a "virtual mind" real? This is the question addressed in this "virtual" symposium, originally conducted electronically among four cognitive scientists: Donald Perlis, a computer scientist, argues that according to the computationalist thesis, virtual minds are real and hence Searle's Chinese Room Argument fails, because if Searle memorized and executed a program that could pass the Turing Test in Chinese he would have a second, virtual, Chinese-understanding mind of which he was unaware (as in multiple personality). Stevan Harnad, a psychologist, argues that Searle's Argument is valid, virtual minds are just hermeneutic overinterpretations, and symbols must be grounded in the real world of objects, not just the virtual world of interpretations. Computer scientist Patrick Hayes argues that Searle's Argument fails, but because Searle does not really implement the program: A real implementation must not be homuncular but mindless and mechanical, like a computer. Only then can it give rise to a mind at the virtual level. Philosopher Ned Block suggests that there is no reason a mindful implementation would not be a real one
Hofstadter, Douglas R. (1981). Reflections on Searle. In Douglas R. Hofstadter & Daniel C. Dennett (eds.), The Mind's I. Basic Books.   (Cited by 1 | Annotation | Google)
Jacquette, Dale (1989). Adventures in the chinese room. Philosophy and Phenomenological Research 49 (June):605-23.   (Cited by 5 | Annotation | Google | More links)
Jacquette, Dale (1990). Fear and loathing (and other intentional states) in Searle's chinese room. Philosophical Psychology 3 (2 & 3):287-304.   (Annotation | Google)
Abstract: John R. Searle's problem of the Chinese Room poses an important philosophical challenge to the foundations of strong artificial intelligence, and functionalist, cognitivist, and computationalist theories of mind. Searle has recently responded to three categories of criticisms of the Chinese Room and the consequences he attempts to draw from it, redescribing the essential features of the problem, and offering new arguments about the syntax-semantics gap it is intended to demonstrate. Despite Searle's defense, the Chinese Room remains ineffective as a counterexample, and poses no real threat to artificial intelligence or mechanist philosophy of mind. The thesis that intentionality is a primitive irreducible relation exemplified by biological phenomena is preferred in opposition to Searle's contrary claim that intentionality is a biological phenomenon exhibiting abstract properties
Jacquette, Dale (1989). Searle's intentionality thesis. Synthese 80 (August):267-75.   (Cited by 1 | Annotation | Google | More links)
Jahren, Neal (1990). Can semantics be syntactic? Synthese 82 (3):309-28.   (Cited by 3 | Annotation | Google | More links)
Abstract:   The author defends John R. Searle's Chinese Room argument against a particular objection made by William J. Rapaport called the Korean Room. Foundational issues such as the relationship of strong AI to human mentality and the adequacy of the Turing Test are discussed. Through undertaking a Gedankenexperiment similar to Searle's but which meets new specifications given by Rapaport for an AI system, the author argues that Rapaport's objection to Searle does not stand and that Rapaport's arguments seem convincing only because they assume the foundations of strong AI at the outset
Kaernbach, C. (2005). No virtual mind in the chinese room. Journal of Consciousness Studies 12 (11):31-42.   (Google | More links)
Kentridge, Robert W. (2001). Computation, chaos and non-deterministic symbolic computation: The chinese room problem solved? Psycoloquy 12 (50).   (Cited by 6 | Google | More links)
King, D. (2001). Entering the chinese room with Castañeda's principle (P). Philosophy Today 45 (2):168-174.   (Google)
Kober, Michael (1998). Kripkenstein meets the chinese room: Looking for the place of meaning from a natural point of view. Inquiry 41 (3):317-332.   (Cited by 2 | Google | More links)
Abstract: The discussion between Searle and the Churchlands over whether or not symbol-manipulating computers generate semantics will be confronted both with the rule-sceptical considerations of Kripke/Wittgenstein and with Wittgenstein's private-language argument in order to show that the discussion focuses on the wrong place: meaning does not emerge in the brain. That a symbol means something should rather be conceived as a social fact, depending on a mutual imputation of linguistic competence of the participants of a linguistic practice to one another. The alternative picture will finally be applied to small children, animals, and computers as well
Korb, Kevin B. (1991). Searle's AI program. Journal of Experimental and Theoretical Artificial Intelligence 3:283-96.   (Cited by 6 | Annotation | Google | More links)
Kugel, Peter (2004). The chinese room is a trick. Behavioral and Brain Sciences 27 (1):153-154.   (Google)
Abstract: To convince us that computers cannot have mental states, Searle (1980) imagines a “Chinese room” that simulates a computer that “speaks” Chinese and asks us to find the understanding in the room. It's a trick. There is no understanding in the room, not because computers can't have it, but because the room's computer-simulation is defective. Fix it and understanding appears. Abracadabra!
Law, Diane (online). Searle, subsymbolic functionalism, and synthetic intelligence.   (Cited by 1 | Google | More links)
Leslie, Alan M. & Scholl, Brian J. (1999). Modularity, development and 'theory of mind'. Mind and Language 14 (1).   (Google | More links)
Abstract: Psychologists and philosophers have recently been exploring whether the mechanisms which underlie the acquisition of ‘theory of mind’ (ToM) are best characterized as cognitive modules or as developing theories. In this paper, we attempt to clarify what a modular account of ToM entails, and why it is an attractive type of explanation. Intuitions and arguments in this debate often turn on the role of development: traditional research on ToM focuses on various developmental sequences, whereas cognitive modules are thought to be static and ‘anti-developmental’. We suggest that this mistaken view relies on an overly limited notion of modularity, and we explore how ToM might be grounded in a cognitive module and yet still afford development. Modules must ‘come on-line’, and even fully developed modules may still develop internally, based on their constrained input. We make these points concrete by focusing on a recent proposal to capture the development of ToM in a module via parameterization
Maloney, J. Christopher (1987). The right stuff. Synthese 70 (March):349-72.   (Cited by 13 | Annotation | Google | More links)
McCarthy, John (online). John Searle's chinese room argument.   (Google)
Abstract: John Searle begins his (1990) "Consciousness, Explanatory Inversion and Cognitive Science" with: "Ten years ago in this journal I published an article (Searle, 1980a and 1980b) criticising what I call Strong AI, the view that for a system to have mental states it is sufficient for the system to implement the right sort of program with right inputs and outputs. Strong AI is rather easy to refute and the basic argument can be summarized in one sentence: _a system, me for example, could implement a program for understanding Chinese, for example, without understanding any Chinese at all._ This idea, when developed, became known as the Chinese Room Argument." The Chinese Room Argument can be refuted in one sentence
Melnyk, Andrew (1996). Searle's abstract argument against strong AI. Synthese 108 (3):391-419.   (Cited by 6 | Google | More links)
Abstract:   Discussion of Searle's case against strong AI has usually focused upon his Chinese Room thought-experiment. In this paper, however, I expound and then try to refute what I call his abstract argument against strong AI, an argument which turns upon quite general considerations concerning programs, syntax, and semantics, and which seems not to depend on intuitions about the Chinese Room. I claim that this argument fails, since it assumes one particular account of what a program is. I suggest an alternative account which, however, cannot play a role in a Searle-type argument, and argue that Searle gives no good reason for favoring his account, which allows the abstract argument to work, over the alternative, which doesn't. This response to Searle's abstract argument also, incidentally, enables the Robot Reply to the Chinese Room to defend itself against objections Searle makes to it
Mitchell, Ethan (2008). The real Chinese Room. Philica 125.   (Google)
Moor, James H. (1988). The pseudorealization fallacy and the chinese room argument. In James H. Fetzer (ed.), Aspects of AI. D. Reidel.   (Cited by 5 | Annotation | Google)
Moural, Josef (2003). The chinese room argument. In John Searle. Cambridge: Cambridge University Press.   (Cited by 2 | Google)
Narayanan, Ajit (1991). The chinese room argument. In Logical Foundations. New York: St Martin's Press.   (Google)
Newton, Natika (1989). Machine understanding and the chinese room. Philosophical Psychology 2 (2):207-15.   (Cited by 2 | Annotation | Google)
Abstract: John Searle has argued that one can imagine embodying a machine running any computer program without understanding the symbols, and hence that purely computational processes do not yield understanding. The disagreement this argument has generated stems, I hold, from ambiguity in talk of 'understanding'. The concept is analysed as a relation between subjects and symbols having two components: a formal and an intentional. The central question, then becomes whether a machine could possess the intentional component with or without the formal component. I argue that the intentional state of a symbol's being meaningful to a subject is a functionally definable relation between the symbol and certain past and present states of the subject, and that a machine could bear this relation to a symbol. I sketch a machine which could be said to possess, in primitive form, the intentional component of understanding. Even if the machine, in lacking consciousness, lacks full understanding, it contributes to a theory of understanding and constitutes a counterexample to the Chinese Room argument
Obermeier, K. K. (1983). Wittgenstein on language and artificial intelligence: The chinese-room thought-experiment revisited. Synthese 56 (September):339-50.   (Cited by 1 | Google | More links)
Penrose, Roger (2003). Consciousness, computation, and the chinese room. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 2 | Google)
Pfeifer, Karl (1992). Searle, strong AI, and two ways of sorting cucumbers. Journal of Philosophical Research 17:347-50.   (Cited by 1 | Google)
Preston, John M. & Bishop, Michael A. (eds.) (2002). Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 21 | Google)
Abstract: The most famous challenge to computational cognitive science and artificial intelligence is the philosopher John Searle's "Chinese Room" argument.
Proudfoot, Diane (2003). Wittgenstein's anticipation of the chinese room. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Google)
Rapaport, William J. (2006). How Helen Keller used syntactic semantics to escape from a chinese room. Minds and Machines 16 (4).   (Google | More links)
Abstract:   A computer can come to understand natural language the same way Helen Keller did: by using “syntactic semantics”—a theory of how syntax can suffice for semantics, i.e., how semantics for natural language can be provided by means of computational symbol manipulation. This essay considers real-life approximations of Chinese Rooms, focusing on Helen Keller’s experiences growing up deaf and blind, locked in a sort of Chinese Room yet learning how to communicate with the outside world. Using the SNePS computational knowledge-representation system, the essay analyzes Keller’s belief that learning that “everything has a name” was the key to her success, enabling her to “partition” her mental concepts into mental representations of: words, objects, and the naming relations between them. It next looks at Herbert Terrace’s theory of naming, which is akin to Keller’s, and which only humans are supposed to be capable of. The essay suggests that computers at least, and perhaps non-human primates, are also capable of this kind of naming
Rapaport, William J. (1986). Searle's experiments with thought. Philosophy of Science 53 (June):271-9.   (Cited by 14 | Annotation | Google | More links)
Rey, Georges (2003). Searle's misunderstandings of functionalism and strong AI. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Google)
Rey, Georges (1986). What's really going on in Searle's 'chinese room'. Philosophical Studies 50 (September):169-85.   (Cited by 17 | Annotation | Google | More links)
Roberts, Lawrence D. (1990). Searle's extension of the chinese room to connectionist machines. Journal of Experimental and Theoretical Artificial Intelligence 2:185-7.   (Cited by 4 | Annotation | Google)
Rodych, Victor (2003). Searle freed of every flaw. Acta Analytica 18 (30-31):161-175.   (Google | More links)
Abstract: Strong AI presupposes (1) that Super-Searle (henceforth ‘Searle’) comes to know that the symbols he manipulates are meaningful, and (2) that there cannot be two or more semantical interpretations for the system of symbols that Searle manipulates such that the set of rules constitutes a language comprehension program for each interpretation. In this paper, I show that Strong AI is false and that presupposition #1 is false, on the assumption that presupposition #2 is true. The main argument of the paper constructs a second program, isomorphic to Searle’s, to show that if someone, say Dan, runs this isomorphic program, he cannot possibly come to know what its mentioned symbols mean because they do not mean anything to anybody. Since Dan and Searle do exactly the same thing, except that the symbols they manipulate are different, neither Dan nor Searle can possibly know whether the symbols they manipulate are meaningful (let alone what they mean, if they are meaningful). The remainder of the paper responds to an anticipated Strong AI rejoinder, which, I believe, is a necessary extension of Strong AI
Russow, L-M. (1984). Unlocking the chinese room. Nature and System 6 (December):221-8.   (Cited by 4 | Annotation | Google)
Searle, John R. (1990). Is the brain's mind a computer program? Scientific American 262 (1):26-31.   (Cited by 178 | Annotation | Google | More links)
Searle, John R. (1987). Minds and brains without programs. In Colin Blakemore (ed.), Mindwaves. Blackwell.   (Cited by 27 | Annotation | Google)
Searle, John R. (1980). Minds, brains and programs. Behavioral and Brain Sciences 3:417-57.   (Cited by 1532 | Annotation | Google | More links)
Abstract: What psychological and philosophical significance should we attach to recent efforts at computer simulations of human cognitive capacities? In answering this question, I find it useful to distinguish what I will call "strong" AI from "weak" or "cautious" AI (artificial intelligence). According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to..
Searle, John R. (1984). Minds, Brains and Science. Harvard University Press.   (Cited by 515 | Annotation | Google)
Searle, John R. (1989). Reply to Jacquette. Philosophy and Phenomenological Research 49 (4):701-8.   (Cited by 4 | Annotation | Google | More links)
Searle, John R. (1989). Reply to Jacquette's adventures in the chinese room. Philosophy and Phenomenological Research 49 (June):701-707.   (Google)
Searle, John R. (2002). Twenty-one years in the chinese room. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 7 | Google)
Seidel, Asher (1989). Chinese rooms A, B and C. Pacific Philosophical Quarterly 20 (June):167-73.   (Cited by 1 | Annotation | Google)
Seidel, Asher (1988). Searle on the biological basis of cognition. Analysis 48 (January):26-28.   (Google)
Shaffer, Michael J. (2009). A logical hole in the chinese room. Minds and Machines 19 (2):229-235.   (Google)
Abstract: Searle’s Chinese Room Argument (CRA) has been the object of great interest in the philosophy of mind, artificial intelligence and cognitive science since its initial presentation in ‘Minds, Brains and Programs’ in 1980. It is by no means an overstatement to assert that it has been a main focus of attention for philosophers and computer scientists of many stripes. It is then especially interesting to note that relatively little has been said about the detailed logic of the argument, whatever significance Searle intended CRA to have. The problem with the CRA is that it involves a very strong modal claim, the truth of which is both unproved and highly questionable. So it will be argued here that the CRA does not prove what it was intended to prove
Sharvy, Richard (1985). Searle on programs and intentionality. Canadian Journal of Philosophy 11:39-54.   (Annotation | Google)
Simon, Herbert A. & Eisenstadt, Stuart A. (2003). A chinese room that understands. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 3 | Google)
Sloman, Aaron (1986). Did Searle attack strong strong AI or weak strong AI? In Artificial Intelligence and its Applications. Chichester.   (Cited by 3 | Google | More links)
Sprevak, Mark D. (online). Algorithms and the chinese room.   (Google)
Suits, David B. (1989). Out of the chinese room. Computing and Philosophy Newsletter 4:1-7.   (Cited by 2 | Annotation | Google)
Tanaka, Koji (2004). Minds, programs, and chinese philosophers: A chinese perspective on the chinese room. Sophia 43 (1):61-72.   (Google)
Abstract: The paper is concerned with John Searle’s famous Chinese room argument. Despite being objected to by some, Searle’s Chinese room argument appears very appealing. This is because Searle’s argument is based on an intuition about the mind that ‘we’ all seem to share. Ironically, however, Chinese philosophers don’t seem to share this same intuition. The paper begins by analysing Searle’s Chinese room argument. It then introduces what can be seen as the (implicit) Chinese view of the mind. Lastly, it demonstrates a conceptual difference between Chinese and Western philosophy with respect to the notion of mind. Thus, it is shown that one must carefully attend to the presuppositions underlying Chinese philosophising in interpreting Chinese philosophers
Taylor, John G. (2003). Do virtual actions avoid the chinese room? In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 4 | Google)
Teng, Norman Y. (2000). A cognitive analysis of the chinese room argument. Philosophical Psychology 13 (3):313-24.   (Cited by 1 | Google | More links)
Abstract: Searle's Chinese room argument is analyzed from a cognitive point of view. The analysis is based on a newly developed model of conceptual integration, the many-space model proposed by Fauconnier and Turner. The main point of the analysis is that the central inference constructed in the Chinese room scenario is a result of a dynamic, cognitive activity of conceptual blending, with metaphor defining the basic features of the blending. Two important consequences follow: (1) Searle's recent contention that syntax is not intrinsic to physics turns out to be a slightly modified version of the old Chinese room argument; and (2) the argument itself is still open to debate. It is persuasive but not conclusive, and at bottom it is a topological mismatch in the metaphoric conceptual integration that is responsible for the non-conclusive character of the Chinese room argument
Thagard, Paul R. (1986). The emergence of meaning: An escape from Searle's chinese room. Behaviorism 14 (3):139-46.   (Cited by 5 | Annotation | Google)
Wakefield, Jerome C. (2003). The chinese room argument reconsidered: Essentialism, indeterminacy, and strong AI. Minds and Machines 13 (2):285-319.   (Cited by 3 | Google | More links)
Abstract:   I argue that John Searle's (1980) influential Chinese room argument (CRA) against computationalism and strong AI survives existing objections, including Block's (1998) internalized systems reply, Fodor's (1991b) deviant causal chain reply, and Hauser's (1997) unconscious content reply. However, a new "essentialist" reply I construct shows that the CRA as presented by Searle is an unsound argument that relies on a question-begging appeal to intuition. My diagnosis of the CRA relies on an interpretation of computationalism as a scientific theory about the essential nature of intentional content; such theories often yield non-intuitive results in non-standard cases, and so cannot be judged by such intuitions. However, I further argue that the CRA can be transformed into a potentially valid argument against computationalism simply by reinterpreting it as an indeterminacy argument that shows that computationalism cannot explain the ordinary distinction between semantic content and sheer syntactic manipulation, and thus cannot be an adequate account of content. This conclusion admittedly rests on the arguable but plausible assumption that thought content is interestingly determinate. I conclude that the viability of computationalism and strong AI depends on their addressing the indeterminacy objection, but that it is currently unclear how this objection can be successfully addressed
Warwick, Kevin (2002). Alien encounters. In Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford: Clarendon Press.   (Google)
Weiss, Timothy (1990). Closing the chinese room. Ratio 3 (2):165-81.   (Cited by 6 | Annotation | Google | More links)
Wheeler, M. (2003). Changes in the rules: Computers, dynamic systems, and Searle. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Google)
Whitmer, J. M. (1983). Intentionality, artificial intelligence, and the causal powers of the brain. Auslegung 10:194-210.   (Annotation | Google)