MindPapers is now part of PhilPapers: online research in philosophy, a new service with many more features.
 
 Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.
 
   

6. Philosophy of Artificial Intelligence (Philosophy of Artificial Intelligence on PhilPapers)

  •  The Singularity [9]
  •  Mind Uploading [1]
  • 6.5 Computationalism [103]
  • 6.6 Philosophy of AI, Miscellaneous [83]
  • Broadbent, Donald E. (ed.) (1993). The Simulation of Human Intelligence. Blackwell.   (Google)
    Cloos, Christopher (2005). The Utilibot Project: An Autonomous Mobile Robot Based on Utilitarianism. In Michael Anderson, Susan Anderson & Chris Armen (eds.), AAAI Fall Symposium.   (Google)
    Abstract: As autonomous mobile robots (AMRs) begin living in the home, performing service tasks and assisting with daily activities, their actions will have profound ethical implications. Consequently, AMRs need to be outfitted with the ability to act morally with regard to human life and safety. Yet, in the area of robotics where morality is a relevant field of endeavor (i.e. human-robot interaction) the sub-discipline of morality does not exist. In response, the Utilibot project seeks to provide a point of initiation for the implementation of ethics in an AMR. The Utilibot is a decision-theoretic AMR guided by the utilitarian notion of the maximization of human well-being. The core ethical decision-making capacity of the Utilibot consists of two dynamic Bayesian networks that model human and environmental health, a dynamic decision network that accounts for decisions and utilities, and a Markov decision process (MDP) that decomposes the planning problem to solve for the optimal course of action to maximize human safety and well-being.
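    The decision-theoretic core described in the abstract above rests on Markov decision process (MDP) planning over a measure of human well-being. The following minimal Python sketch illustrates that general technique only: the states, actions, transition probabilities, and rewards are invented for the example and are not taken from the Utilibot paper, and value iteration then selects, in each state, the action that maximizes expected discounted well-being.

        # Toy MDP sketch of utilitarian action selection (illustrative only; all
        # states, actions, probabilities, and rewards below are invented).
        STATES = ["unsafe", "ok", "thriving"]   # coarse human well-being levels
        ACTIONS = ["wait", "assist"]

        # P[s][a] = list of (next_state, probability); R[s] = well-being reward.
        P = {
            "unsafe":   {"wait":   [("unsafe", 0.8), ("ok", 0.2)],
                         "assist": [("ok", 0.7), ("unsafe", 0.3)]},
            "ok":       {"wait":   [("ok", 0.6), ("unsafe", 0.2), ("thriving", 0.2)],
                         "assist": [("thriving", 0.5), ("ok", 0.5)]},
            "thriving": {"wait":   [("thriving", 0.9), ("ok", 0.1)],
                         "assist": [("thriving", 0.8), ("ok", 0.2)]},
        }
        R = {"unsafe": -10.0, "ok": 1.0, "thriving": 5.0}
        GAMMA = 0.9  # discount factor

        def value_iteration(tol=1e-6):
            """Return state values and a greedy policy maximizing expected well-being."""
            V = {s: 0.0 for s in STATES}
            while True:
                delta = 0.0
                for s in STATES:
                    q = [sum(p * (R[s2] + GAMMA * V[s2]) for s2, p in P[s][a])
                         for a in ACTIONS]
                    best = max(q)
                    delta = max(delta, abs(best - V[s]))
                    V[s] = best
                if delta < tol:
                    break
            policy = {s: max(ACTIONS, key=lambda a: sum(p * (R[s2] + GAMMA * V[s2])
                                                        for s2, p in P[s][a]))
                      for s in STATES}
            return V, policy

        values, policy = value_iteration()
        for s in STATES:
            print(f"{s:9s} value={values[s]:7.2f}  action={policy[s]}")

    In the fuller architecture the abstract describes, dynamic Bayesian networks would supply the state estimates that feed such a planner; the sketch assumes a fully observable state to stay short.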
    Goel, Vinod (1994). Book reviews. Philosophia Mathematica 2 (1).   (Google)
    Harnad, Stevan (1984). Verifying machines' minds. Contemporary Psychology 29:389 - 391.   (Google)
    Abstract: The question of the possibility of artificial consciousness is both very new and very old. It is new in the context of contemporary cognitive science and its concern with whether a machine can be conscious; it is old in the form of the mind/body problem and the "other minds" problem of philosophy. Contemporary enthusiasts proceed at their peril if they ignore or are ignorant of the false starts and blind alleys that the older thinkers have painfully worked through
    Morreau, Michael & Kraus, Sarit (1998). Syntactical Treatments of Propositional Attitudes. Artificial Intelligence 106 (1):161-177.   (Google)
    Abstract: Syntactical treatments of propositional attitudes are attractive to artificial intelligence researchers. But results of Montague (1974) and Thomason (1980) seem to show that syntactical treatments are not viable. They show that if representation languages are sufficiently expressive, then axiom schemes characterizing knowledge and belief give rise to paradox. Des Rivières and Levesque (1988) characterize a class of sentences within which these schemes can safely be instantiated. These sentences do not quantify over the propositional objects of knowledge and belief. We argue that their solution is incomplete, and extend it by characterizing a more inclusive class of sentences over which the axiom schemes can safely range. Our sentences do quantify over propositional objects.
    Páez, Andrés (2009). Artificial explanations: The epistemological interpretation of explanation in AI. Synthese 170 (1).   (Google)
    Abstract: In this paper I critically examine the notion of explanation used in artificial intelligence in general, and in the theory of belief revision in particular. I focus on two of the best known accounts in the literature: Pagnucco’s abductive expansion functions and Gärdenfors’ counterfactual analysis. I argue that both accounts are at odds with the way in which this notion has historically been understood in philosophy. They are also at odds with the explanatory strategies used in actual scientific practice. At the end of the paper I outline a set of desiderata for an epistemologically motivated, scientifically informed belief revision model for explanation

    6.1 Can Machines Think?

    6.1a The Turing Test

    Akman, Varol & Blackburn, Patrick (2000). Editorial: Alan Turing and artificial intelligence. Journal of Logic, Language and Information 9 (4):391-395.   (Cited by 2 | Google | More links)
    Alper, G. (1990). A psychoanalyst takes the Turing test. Psychoanalytic Review 77:59-68.   (Cited by 6 | Google)
    Barresi, John (1987). Prospects for the cyberiad: Certain limits on human self-knowledge in the cybernetic age. Journal for the Theory of Social Behavior 17 (March):19-46.   (Cited by 6 | Google | More links)
    Beenfeldt, Christian (2006). The Turing test: An examination of its nature and its mentalistic ontology. Danish Yearbook of Philosophy 40:109-144.   (Google)
    Ben-Yami, Hanoch (2005). Behaviorism and psychologism: Why Block's argument against behaviorism is unsound. Philosophical Psychology 18 (2):179-186.   (Cited by 1 | Google | More links)
    Abstract: Ned Block (1981, "Psychologism and behaviorism," Philosophical Review 90, 5-43) argued that a behaviorist conception of intelligence is mistaken, and that the nature of an agent's internal processes is relevant for determining whether the agent has intelligence. He did that by describing a machine which lacks intelligence, yet can answer questions put to it as an intelligent person would. The nature of his machine's internal processes, he concluded, is relevant for determining that it lacks intelligence. I argue against Block that it is not the nature of its processes but of its linguistic behavior which is responsible for his machine's lack of intelligence. As I show, not only has Block failed to establish that the nature of internal processes is conceptually relevant for psychology, in fact his machine example actually supports some version of behaviorism. As Wittgenstein has maintained, as far as psychology is concerned, there may be chaos inside
    Block, Ned (1981). Psychologism and behaviorism. Philosophical Review 90 (1):5-43.   (Cited by 88 | Annotation | Google | More links)
    Abstract: Let psychologism be the doctrine that whether behavior is intelligent behavior depends on the character of the internal information processing that produces it. More specifically, I mean psychologism to involve the doctrine that two systems could have actual and potential behavior _typical_ of familiar intelligent beings, that the two systems could be exactly alike in their actual and potential behavior, and in their behavioral dispositions and capacities and counterfactual behavioral properties (i.e., what behaviors, behavioral dispositions, and behavioral capacities they would have exhibited had their stimuli differed)--the two systems could be alike in all these ways, yet there could be a difference in the information processing that mediates their stimuli and responses that determines that one is not at all intelligent while the other is fully intelligent
    Bringsjord, Selmer (2000). Animals, zombanimals, and the total Turing test: The essence of artificial intelligence. Journal of Logic Language and Information 9 (4):397-418.   (Cited by 32 | Google | More links)
    Bringsjord, Selmer; Caporale, Clarke & Noel, Ron (2000). Animals, zombanimals, and the total Turing test. Journal of Logic, Language and Information 9 (4).   (Google)
    Abstract: Alan Turing devised his famous test (TT) through a slight modification of the parlor game in which a judge tries to ascertain the gender of two people who are only linguistically accessible. Stevan Harnad has introduced the Total TT, in which the judge can look at the contestants in an attempt to determine which is a robot and which a person. But what if we confront the judge with an animal, and a robot striving to pass for one, and then challenge him to peg which is which? Now we can index TTT to a particular animal and its synthetic correlate. We might therefore have TTT-rat, TTT-cat, TTT-dog, and so on. These tests, as we explain herein, are a better barometer of artificial intelligence (AI) than Turing's original TT, because AI seems to have ammunition sufficient only to reach the level of artificial animal, not artificial person
    Bringsjord, Selmer; Bello, P. & Ferrucci, David A. (2001). Creativity, the Turing test, and the (better) Lovelace test. Minds and Machines 11 (1):3-27.   (Cited by 11 | Google | More links)
    Abstract:   The Turing Test (TT) is claimed by many to be a way to test for the presence, in computers, of such "deep" phenomena as thought and consciousness. Unfortunately, attempts to build computational systems able to pass TT (or at least restricted versions of this test) have devolved into shallow symbol manipulation designed to, by hook or by crook, trick. The human creators of such systems know all too well that they have merely tried to fool those people who interact with their systems into believing that these systems really have minds. And the problem is fundamental: the structure of the TT is such as to cultivate tricksters. A better test is one that insists on a certain restrictive epistemic relation between an artificial agent (or system) A, its output o, and the human architect H of A – a relation which, roughly speaking, obtains when H cannot account for how A produced o. We call this test the "Lovelace Test" in honor of Lady Lovelace, who believed that only when computers originate things should they be believed to have minds
    Clark, Thomas W. (1992). The Turing test as a novel form of hermeneutics. International Studies in Philosophy 24 (1):17-31.   (Cited by 6 | Google)
    Clifton, Andrew (ms). Blind man's bluff and the Turing test.   (Google)
    Abstract: It seems plausible that under the conditions of the Turing test, congenitally blind people could nevertheless, with sufficient preparation, successfully represent themselves to remotely located interrogators as sighted. Having never experienced normal visual sensations, the successful blind player can prevail in this test only by playing a ‘lying game’—imitating the phenomenological claims of sighted people, in the absence of the qualitative visual experiences to which such statements purportedly refer. This suggests that a computer or robot might pass the Turing test in the same way, in the absence not only of visual experience, but qualitative consciousness in general. Hence, the standard Turing test does not provide a valid criterion for the presence of consciousness. A ‘sensorimetric’ version of the Turing test fares no better, for the apparent correlations we observe between cognitive functions and qualitative conscious experiences seem to be contingent, not necessary. We must therefore define consciousness not in terms of its causes and effects, but rather, in terms of the distinctive properties of its content, such as its possession of qualitative character and apparent intrinsic value—the property which confers upon consciousness its moral significance. As a means of determining whether or not a machine is conscious, in this sense, an alternative to the standard Turing test is proposed
    Copeland, B. Jack (2000). The Turing test. Minds and Machines 10 (4):519-539.   (Cited by 7 | Google | More links)
    Abstract:   Turing's test has been much misunderstood. Recently unpublished material by Turing casts fresh light on his thinking and dispels a number of philosophical myths concerning the Turing test. Properly understood, the Turing test withstands objections that are popularly believed to be fatal
    Cowen, Tyler & Dawson, Michelle, What does the Turing test really mean? And how many human beings (including Turing) could pass?   (Google)
    Abstract: The so-called Turing test, as it is usually interpreted, sets a benchmark standard for determining when we might call a machine intelligent. We can call a machine intelligent if the following is satisfied: if a group of wise observers were conversing with a machine through an exchange of typed messages, those observers could not tell whether they were talking to a human being or to a machine. To pass the test, the machine has to be intelligent but it also should be responsive in a manner which cannot be distinguished from a human being. This standard interpretation presents the Turing test as a criterion for demarcating intelligent from non-intelligent entities. For a long time proponents of artificial intelligence have taken the Turing test as a goalpost for measuring progress
    Crawford, C. (1994). Notes on the Turing test. Communications of the Association for Computing Machinery 37 (June):13-15.   (Google)
    Crockett, L. (1994). The Turing Test and the Frame Problem: AI's Mistaken Understanding of Intelligence. Ablex.   (Cited by 19 | Google)
    Abstract: I have discussed the frame problem and the Turing test at length, but I have not attempted to spell out what I think the implications of the frame problem ...
    Cutrona, Jr (ms). Zombies in Searle's chinese room: Putting the Turing test to bed.   (Google | More links)
    Abstract: Searle’s discussions over the years 1980-2004 of the implications of his “Chinese Room” Gedanken experiment are frustrating because they proceed from a correct assertion: (1) “Instantiating a computer program is never by itself a sufficient condition of intentionality;” and an incorrect assertion: (2) “The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program.” In this article, I describe how to construct a Gedanken zombie Chinese Room program that will pass the Turing test and at the same time unambiguously demonstrates the correctness of (1). I then describe how to construct a Gedanken Chinese brain program that will pass the Turing test, has a mind, and understands Chinese, thus demonstrating that (2) is incorrect. Searle’s instantiation of this program can and does produce intentionality. Searle’s longstanding ignorance of Chinese is simply irrelevant and always has been. I propose a truce and a plan for further exploration
    Davidson, Donald (1990). Turing's test. In K. Said (ed.), Modelling the Mind. Oxford University Press.   (Google)
    Dennett, Daniel C. (1984). Can machines think? In M. G. Shafto (ed.), How We Know. Harper & Row.   (Cited by 24 | Annotation | Google)
    Drozdek, Adam (2001). Descartes' Turing test. Epistemologia 24 (1):5-29.   (Google)
    Edmonds, Bruce (2000). The constructability of artificial intelligence (as defined by the Turing test). Journal of Logic Language and Information 9 (4):419-424.   (Google | More links)
    Abstract: The Turing Test (TT), as originally specified, centres on the ability to perform a social role. The TT can be seen as a test of an ability to enter into normal human social dynamics. In this light it seems unlikely that such an entity can be wholly designed in an off-line mode; rather a considerable period of training in situ would be required. The argument that since we can pass the TT, and our cognitive processes might be implemented as a Turing Machine (TM), that consequently a TM that could pass the TT could be built, is attacked on the grounds that not all TMs are constructible in a planned way. This observation points towards the importance of developmental processes that use random elements (e.g., evolution), but in these cases it becomes problematic to call the result artificial. This has implications for the means by which intelligent agents could be developed
    Edmonds, B. (ms). The constructability of artificial intelligence (as defined by the Turing test).   (Google | More links)
    Abstract: The Turing Test, as originally specified, centres on the ability to perform a social role. The TT can be seen as a test of an ability to enter into normal human social dynamics. In this light it seems unlikely that such an entity can be wholly designed in an 'off-line' mode, but rather a considerable period of training in situ would be required. The argument that, since we can pass the TT and our cognitive processes might be implemented as a TM, a TM that could pass the TT could in theory be built is attacked on the grounds that not all TMs are constructible in a planned way. This observation points towards the importance of developmental processes that include random elements (e.g. evolution), but in these cases it becomes problematic to call the result artificial
    Erion, Gerald J. (2001). The cartesian test for automatism. Minds and Machines 11 (1):29-39.   (Cited by 5 | Google | More links)
    Abstract:   In Part V of his Discourse on the Method, Descartes introduces a test for distinguishing people from machines that is similar to the one proposed much later by Alan Turing. The Cartesian test combines two distinct elements that Keith Gunderson has labeled the language test and the action test. Though traditional interpretation holds that the action test attempts to determine whether an agent is acting upon principles, I argue that the action test is best understood as a test of common sense. I also maintain that this interpretation yields a stronger test than Turing's, and that contemporary artificial intelligence should consider using it as a guide for future research
    Floridi, Luciano (2005). Consciousness, agents and the knowledge game. Minds and Machines 15 (3):415-444.   (Cited by 2 | Google | More links)
    Abstract: This paper has three goals. The first is to introduce the “knowledge game”, a new, simple and yet powerful tool for analysing some intriguing philosophical questions. The second is to apply the knowledge game as an informative test to discriminate between conscious (human) and conscious-less agents (zombies and robots), depending on which version of the game they can win. And the third is to use a version of the knowledge game to provide an answer to Dretske’s question “how do you know you are not a zombie?”
    Floridi, Luciano & Taddeo, Mariarosaria (2009). Turing's imitation game: Still an impossible challenge for all machines and some judges––an evaluation of the 2008 loebner contest. Minds and Machines 19 (1).   (Google)
    Abstract: An evaluation of the 2008 Loebner contest
    Floridi, Luciano; Taddeo, Mariarosaria & Turilli, Matteo (2008). Turing’s Imitation Game: Still an Impossible Challenge for All Machines and Some Judges. Minds and Machines 19 (1):145-150.   (Google)
    Abstract: An Evaluation of the 2008 Loebner Contest.
    French, Robert M. (2000). Peeking behind the screen: The unsuspected power of the standard Turing test. Journal of Experimental and Theoretical Artificial Intelligence 12 (3):331-340.   (Cited by 10 | Google | More links)
    Abstract: No computer that had not experienced the world as we humans had could pass a rigorously administered standard Turing Test. We show that the use of “subcognitive” questions allows the standard Turing Test to indirectly probe the human subcognitive associative concept network built up over a lifetime of experience with the world. Not only can this probing reveal differences in cognitive abilities, but crucially, even differences in _physical aspects_ of the candidates can be detected. Consequently, it is unnecessary to propose even harder versions of the Test in which all physical and behavioral aspects of the two candidates had to be indistinguishable before allowing the machine to pass the Test. Any machine that passed the “simpler” symbols-in/symbols-out test as originally proposed by Turing would be intelligent. The problem is that, even in its original form, the Turing Test is already too hard and too anthropocentric for any machine that was not a physical, social, and behavioral carbon copy of ourselves to actually pass it. Consequently, the Turing Test, even in its standard version, is not a reasonable test for general machine intelligence. There is no need for an even stronger version of the Test
    French, Robert M. (1995). Refocusing the debate on the Turing test: A response. Behavior and Philosophy 23 (1):59-60.   (Cited by 3 | Annotation | Google)
    French, Robert M. (1990). Subcognition and the limits of the Turing test. Mind 99 (393):53-66.   (Cited by 66 | Annotation | Google | More links)
    French, Robert (1996). The inverted Turing test: How a mindless program could pass it. Psycoloquy 7 (39).   (Cited by 5 | Google | More links)
    Abstract: This commentary attempts to show that the inverted Turing Test (Watt 1996) could be simulated by a standard Turing test and, most importantly, claims that a very simple program with no intelligence whatsoever could be written that would pass the inverted Turing test. For this reason, the inverted Turing test in its present form must be rejected
    French, Robert (2000). The Turing test: The first fifty years. Trends in Cognitive Sciences 4 (3):115-121.   (Cited by 15 | Google | More links)
    Abstract: The Turing Test, originally proposed as a simple operational definition of intelligence, has now been with us for exactly half a century. It is safe to say that no other single article in computer science, and few other articles in science in general, have generated so much discussion. The present article chronicles the comments and controversy surrounding Turing's classic article from its publication to the present. The changing perception of the Turing Test over the last fifty years has paralleled the changing attitudes in the scientific community towards artificial intelligence: from the unbridled optimism of the 1960s to the current realization of the immense difficulties that still lie ahead. I conclude with the prediction that the Turing Test will remain important, not only as a landmark in the history of the development of intelligent machines, but also with real relevance to future generations of people living in a world in which the cognitive capacities of machines will be vastly greater than they are now
    Gunderson, Keith (1964). The imitation game. Mind 73 (April):234-45.   (Cited by 13 | Annotation | Google | More links)
    Harnad, Stevan & Dror, Itiel (2006). Distributed cognition: Cognizing, autonomy and the Turing test. Pragmatics and Cognition 14 (2):14.   (Cited by 2 | Google | More links)
    Abstract: Some of the papers in this special issue distribute cognition between what is going on inside individual cognizers' heads and their outside worlds; others distribute cognition among different individual cognizers. Turing's criterion for cognition was individual, autonomous input/output capacity. It is not clear that distributed cognition could pass the Turing Test
    Harnad, Stevan (1995). Does mind piggyback on robotic and symbolic capacity? In H. Morowitz & J. Singer (eds.), The Mind, the Brain, and Complex Adaptive Systems. Addison Wesley.   (Google)
    Abstract: Cognitive science is a form of "reverse engineering" (as Dennett has dubbed it). We are trying to explain the mind by building (or explaining the functional principles of) systems that have minds. A "Turing" hierarchy of empirical constraints can be applied to this task, from t1, toy models that capture only an arbitrary fragment of our performance capacity, to T2, the standard "pen-pal" Turing Test (total symbolic capacity), to T3, the Total Turing Test (total symbolic plus robotic capacity), to T4 (T3 plus internal [neuromolecular] indistinguishability). All scientific theories are underdetermined by data. What is the right level of empirical constraint for cognitive theory? I will argue that T2 is underconstrained (because of the Symbol Grounding Problem and Searle's Chinese Room Argument) and that T4 is overconstrained (because we don't know what neural data, if any, are relevant). T3 is the level at which we solve the "other minds" problem in everyday life, the one at which evolution operates (the Blind Watchmaker is no mind-reader either) and the one at which symbol systems can be grounded in the robotic capacity to name and manipulate the objects their symbols are about. I will illustrate this with a toy model for an important component of T3 -- categorization -- using neural nets that learn category invariance by "warping" similarity space the way it is warped in human categorical perception: within-category similarities are amplified and between-category similarities are attenuated. This analog "shape" constraint is the grounding inherited by the arbitrarily shaped symbol that names the category and by all the symbol combinations it enters into. No matter how tightly one constrains any such model, however, it will always be more underdetermined than normal scientific and engineering theory. This will remain the ineliminable legacy of the mind/body problem
    Harnad, Stevan (1994). Levels of functional equivalence in reverse bioengineering: The Darwinian Turing test for artificial life. Artificial Life 1 (3):293-301.   (Cited by 35 | Google | More links)
    Abstract: Both Artificial Life and Artificial Mind are branches of what Dennett has called "reverse engineering": Ordinary engineering attempts to build systems to meet certain functional specifications, reverse bioengineering attempts to understand how systems that have already been built by the Blind Watchmaker work. Computational modelling (virtual life) can capture the formal principles of life, perhaps predict and explain it completely, but it can no more be alive than a virtual forest fire can be hot. In itself, a computational model is just an ungrounded symbol system; no matter how closely it matches the properties of what is being modelled, it matches them only formally, with the mediation of an interpretation. Synthetic life is not open to this objection, but it is still an open question how close a functional equivalence is needed in order to capture life. Close enough to fool the Blind Watchmaker is probably close enough, but would that require molecular indistinguishability, and if so, do we really need to go that far?
    Harnad, Stevan (1991). Other bodies, other minds: A machine incarnation of an old philosophical problem. Minds and Machines 1 (1):43-54.   (Cited by 99 | Annotation | Google | More links)
    Abstract: Explaining the mind by building machines with minds runs into the other-minds problem: How can we tell whether any body other than our own has a mind when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do such that we can't tell them apart, we have no basis for doubting it has a mind. But what is "everything" a body with a mind can do? Turing's original "pen-pal" version (the TT) only tested linguistic capacity, but Searle has shown that a mindless symbol-manipulator could pass the TT undetected. The Total Turing Test (TTT) calls for all of our linguistic and robotic capacities; immune to Searle's argument, it suggests how to ground a symbol manipulating system in the capacity to pick out the objects its symbols refer to. No Turing Test, however, can guarantee that a body has a mind. Worse, nothing in the explanation of its successful performance requires a model to have a mind at all. Minds are hence very different from the unobservables of physics (e.g., superstrings); and Turing Testing, though essential for machine-modeling the mind, can really only yield an explanation of the body
    Harnad, Stevan (2006). The annotation game: On Turing (1950) on computing, machinery, and intelligence. In Robert Epstein & Grace Peters (eds.), [Book Chapter] (in Press). Kluwer.   (Cited by 5 | Google | More links)
    Abstract: This quote/commented critique of Turing's classical paper suggests that Turing meant -- or should have meant -- the robotic version of the Turing Test (and not just the email version). Moreover, any dynamic system (that we design and understand) can be a candidate, not just a computational one. Turing also dismisses the other-minds problem and the mind/body problem too quickly. They are at the heart of both the problem he is addressing and the solution he is proposing
    Harnad, Stevan (1999). Turing on reverse-engineering the mind. Journal of Logic, Language, and Information.   (Cited by 4 | Google)
    Harnad, Stevan (1992). The Turing test is not a trick: Turing indistinguishability is a scientific criterion. SIGART Bulletin 3 (4):9-10.   (Cited by 44 | Google | More links)
    Abstract: It is important to understand that the Turing Test (TT) is not, nor was it intended to be, a trick; how well one can fool someone is not a measure of scientific progress. The TT is an empirical criterion: It sets AI's empirical goal to be to generate human-scale performance capacity. This goal will be met when the candidate's performance is totally indistinguishable from a human's. Until then, the TT simply represents what it is that AI must endeavor eventually to accomplish scientifically
    Hauser, Larry (2001). Look who's moving the goal posts now. Minds and Machines 11 (1):41-51.   (Cited by 2 | Google | More links)
    Abstract:   The abject failure of Turing's first prediction (of computer success in playing the Imitation Game) confirms the aptness of the Imitation Game test as a test of human level intelligence. It especially belies fears that the test is too easy. At the same time, this failure disconfirms expectations that human level artificial intelligence will be forthcoming any time soon. On the other hand, the success of Turing's second prediction (that acknowledgment of computer thought processes would become commonplace) in practice amply confirms the thought that computers think in some manner and are possessed of some level of intelligence already. This lends ever-growing support to the hypothesis that computers will think at a human level eventually, despite the abject failure of Turing's first prediction
    Hauser, Larry (1993). Reaping the whirlwind: Reply to Harnad's Other Bodies, Other Minds. Minds and Machines 3 (2):219-37.   (Cited by 18 | Google | More links)
    Abstract:   Harnad's proposed robotic upgrade of Turing's Test (TT), from a test of linguistic capacity alone to a Total Turing Test (TTT) of linguistic and sensorimotor capacity, conflicts with his claim that no behavioral test provides even probable warrant for attributions of thought because there is no evidence of consciousness besides private experience. Intuitive, scientific, and philosophical considerations Harnad offers in favor of his proposed upgrade are unconvincing. I agree with Harnad that distinguishing real from as-if thought on the basis of (presence or lack of) consciousness (thus rejecting Turing (behavioral) testing as sufficient warrant for mental attribution) has the skeptical consequence Harnad accepts — there is in fact no evidence for me that anyone else but me has a mind. I disagree with his acceptance of it! It would be better to give up the neo-Cartesian faith in private conscious experience underlying Harnad's allegiance to Searle's controversial Chinese Room Experiment than give up all claim to know others think. It would be better to allow that (passing) Turing's Test evidences — even strongly evidences — thought
    Hayes, Patrick & Ford, Kenneth M. (1995). Turing test considered harmful. Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence 1:972-77.   (Cited by 26 | Google)
    Hernandez-Orallo, Jose (2000). Beyond the Turing test. Journal of Logic, Language and Information 9 (4):447-466.   (Cited by 2 | Google | More links)
    Abstract: The main factor of intelligence is defined as the ability to comprehend, formalising this ability with the help of new constructs based on descriptional complexity. The result is a comprehension test, or C-test, which is exclusively defined in computational terms. Due to its absolute and non-anthropomorphic character, it is equally applicable to both humans and non-humans. Moreover, it correlates with classical psychometric tests, thus establishing the first firm connection between information theoretical notions and traditional IQ tests. The Turing Test is compared with the C-test and the combination of the two is questioned. In consequence, the idea of using the Turing Test as a practical test of intelligence should be surpassed, and substituted by computational and factorial tests of different cognitive abilities, a much more useful approach for artificial intelligence progress and for many other intriguing questions that present themselves beyond the Turing Test
    Hofstadter, Douglas R. (1981). A coffee-house conversation on the Turing test. Scientific American.   (Annotation | Google)
    Jacquette, Dale (1993). A Turing test conversation. Philosophy 68 (264):231-33.   (Cited by 4 | Google)
    Jacquette, Dale (1993). Who's afraid of the Turing test? Behavior and Philosophy 20 (21):63-74.   (Annotation | Google)
    Karelis, Charles (1986). Reflections on the Turing test. Journal for the Theory of Social Behavior 16 (July):161-72.   (Cited by 10 | Google | More links)
    Lee, E. T. (1996). On the Turing test for artificial intelligence. Kybernetes 25.   (Cited by 1 | Google)
    Leiber, Justin (1995). On Turing's Turing test and why the matter matters. Synthese 104 (1):59-69.   (Cited by 6 | Annotation | Google)
    Leiber, Justin (1989). Shanon on the Turing test. Journal for the Theory of Social Behavior 19 (June):257-259.   (Cited by 6 | Google | More links)
    Leiber, Justin (2001). Turing and the fragility and insubstantiality of evolutionary explanations: A puzzle about the unity of Alan Turing's work with some larger implications. Philosophical Psychology 14 (1):83-94.   (Google | More links)
    Abstract: As is well known, Alan Turing drew a line, embodied in the "Turing test," between intellectual and physical abilities, and hence between cognitive and natural sciences. Less familiarly, he proposed that one way to produce a "passer" would be to educate a "child machine," equating the experimenter's improvements in the initial structure of the child machine with genetic mutations, while supposing that the experimenter might achieve improvements more expeditiously than natural selection. On the other hand, in his foundational "On the chemical basis of morphogenesis," Turing insisted that biological explanation clearly confine itself to purely physical and chemical means, eschewing vitalist and teleological talk entirely and hewing to D'Arcy Thompson's line that "evolutionary 'explanations,'" are historical and narrative in character, employing the same intentional and teleological vocabulary we use in doing human history, and hence, while perhaps on occasion of heuristic value, are not part of biology as a natural science. To apply Turing's program to recent issues, the attempt to give foundations to the social and cognitive sciences in the "real science" of evolutionary biology (as opposed to Turing's biology) is neither to give foundations, nor to achieve the unification of the social/cognitive sciences and the natural sciences
    Leiber, Justin (2006). Turing's golden: How well Turing's work stands today. Philosophical Psychology 19 (1):13-46.   (Google | More links)
    Abstract: A. M. Turing has bequeathed us a conceptulary including 'Turing, or Turing-Church, thesis', 'Turing machine', 'universal Turing machine', 'Turing test' and 'Turing structures', plus other unnamed achievements. These include a proof that any formal language adequate to express arithmetic contains undecidable formulas, as well as achievements in computer science, artificial intelligence, mathematics, biology, and cognitive science. Here it is argued that these achievements hang together and have prospered well in the 50 years since Turing's death
    Lockhart, Robert S. (2000). Modularity, cognitive penetrability and the Turing test. Psycoloquy.   (Cited by 1 | Google | More links)
    Abstract: The Turing Test blurs the distinction between a model and (irrelevant) instantiation details. Modeling only functional modules is problematic if these are interconnected and cognitively penetrable
    Mays, W. (1952). Can machines think? Philosophy 27 (April):148-62.   (Cited by 7 | Google)
    Michie, Donald (1993). Turing's test and conscious thought. Artificial Intelligence 60:1-22.   (Cited by 19 | Google)
    Midgley, Mary (1995). Zombies and the Turing test. Journal of Consciousness Studies 2 (4):351-352.   (Google)
    Millar, P. (1973). On the point of the imitation game. Mind 82 (October):595-97.   (Cited by 9 | Google | More links)
    Mitchell, Robert W. & Anderson, James R. (1998). Primate theory of mind is a Turing test. Behavioral and Brain Sciences 21 (1):127-128.   (Google)
    Abstract: Heyes's literature review of deception, imitation, and self-recognition is inadequate, misleading, and erroneous. The anaesthetic artifact hypothesis of self-recognition is unsupported by the data she herself examines. Her proposed experiment is tantalizing, indicating that theory of mind is simply a Turing test
    Moor, James H. (1976). An analysis of Turing's test. Philosophical Studies 30:249-257.   (Annotation | Google)
    Moor, James H. (1976). An analysis of the Turing test. Philosophical Studies 30 (4).   (Google)
    Moor, James H. (1978). Explaining computer behavior. Philosophical Studies 34 (October):325-7.   (Cited by 9 | Annotation | Google | More links)
    Moor, James H. (2001). The status and future of the Turing test. Minds and Machines 11 (1):77-93.   (Cited by 9 | Google | More links)
    Abstract:   The standard interpretation of the imitation game is defended over the rival gender interpretation though it is noted that Turing himself proposed several variations of his imitation game. The Turing test is then justified as an inductive test not as an operational definition as commonly suggested. Turing's famous prediction about his test being passed at the 70% level is disconfirmed by the results of the Loebner 2000 contest and the absence of any serious Turing test competitors from AI on the horizon. But, reports of the death of the Turing test and AI are premature. AI continues to flourish and the test continues to play an important philosophical role in AI. Intelligence attribution, methodological, and visionary arguments are given in defense of a continuing role for the Turing test. With regard to Turing's predictions one is disconfirmed, one is confirmed, but another is still outstanding
    Nichols, Shaun & Stich, Stephen P. (1994). Folk psychology. Encyclopedia of Cognitive Science.   (Cited by 2 | Google | More links)
    Abstract: For the last 25 years discussions and debates about commonsense psychology (or “folk psychology,” as it is often called) have been center stage in the philosophy of mind. There have been heated disagreements both about what folk psychology is and about how it is related to the scientific understanding of the mind/brain that is emerging in psychology and the neurosciences. In this chapter we will begin by explaining why folk psychology plays such an important role in the philosophy of mind. Doing that will require a quick look at a bit of the history of philosophical discussions about the mind. We’ll then turn our attention to the lively contemporary discussions aimed at clarifying the philosophical role that folk psychology is expected to play and at using findings in the cognitive sciences to get a clearer understanding of the exact nature of folk psychology
    Oppy, Graham & Dowe, D. (online). The Turing test. Stanford Encyclopedia of Philosophy.   (Cited by 3 | Google)
    Piccinini, Gualtiero (2000). Turing's rules for the imitation game. Minds and Machines 10 (4):573-582.   (Cited by 10 | Google | More links)
    Abstract:   In the 1950s, Alan Turing proposed his influential test for machine intelligence, which involved a teletyped dialogue between a human player, a machine, and an interrogator. Two readings of Turing's rules for the test have been given. According to the standard reading of Turing's words, the goal of the interrogator was to discover which was the human being and which was the machine, while the goal of the machine was to be indistinguishable from a human being. According to the literal reading, the goal of the machine was to simulate a man imitating a woman, while the interrogator – unaware of the real purpose of the test – was attempting to determine which of the two contestants was the woman and which was the man. The present work offers a study of Turing's rules for the test in the context of his advocated purpose and his other texts. The conclusion is that there are several independent and mutually reinforcing lines of evidence that support the standard reading, while fitting the literal reading in Turing's work faces severe interpretative difficulties. So, the controversy over Turing's rules should be settled in favor of the standard reading
    Purtill, R. (1971). Beating the imitation game. Mind 80 (April):290-94.   (Google | More links)
    Rankin, Terry L. (1987). The Turing paradigm: A critical assessment. Dialogue 29 (April):50-55.   (Cited by 3 | Annotation | Google)
    Rapaport, William J. (2000). How to pass a Turing test: Syntactic semantics, natural-language understanding, and first-person cognition. Journal of Logic, Language, and Information 9 (4):467-490.   (Cited by 15 | Google | More links)
    Rapaport, William J. (2000). How to pass a Turing test. Journal of Logic, Language and Information 9 (4).   (Google)
    Abstract: I advocate a theory of syntactic semantics as a way of understanding how computers can think (and how the Chinese-Room-Argument objection to the Turing Test can be overcome): (1) Semantics, considered as the study of relations between symbols and meanings, can be turned into syntax – a study of relations among symbols (including meanings) – and hence syntax (i.e., symbol manipulation) can suffice for the semantical enterprise (contra Searle). (2) Semantics, considered as the process of understanding one domain (by modeling it) in terms of another, can be viewed recursively: The base case of semantic understanding –understanding a domain in terms of itself – is syntactic understanding. (3) An internal (or narrow), first-person point of view makes an external (or wide), third-person point of view otiose for purposes of understanding cognition
    Rapaport, William J. (online). Review of The Turing Test: Verbal Behavior As the Hallmark of Intelligence.   (Google | More links)
    Abstract: Stuart M. Shieber’s name is well known to computational linguists for his research and to computer scientists more generally for his debate on the Loebner Turing Test competition, which appeared a decade earlier in Communications of the ACM (Shieber 1994a, 1994b; Loebner 1994).1 With this collection, I expect it to become equally well known to philosophers
    Ravenscroft, Ian (online). Folk psychology as a theory. Stanford Encyclopedia of Philosophy.   (Cited by 9 | Google | More links)
    Abstract: Many philosophers and cognitive scientists claim that our everyday or "folk" understanding of mental states constitutes a theory of mind. That theory is widely called "folk psychology" (sometimes "commonsense" psychology). The terms in which folk psychology is couched are the familiar ones of "belief" and "desire", "hunger", "pain" and so forth. According to many theorists, folk psychology plays a central role in our capacity to predict and explain the behavior of ourselves and others. However, the nature and status of folk psychology remains controversial
    Rhodes, Kris (ms). Vindication of the Rights of Machine.   (Google | More links)
    Abstract: In this paper, I argue that certain Machines can have rights independently of whether they are sentient, or conscious, or whatever you might call it.
    Richardson, Robert C. (1982). Turing tests for intelligence: Ned Block's defense of psychologism. Philosophical Studies 41 (May):421-6.   (Cited by 4 | Annotation | Google | More links)
    Rosenberg, Jay F. (1982). Conversation and intelligence. In B. de Gelder (ed.), Knowledge and Representation. Routledge & Kegan Paul.   (Google)
    Sampson, Geoffrey (1973). In defence of Turing. Mind 82 (October):592-94.   (Cited by 5 | Google | More links)
    Sato, Y. & Ikegami, T. (2004). Undecidability in the imitation game. Minds and Machines 14 (2):133-43.   (Cited by 6 | Google | More links)
    Abstract:   This paper considers undecidability in the imitation game, the so-called Turing Test. In the Turing Test, a human, a machine, and an interrogator are the players of the game. In our model of the Turing Test, the machine and the interrogator are formalized as Turing machines, allowing us to derive several impossibility results concerning the capabilities of the interrogator. The key issue is that the validity of the Turing test is not attributed to the capability of human or machine, but rather to the capability of the interrogator. In particular, it is shown that no Turing machine can be a perfect interrogator. We also discuss a meta-imitation game and an imitation game with analog interfaces where both the imitator and the interrogator are mimicked by continuous dynamical systems
    Saygin, Ayse P.; Cicekli, Ilyas & Akman, Varol (2000). Turing test: 50 years later. Minds and Machines 10 (4):463-518.   (Cited by 45 | Google | More links)
    Abstract:   The Turing Test is one of the most disputed topics in artificial intelligence, philosophy of mind, and cognitive science. This paper is a review of the past 50 years of the Turing Test. Philosophical debates, practical developments and repercussions in related disciplines are all covered. We discuss Turing's ideas in detail and present the important comments that have been made on them. Within this context, behaviorism, consciousness, the 'other minds' problem, and similar topics in philosophy of mind are discussed. We also cover the sociological and psychological aspects of the Turing Test. Finally, we look at the current situation and analyze programs that have been developed with the aim of passing the Turing Test. We conclude that the Turing Test has been, and will continue to be, an influential and controversial topic
    Schweizer, Paul (1998). The truly total Turing test. Minds and Machines 8 (2):263-272.   (Cited by 9 | Google | More links)
    Abstract:   The paper examines the nature of the behavioral evidence underlying attributions of intelligence in the case of human beings, and how this might be extended to other kinds of cognitive system, in the spirit of the original Turing Test (TT). I consider Harnad's Total Turing Test (TTT), which involves successful performance of both linguistic and robotic behavior, and which is often thought to incorporate the very same range of empirical data that is available in the human case. However, I argue that the TTT is still too weak, because it only tests the capabilities of particular tokens within a preexisting context of intelligent behavior. What is needed is a test of the cognitive type, as manifested through a number of exemplary tokens, in order to confirm that the cognitive type is able to produce the context of intelligent behavior presupposed by tests such as the TT and TTT
    Sennett, James F. (ms). The ice man cometh: Lt. Commander Data and the Turing test.   (Google)
    Shanon, Benny (1989). A simple comment regarding the Turing test. Journal for the Theory of Social Behavior 19 (June):249-56.   (Cited by 8 | Annotation | Google | More links)
    Shah, Huma & Warwick, Kevin (forthcoming). From the Buzzing in Turing’s Head to Machine Intelligence Contests. TCIT 2010.   (Google)
    Abstract: This paper presents an analysis of three major contests for machine intelligence. We conclude that a new era for Turing’s test requires a fillip in the guise of a committed sponsor, not unlike DARPA, funders of the successful 2007 Urban Challenge.
    Shieber, Stuart M. (1994). Lessons from a restricted Turing test. Communications of the Association for Computing Machinery 37:70-82.   (Cited by 55 | Google | More links)
    Shieber, Stuart M. (ed.) (2004). The Turing Test: Verbal Behavior As the Hallmark of Intelligence. MIT Press.   (Cited by 12 | Google | More links)
    Shieber, Stuart M. (2007). The Turing test as interactive proof. Noûs 41 (4):686–713.   (Google | More links)
    Stalker, Douglas F. (1978). Why machines can't think: A reply to James Moor. Philosophical Studies 34 (3):317-20.   (Cited by 12 | Annotation | Google | More links)
    Sterrett, Susan G. (2002). Nested algorithms and the original imitation game test: A reply to James Moor. Minds and Machines 12 (1):131-136.   (Cited by 2 | Google | More links)
    Stevenson, John G. (1976). On the imitation game. Philosophia 6 (March):131-33.   (Cited by 4 | Google | More links)
    Sterrett, Susan G. (2000). Turing's two tests for intelligence. Minds and Machines 10 (4):541-559.   (Cited by 10 | Google | More links)
    Abstract:   On a literal reading of 'Computing Machinery and Intelligence', Alan Turing presented not one, but two, practical tests to replace the question 'Can machines think?' He presented them as equivalent. I show here that the first test described in that much-discussed paper is in fact not equivalent to the second one, which has since become known as 'the Turing Test'. The two tests can yield different results; it is the first, neglected test that provides the more appropriate indication of intelligence. This is because the features of intelligence upon which it relies are resourcefulness and a critical attitude to one's habitual responses; thus the test's applicability is not restricted to any particular species, nor does it presume any particular capacities. This is more appropriate because the question under consideration is what would count as machine intelligence. The first test realizes a possibility that philosophers have overlooked: a test that uses a human's linguistic performance in setting an empirical test of intelligence, but does not make behavioral similarity to that performance the criterion of intelligence. Consequently, the first test is immune to many of the philosophical criticisms on the basis of which the (so-called) 'Turing Test' has been dismissed
    Stoica, Cristi, Turing test, easy to pass; human mind, hard to understand.   (Google)
    Abstract: Under general assumptions, the Turing test can be easily passed by an appropriate algorithm. I show that for any test satisfying several general conditions, we can construct an algorithm that can pass that test; hence, any operational definition is easy to fulfill. I suggest a test complementary to Turing's test, which will measure our understanding of the human mind. The Turing test is required to fix the operational specifications of the algorithm under test; under this constraint, the additional test simply consists in measuring the length of the algorithm
    Traiger, Saul (2000). Making the right identification in the Turing test. Minds and Machines 10 (4):561-572.   (Cited by 7 | Google | More links)
    Abstract:   The test Turing proposed for machine intelligence is usually understood to be a test of whether a computer can fool a human into thinking that the computer is a human. This standard interpretation is rejected in favor of a test based on the Imitation Game introduced by Turing at the beginning of "Computing Machinery and Intelligence."
    Turney, Peter (ms). Answering subcognitive Turing test questions: A reply to French.   (Cited by 5 | Google | More links)
    Abstract: Robert French has argued that a disembodied computer is incapable of passing a Turing Test that includes subcognitive questions. Subcognitive questions are designed to probe the network of cultural and perceptual associations that humans naturally develop as we live, embodied and embedded in the world. In this paper, I show how it is possible for a disembodied computer to answer subcognitive questions appropriately, contrary to French’s claim. My approach to answering subcognitive questions is to use statistical information extracted from a very large collection of text. In particular, I show how it is possible to answer a sample of subcognitive questions taken from French, by issuing queries to a search engine that indexes about 350 million Web pages. This simple algorithm may shed light on the nature of human (sub-) cognition, but the scope of this paper is limited to demonstrating that French is mistaken: a disembodied computer can answer subcognitive questions
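    The statistical strategy summarized in the abstract above, scoring candidate answers to association ("subcognitive") questions by how often the corresponding phrases occur in a large body of text, can be sketched minimally as follows. Turney issued queries to a web search engine indexing hundreds of millions of pages; in this illustration a tiny in-memory corpus stands in for that index, and the corpus, question template, and candidate answers are invented for the example.

        # Minimal sketch: rank candidate answers to a "subcognitive" association
        # question by how often the corresponding phrase occurs in a corpus.
        # A small in-memory corpus stands in for the web index Turney queried.
        import re
        from collections import Counter

        def phrase_count(phrase, corpus):
            """Count case-insensitive occurrences of a phrase across the corpus."""
            pattern = re.compile(re.escape(phrase), re.IGNORECASE)
            return sum(len(pattern.findall(doc)) for doc in corpus)

        def rank_answers(template, candidates, corpus):
            """Score each candidate by the frequency of template.format(candidate)."""
            scores = Counter({c: phrase_count(template.format(c), corpus)
                              for c in candidates})
            return scores.most_common()

        corpus = [
            "She wrapped the warm, dry towel around her shoulders.",
            "Nothing beats a dry towel after a cold swim.",
            "He left the wet towel on the floor again.",
        ]
        # e.g. a question like "Is a towel nicer when it is wet or dry?"
        print(rank_answers("{} towel", ["dry", "wet"], corpus))  # [('dry', 2), ('wet', 1)]

    Turney's actual procedure compared search-engine hit counts for the competing phrasings; the ranking step is the same, only the frequency source differs.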
    Turing, Alan M. (1950). Computing machinery and intelligence. Mind 59 (October):433-60.   (Cited by 9 | Annotation | Google | More links)
    Abstract: I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B. We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"
    Vergauwen, Roger & González, Rodrigo (2005). On the verisimilitude of artificial intelligence. Logique et Analyse 190 (189):323-350.   (Google)
    Ward, Andrew (1989). Radical interpretation and the Gunderson game. Dialectica 43 (3):271-280.   (Google)
    Watt, S. (1996). Naive psychology and the inverted Turing test. Psycoloquy 7 (14).   (Cited by 19 | Google | More links)
    Abstract: This target article argues that the Turing test implicitly rests on a "naive psychology," a naturally evolved psychological faculty which is used to predict and understand the behaviour of others in complex societies. This natural faculty is an important and implicit bias in the observer's tendency to ascribe mentality to the system in the test. The paper analyses the effects of this naive psychology on the Turing test, both from the side of the system and the side of the observer, and then proposes and justifies an inverted version of the test which allows the processes of ascription to be analysed more directly than in the standard version
    Waterman, C. (1995). The Turing test and the argument from analogy for other minds. Southwest Philosophy Review 11 (1):15-22.   (Google)
    Whitby, Blay (1996). The Turing test: Ai's biggest blind Alley? In Peter Millican & A. Clark (eds.), Machines and Thought. Oxford University Press.   (Cited by 13 | Google)
    Whitby, Blay (1996). Why the Turing test is ai's biggest blind Alley. In Peter Millican & A. Clark (eds.), Machines and Thought, The Legacy of Alan Turing. Oup.   (Google)
    Zdenek, Sean (2001). Passing loebner's Turing test: A case of conflicting discourse functions. Minds and Machines 11 (1):53-76.   (Cited by 8 | Google | More links)
    Abstract:   This paper argues that the Turing test is based on a fixed and de-contextualized view of communicative competence. According to this view, a machine that passes the test will be able to communicate effectively in a variety of other situations. But the de-contextualized view ignores the relationship between language and social context, or, to put it another way, the extent to which speakers respond dynamically to variations in discourse function, formality level, social distance/solidarity among participants, and participants' relative degrees of power and status (Holmes, 1992). In the case of the Loebner Contest, a present day version of the Turing test, the social context of interaction can be interpreted in conflicting ways. For example, Loebner discourse is defined 1) as a friendly, casual conversation between two strangers of equal power, and 2) as a one-way transaction in which judges control the conversational floor in an attempt to expose contestants that are not human. This conflict in discourse function is irrelevant so long as the goal of the contest is to ensure that only thinking, human entities pass the test. But if the function of Loebner discourse is to encourage the production of software that can pass for human on the level of conversational ability, then the contest designers need to resolve this ambiguity in discourse function, and thus also come to terms with the kind of competence they are trying to measure

    6.1b Godelian arguments

    Benacerraf, Paul (1967). God, the devil, and Godel. The Monist 51 (January):9-32.   (Annotation | Google)
    Bojadziev, Damjan (1997). Mind versus Godel. In Matjaz Gams & M. Wu Paprzycki (eds.), Mind Versus Computer. IOS Press.   (Cited by 1 | Google | More links)
    Bowie, G. Lee (1982). Lucas' number is finally up. Journal of Philosophical Logic 11 (August):279-85.   (Cited by 10 | Annotation | Google | More links)
    Boyer, David L. (1983). R. Lucas, Kurt Godel, and Fred Astaire. Philosophical Quarterly 33 (April):147-59.   (Annotation | Google | More links)
    Bringsjord, Selmer & Xiao, H. (2000). A refutation of Penrose's new Godelian case against the computational conception of mind. Journal of Experimental and Theoretical Artificial Intelligence 12.   (Google)
    Chari, C. T. K. (1963). Further comments on minds, machines and Godel. Philosophy 38 (April):175-8.   (Annotation | Google)
    Chalmers, David J. (1996). Minds, machines, and mathematics. Psyche 2:11-20.   (Cited by 17 | Google | More links)
    Abstract: In his stimulating book SHADOWS OF THE MIND, Roger Penrose presents arguments, based on Gödel's theorem, for the conclusion that human thought is uncomputable. There are actually two separate arguments in Penrose's book. The second has been widely ignored, but seems to me to be much more interesting and novel than the first. I will address both forms of the argument in some detail. Toward the end, I will also comment on Penrose's proposals for a "new science of consciousness"
    Chihara, C. (1972). On alleged refutations of mechanism using Godel's incompleteness results. Journal of Philosophy 69 (September):507-26.   (Cited by 9 | Annotation | Google | More links)
    Coder, David (1969). Godel's theorem and mechanism. Philosophy 44 (September):234-7.   (Annotation | Google)
    Copeland, Jack (1998). Turing's o-machines, Searle, Penrose, and the brain. Analysis 58 (2):128-138.   (Cited by 15 | Google | More links)
    Abstract: In his PhD thesis (1938) Turing introduced what he described as 'a new kind of machine'. He called these 'O-machines'. The present paper employs Turing's concept against a number of currently fashionable positions in the philosophy of mind
    Dennett, Daniel C. (1989). Murmurs in the cathedral: Review of R. Penrose, The Emperor's New Mind. Times Literary Supplement, September 29.   (Cited by 5 | Google)
    Abstract: The idea that a computer could be conscious--or equivalently, that human consciousness is the effect of some complex computation mechanically performed by our brains--strikes some scientists and philosophers as a beautiful idea. They find it initially surprising and unsettling, as all beautiful ideas are, but the inevitable culmination of the scientific advances that have gradually demystified and unified the material world. The ideologues of Artificial Intelligence (AI) have been its most articulate supporters. To others, this idea is deeply repellent: philistine, reductionistic (in some bad sense), as incredible as it is offensive. John Searle's attack on "strong AI" is the best known expression of this view, but others in the same camp, liking Searle's destination better than his route, would dearly love to see a principled, scientific argument showing that strong AI is impossible. Roger Penrose has set out to provide just such an argument
    Dennett, Daniel C. (1978). The abilities of men and machines. In Brainstorms. MIT Press.   (Cited by 3 | Annotation | Google)
    Edis, Taner (1998). How Godel's theorem supports the possibility of machine intelligence. Minds and Machines 8 (2):251-262.   (Google | More links)
    Abstract:   Gödel's Theorem is often used in arguments against machine intelligence, suggesting humans are not bound by the rules of any formal system. However, Gödelian arguments can be used to support AI, provided we extend our notion of computation to include devices incorporating random number generators. A complete description scheme can be given for integer functions, by which nonalgorithmic functions are shown to be partly random. Not being restricted to algorithms can be accounted for by the availability of an arbitrary random function. Humans, then, might not be rule-bound, but Gödelian arguments also suggest how the relevant sort of nonalgorithmicity may be trivially made available to machines
    Feferman, S. (1996). Penrose's Godelian argument. Psyche 2:21-32.   (Google)
    Abstract: In his book Shadows of the Mind: A search for the missing science of consciousness [SM below], Roger Penrose has turned in another bravura performance, the kind we have come to expect ever since The Emperor’s New Mind [ENM] appeared. In the service of advancing his deep convictions and daring conjectures about the nature of human thought and consciousness, Penrose has once more drawn a wide swath through such topics as logic, computation, artificial intelligence, quantum physics and the neuro-physiology of the brain, and has produced along the way many gems of exposition of difficult mathematical and scientific ideas, without condescension, yet which should be broadly appealing. While the aims and a number of the topics in SM are the same as in ENM, the focus now is much more on the two axes that Penrose grinds in earnest. Namely, in the first part of SM he argues anew and at great length against computational models of the mind and more specifically against any account of mathematical thought in computational terms. Then in the second part, he argues that there must be a scientific account of consciousness but that will require a (still to be found) non-computational extension or modification of present-day quantum physics
    Gaifman, H. (2000). What Godel's incompleteness result does and does not show. Journal of Philosophy 97 (8):462-471.   (Cited by 3 | Google | More links)
    Abstract: In a recent paper S. McCall adds another link to a chain of attempts to enlist Gödel’s incompleteness result as an argument for the thesis that human reasoning cannot be construed as being carried out by a computer.1 McCall’s paper is undermined by a technical oversight. My concern however is not with the technical point. The argument from Gödel’s result to the no-computer thesis can be made without following McCall’s route; it is then straighter and more forceful. Yet the argument fails in an interesting and revealing way. And it leaves a remainder: if some computer does in fact simulate all our mathematical reasoning, then, in principle, we cannot fully grasp how it works. Gödel’s result also points out a certain essential limitation of self-reflection. The resulting picture parallels, not accidentally, Davidson’s view of psychology, as a science that in principle must remain “imprecise”, not fully spelt out. What is intended here by “fully grasp”, and how all this is related to self-reflection, will become clear at the end of this comment
    George, A. & Velleman, Daniel J. (2000). Leveling the playing field between mind and machine: A reply to McCall. Journal of Philosophy 97 (8):456-452.   (Cited by 3 | Google | More links)
    George, F. H. (1962). Minds, machines and Godel: Another reply to Mr. Lucas. Philosophy 37 (January):62-63.   (Annotation | Google)
    Gertler, Brie (2004). Simulation theory on conceptual grounds. Protosociology 20:261-284.   (Google)
    Abstract: I will present a conceptual argument for a simulationist answer to (2). Given that our conception of mental states is employed in attributing mental states to others, a simulationist answer to (2) supports a simulationist answer to (1). I will not address question (3). Answers to (1) and (2) do not yield an answer to (3), since (1) and (2) concern only our actual practices and concepts. For instance, an error theory about (1) and (2) would say that our practices and concepts manifest a mistaken view about the real nature of the mental. Finally, I will not address question (2a), which is an empirical question and so is not immediately relevant to the conceptual argument that is of concern here
    Good, I. J. (1969). Godel's theorem is a red Herring. British Journal for the Philosophy of Science 19 (February):357-8.   (Cited by 8 | Annotation | Google | More links)
    Good, I. J. (1967). Human and machine logic. British Journal for the Philosophy of Science 18 (August):145-6.   (Cited by 7 | Annotation | Google | More links)
    Gordon, Robert M. (online). Folk Psychology As Mental Simulation. Stanford Encyclopedia of Philosophy.   (Cited by 8 | Google)
    Abstract: by, or is otherwise relevant to the seminar "Folk Psychology vs. Mental Simulation: How Minds Understand Minds," a National
    Grush, Rick & Churchland, P. (1995). Gaps in Penrose's toiling. In Thomas Metzinger (ed.), Conscious Experience. Ferdinand Schoningh.   (Google | More links)
    Abstract: Using the Gödel Incompleteness Result for leverage, Roger Penrose has argued that the mechanism for consciousness involves quantum gravitational phenomena, acting through microtubules in neurons. We show that this hypothesis is implausible. First, the Gödel Result does not imply that human thought is in fact non algorithmic. Second, whether or not non algorithmic quantum gravitational phenomena actually exist, and if they did how that could conceivably implicate microtubules, and if microtubules were involved, how that could conceivably implicate consciousness, is entirely speculative. Third, cytoplasmic ions such as calcium and sodium are almost certainly present in the microtubule pore, barring the quantum mechanical effects Penrose envisages. Finally, physiological evidence indicates that consciousness does not directly depend on microtubule properties in any case, rendering doubtful any theory according to which consciousness is generated in the microtubules
    Hadley, Robert F. (1987). Godel, Lucas, and mechanical models of mind. Computational Intelligence 3:57-63.   (Cited by 1 | Annotation | Google | More links)
    Hanson, William H. (1971). Mechanism and Godel's theorem. British Journal for the Philosophy of Science 22 (February):9-16.   (Annotation | Google | More links)
    Hofstadter, Douglas R. (1979). Godel, Escher, Bach: An Eternal Golden Braid. Basic Books.   (Cited by 65 | Annotation | Google | More links)
    Hutton, A. (1976). This Godel is killing me. Philosophia 6 (March):135-44.   (Annotation | Google)
    Irvine, Andrew D. (1983). Lucas, Lewis, and mechanism -- one more time. Analysis 43 (March):94-98.   (Annotation | Google)
    Jacquette, Dale (1987). Metamathematical criteria for minds and machines. Erkenntnis 27 (July):1-16.   (Cited by 3 | Annotation | Google | More links)
    Ketland, Jeffrey & Raatikainen, Panu (online). Truth and provability again.   (Google)
    King, D. (1996). Is the human mind a Turing machine? Synthese 108 (3):379-89.   (Google | More links)
    Abstract:   In this paper I discuss the topics of mechanism and algorithmicity. I emphasise that a characterisation of algorithmicity such as the Turing machine is iterative; and I argue that if the human mind can solve problems that no Turing machine can, the mind must depend on some non-iterative principle — in fact, Cantor's second principle of generation, a principle of the actual infinite rather than the potential infinite of Turing machines. But as there has been theorisation that all physical systems can be represented by Turing machines, I investigate claims that seem to contradict this: specifically, claims that there are noncomputable phenomena. One conclusion I reach is that if it is believed that the human mind is more than a Turing machine, a belief in a kind of Cartesian dualist gulf between the mental and the physical is concomitant
    Kirk, Robert E. (1986). Mental machinery and Godel. Synthese 66 (March):437-452.   (Annotation | Google)
    Laforte, Geoffrey; Hayes, Pat & Ford, Kenneth M. (1998). Why Godel's theorem cannot refute computationalism: A reply to Penrose. Artificial Intelligence 104.   (Google)
    Leslie, Alan M.; Nichols, Shaun; Stich, Stephen P. & Klein, David B. (1996). Varieties of off-line simulation. In P. Carruthers & P. Smith (eds.), Theories of Theories of Mind. Cambridge University Press.   (Google)
    Abstract: In the last few years, off-line simulation has become an increasingly important alternative to standard explanations in cognitive science. The contemporary debate began with Gordon (1986) and Goldman's (1989) off-line simulation account of our capacity to predict behavior. On their view, in predicting people's behavior we take our own decision making system `off line' and supply it with the `pretend' beliefs and desires of the person whose behavior we are trying to predict; we then let the decision maker reach a decision on the basis of these pretend inputs. Figure 1 offers a `boxological' version of the off-line simulation theory of behavior prediction.(1)
    Lewis, David (1969). Lucas against mechanism. Philosophy 44 (June):231-3.   (Cited by 10 | Annotation | Google)
    Lewis, David (1979). Lucas against mechanism II. Canadian Journal of Philosophy 9 (June):373-6.   (Cited by 7 | Annotation | Google)
    Lindstrom, Per (2006). Remarks on Penrose's new argument. Journal of Philosophical Logic 35 (3):231-237.   (Google | More links)
    Abstract: It is commonly agreed that the well-known Lucas–Penrose arguments and even Penrose’s ‘new argument’ in [Penrose, R. (1994): Shadows of the Mind, Oxford University Press] are inconclusive. It is, perhaps, less clear exactly why at least the latter is inconclusive. This note continues the discussion in [Lindström, P. (2001): Penrose’s new argument, J. Philos. Logic 30, 241–250; Shapiro, S.(2003): Mechanism, truth, and Penrose’s new argument, J. Philos. Logic 32, 19–42] and elsewhere of this question
    Lucas, John R. (1967). Human and machine logic: A rejoinder. British Journal for the Philosophy of Science 19 (August):155-6.   (Cited by 3 | Annotation | Google | More links)
    Abstract: We can imagine a human operator playing a game of one-upmanship against a programmed computer. If the program is Fn, the human operator can print the theorem Gn, which the programmed computer, or, if you prefer, the program, would never print, if it is consistent. This is true for each whole number n, but the victory is a hollow one since a second computer, loaded with program C, could put the human operator out of a job.... It is useless for the `mentalist' to argue that any given program can always be improved since the process for improving programs can presumably be programmed also; certainly this can be done if the mentalist describes how the improvement is to be made. If he does give such a description, then he has not made a case
    Lucas, John R. (1984). Lucas against mechanism II: A rejoinder. Canadian Journal of Philosophy 14 (June):189-91.   (Cited by 2 | Annotation | Google)
    Lucas, John R. (1970). Mechanism: A rejoinder. Philosophy 45 (April):149-51.   (Annotation | Google)
    Lucas, John R. (1971). Metamathematics and the philosophy of mind: A rejoinder. Philosophy of Science 38 (2):310-13.   (Cited by 4 | Google | More links)
    Lucas, John R. (1961). Minds, machines and Godel. Philosophy 36 (April-July):112-127.   (Cited by 72 | Annotation | Google | More links)
    Abstract: Goedel's theorem states that in any consistent system which is strong enough to produce simple arithmetic there are formulae which cannot be proved-in-the-system, but which we can see to be true. Essentially, we consider the formula which says, in effect, "This formula is unprovable-in-the-system". If this formula were provable-in-the-system, we should have a contradiction: for if it were provable-in-the-system, then it would not be unprovable-in-the-system, so that "This formula is unprovable-in-the-system" would be false: equally, if it were provable-in-the-system, then it would not be false, but would be true, since in any consistent system nothing false can be proved-in-the-system, but only truths. So the formula "This formula is unprovable-in-the-system" is not provable-in-the-system, but unprovable-in-the-system. Further, if the formula "This formula is unprovable-in-the-system" is unprovable-in-the-system, then it is true that that formula is unprovable-in-the-system, that is, "This formula is unprovable-in-the-system" is true. Goedel's theorem must apply to cybernetical machines, because it is of the essence of being a machine, that it should be a concrete instantiation of a formal system. It follows that given any machine which is consistent and capable of doing simple arithmetic, there is a formula which it is incapable of producing as being true---i.e., the formula is unprovable-in-the-system---but which we can see to be true. It follows that no machine can be a complete or adequate model of the mind, that minds are essentially different from machines
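    The reasoning in this abstract can be set out schematically in standard notation (a reconstruction for reference, not Lucas's own symbolism):

        For any consistent, recursively axiomatized system $F$ containing elementary arithmetic,
        Gödel's construction yields a sentence $G_F$ such that
        \[
          F \vdash \bigl( G_F \leftrightarrow \neg\mathrm{Prov}_F(\ulcorner G_F \urcorner) \bigr)
          \qquad\text{and}\qquad
          F \nvdash G_F \ \ \text{(given the consistency of } F\text{)},
        \]
        so that, on the standard interpretation, $G_F$ is true. Lucas's further step is the schema:
        if a machine $M$ is a concrete instantiation of such an $F$, then the mind can assert the
        truth of $G_F$ while $M$ cannot prove it, and hence (he concludes) no such $M$ is an
        adequate model of the mind.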
    Lucas, John R. (1996). Mind, machines and Godel: A retrospect. In Peter Millican & A. Clark (eds.), Machines and Thought. Oxford University Press.   (Annotation | Google)
    Lucas, John R. (1968). Satan stultified: A rejoinder to Paul Benacerraf. The Monist 52 (1):145-58.   (Cited by 10 | Annotation | Google)
    Abstract: The argument is a dialectical one. It is not a direct proof that the mind is something more than a machine, but a schema of disproof for any particular version of mechanism that may be put forward. If the mechanist maintains any specific thesis, I show that [146] a contradiction ensues. But only if. It depends on the mechanist making the first move and putting forward his claim for inspection. I do not think Benacerraf has quite taken the point. He criticizes me both for "failing to notice" that my ability to show that the Gödel sentence of a formal system is true "depends very much on how he is given
    Lucas, John R. & Redhead, Michael (2007). Truth and provability. British Journal for the Philosophy of Science 58 (2):331-2.   (Google | More links)
    Abstract: The views of Redhead ([2004]) are defended against the argument by Panu Raatikainen ([2005]). The importance of informal rigour is canvassed, and the argument for the a priori nature of induction is explained. The significance of Gödel's theorem is again rehearsed
    Lucas, John R. (1970). The Freedom of the Will. Oxford University Press.   (Cited by 22 | Google)
    Abstract: It might be the case that absence of constraint is the relevant sense of 'freedom' when we are discussing the freedom of the will, but it needs arguing for. ...
    Lucas, John R. (ms). The Godelian argument: Turn over the page.   (Cited by 3 | Google)
    Abstract: I have no quarrel with the first two sentences: but the third, though charitable and courteous, is quite untrue. Although there are criticisms which can be levelled against the Gödelian argument, most of the critics have not read either of my, or either of Penrose's, expositions carefully, and seek to refute arguments we never put forward, or else propose as a fatal objection one that had already been considered and countered in our expositions of the argument. Hence my title. The Gödelian Argument uses Gödel's theorem to show that minds cannot be explained in purely mechanist terms. It has been put forward, in different forms, by Gödel himself, by Penrose, and by me
    Lucas, John R. (1976). This Godel is killing me: A rejoinder. Philosophia 6 (March):145-8.   (Annotation | Google)
    Lucas, John R. (ms). The implications of Godel's theorem.   (Google | More links)
    Abstract: In 1931 Kurt Gödel proved two theorems about the completeness and consistency of first-order arithmetic. Their implications for philosophy are profound. Many fashionable tenets are shown to be untenable: many traditional intuitions are vindicated by incontrovertible arguments
    Lyngzeidetson, Albert E. & Solomon, Martin K. (1994). Abstract complexity theory and the mind-machine problem. British Journal for the Philosophy of Science 45 (2):549-54.   (Google | More links)
    Abstract: In this paper we interpret a characterization of the Gödel speed-up phenomenon as providing support for the ‘Nagel-Newman thesis’ that human theorem recognizers differ from mechanical theorem recognizers in that the former do not seem to be limited by Gödel's incompleteness theorems whereas the latter do seem to be thus limited. However, we also maintain that (currently non-existent) programs which are open systems in that they continuously interact with, and are thus inseparable from, their environment, are not covered by the above (or probably any other recursion-theoretic) argument
    Lyngzeidetson, Albert E. (1990). Massively parallel distributed processing and a computationalist foundation for cognitive science. British Journal for the Philosophy of Science 41 (March):121-127.   (Annotation | Google | More links)
    Martin, J. & Engleman, K. (1990). The mind's I has two eyes. Philosophy 65 (264):510-515.   (Annotation | Google)
    Maudlin, Tim (1996). Between the motion and the act. Psyche 2:40-51.   (Cited by 4 | Google | More links)
    McCall, Storrs (1999). Can a Turing machine know that the Godel sentence is true? Journal of Philosophy 96 (10):525-32.   (Cited by 6 | Google | More links)
    McCullough, D. (1996). Can humans escape Godel? Psyche 2:57-65.   (Google)
    McCall, Storrs (2001). On "seeing" the truth of the Godel sentence. Facta Philosophica 3:25-30.   (Google)
    McDermott, Drew (1996). [Star] Penrose is wrong. Psyche 2:66-82.   (Google)
    Megill, Jason L. (2004). Are we paraconsistent? On the Lucas-Penrose argument and the computational theory of mind. Auslegung 27 (1):23-30.   (Google)
    Nelson, E. (2002). Mathematics and the mind. In Kunio Yasue, Mari Jibu & Tarcisio Della Senta (eds.), No Matter, Never Mind. John Benjamins.   (Cited by 2 | Google | More links)
    Penrose, Roger (1996). Beyond the doubting of a shadow. Psyche 2:89-129.   (Cited by 25 | Annotation | Google | More links)
    Penrose, Roger (1990). Precis of the emperor's new mind. Behavioral and Brain Sciences 13:643-705.   (Annotation | Google)
    Penrose, Roger (1994). Shadows of the Mind. Oxford University Press.   (Cited by 1412 | Google | More links)
    Penrose, Roger (1992). Setting the scene: The claim and the issues. In D. Broadbent (ed.), The Simulation of Human Intelligence. Blackwell.   (Annotation | Google)
    Penrose, Roger (1989). The Emperor's New Mind. Oxford University Press.   (Cited by 3 | Annotation | Google | More links)
    Piccinini, Gualtiero (2003). Alan Turing and the mathematical objection. Minds and Machines 13 (1):23-48.   (Cited by 10 | Google | More links)
    Abstract: This paper concerns Alan Turing’s ideas about machines, mathematical methods of proof, and intelligence. By the late 1930s, Kurt Gödel and other logicians, including Turing himself, had shown that no finite set of rules could be used to generate all true mathematical statements. Yet according to Turing, there was no upper bound to the number of mathematical truths provable by intelligent human beings, for they could invent new rules and methods of proof. So, the output of a human mathematician, for Turing, was not a computable sequence (i.e., one that could be generated by a Turing machine). Since computers only contained a finite number of instructions (or programs), one might argue, they could not reproduce human intelligence. Turing called this the “mathematical objection” to his view that machines can think. Logico-mathematical reasons, stemming from his own work, helped to convince Turing that it should be possible to reproduce human intelligence, and eventually compete with it, by developing the appropriate kind of digital computer. He felt it should be possible to program a computer so that it could learn or discover new rules, overcoming the limitations imposed by the incompleteness and undecidability results in the same way that human mathematicians presumably do.
    Priest, Graham (1994). Godel's theorem and the mind... Again. In M. Michael & John O'Leary-Hawthorne (eds.), Philosophy in Mind: The Place of Philosophy in the Study of Mind. Kluwer.   (Google)
    Putnam, Hilary (1995). Review of Shadows of the Mind. AMS Bulletin 32 (3).   (Google)
    Putnam, Hilary (1985). Reflexive reflections. Erkenntnis 22 (January):143-153.   (Cited by 8 | Annotation | Google | More links)
    Raatikainen, Panu, McCall's Gödelian argument is invalid.   (Google)
    Abstract: Storrs McCall continues the tradition of Lucas and Penrose in an attempt to refute mechanism by appealing to Gödel’s incompleteness theorem (McCall 2001). That is, McCall argues that Gödel’s theorem “reveals a sharp dividing line between human and machine thinking”. According to McCall, “[h]uman beings are familiar with the distinction between truth and theoremhood, but Turing machines cannot look beyond their own output”. However, although McCall’s argumentation is slightly more sophisticated than the earlier Gödelian anti-mechanist arguments, in the end it fails badly, as it is at odds with the logical facts
    Raatikainen, Panu (2005). On the philosophical relevance of Gödel's incompleteness theorems. Revue Internationale de Philosophie 59 (4):513-534.   (Google)
    Abstract: Gödel began his 1951 Gibbs Lecture by stating: “Research in the foundations of mathematics during the past few decades has produced some results which seem to me of interest, not only in themselves, but also with regard to their implications for the traditional philosophical problems about the nature of mathematics.” (Gödel 1951) Gödel is referring here especially to his own incompleteness theorems (Gödel 1931). Gödel’s first incompleteness theorem (as improved by Rosser (1936)) says that for any consistent formalized system F, which contains elementary arithmetic, there exists a sentence GF of the language of the system which is true but unprovable in that system. Gödel’s second incompleteness theorem states that no consistent formal system can prove its own consistency
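    For reference, the two results Raatikainen cites can be written out in their usual modern form (standard textbook statements, not quotations from the paper):

        First incompleteness theorem (with Rosser's improvement): if $F$ is a consistent,
        recursively axiomatizable system containing elementary arithmetic, then there is a
        sentence $G_F$ of the language of $F$ with
        \[
          F \nvdash G_F \quad\text{and}\quad F \nvdash \neg G_F ,
        \]
        and $G_F$ is true in the standard model. Second incompleteness theorem: under the same
        hypotheses (and the usual derivability conditions),
        \[
          F \nvdash \mathrm{Con}(F), \qquad \mathrm{Con}(F) := \neg\mathrm{Prov}_F(\ulcorner 0 = 1 \urcorner).
        \]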
    Raatikainen, Panu (2005). Truth and provability: A comment on Redhead. British Journal for the Philosophy of Science 56 (3):611-613.   (Cited by 2 | Google | More links)
    Abstract: Michael Redhead's recent argument aiming to show that humanly certifiable truth outruns provability is critically evaluated. It is argued that the argument is at odds with logical facts and fails
    Raatikainen, Panu (ms). Truth and provability again.   (Google)
    Abstract: Lucas and Redhead ([2007]) announce that they will defend the views of Redhead ([2004]) against the argument by Panu Raatikainen ([2005]). They certainly re-state the main claims of Redhead ([2004]), but they do not give any real arguments in their favour, and do not provide anything that would save Redhead’s argument from the serious problems pointed out in (Raatikainen [2005]). Instead, Lucas and Redhead make a number of seemingly irrelevant points, perhaps indicating a failure to understand the logico-mathematical points at issue
    Redhead, M. (2004). Mathematics and the mind. British Journal for the Philosophy of Science 55 (4):731-737.   (Cited by 6 | Google | More links)
    Abstract: Granted that truth is valuable we must recognize that certifiable truth is hard to come by, for example in the natural and social sciences. This paper examines the case of mathematics. As a result of the work of Gödel and Tarski we know that truth does not equate with proof. This has been used by Lucas and Penrose to argue that human minds can do things which digital computers can't, viz to know the truth of unprovable arithmetical statements. The argument is given a simple formulation in the context of sorites (Robinson) arithmetic, avoiding the complexities of formulating the Gödel sentence. The pros and cons of the argument are considered in relation to the conception of mathematical truth. * Paper contributed to the Conference entitled The Place of Value in a World of Facts, held at the LSE in October 2003
    Robinson, William S. (1992). Penrose and mathematical ability. Analysis 52 (2):80-88.   (Annotation | Google)
    Schurz, Gerhard (2002). McCall and Raatikainen on mechanism and incompleteness. Facta Philosophica 4:171-74.   (Google)
    Seager, William E. (2003). Yesterday's algorithm: Penrose and the Godel argument. Croatian Journal of Philosophy 3 (9):265-273.   (Google)
    Abstract: Roger Penrose is justly famous for his work in physics and mathematics but he is _notorious_ for his endorsement of the Gödel argument (see his 1989, 1994, 1997). This argument, first advanced by J. R. Lucas (in 1961), attempts to show that Gödel’s (first) incompleteness theorem can be seen to reveal that the human mind transcends all algorithmic models of it. Penrose's version of the argument has been seen to fall victim to the original objections raised against Lucas (see Boolos (1990) and, for a particularly intemperate review, Putnam (1994)). Yet I believe that more can and should be said about the argument. Only a brief review is necessary here although I wish to present the argument in a somewhat peculiar form
    Slezak, Peter (1983). Descartes's diagonal deduction. British Journal for the Philosophy of Science 34 (March):13-36.   (Cited by 13 | Annotation | Google | More links)
    Slezak, Peter (1982). Godel's theorem and the mind. British Journal for the Philosophy of Science 33 (March):41-52.   (Cited by 13 | Annotation | Google | More links)
    Slezak, Peter (1984). Minds, machines and self-reference. Dialectica 38:17-34.   (Cited by 1 | Google | More links)
    Sloman, Aaron (1986). The emperor's real mind. In A.G. Cohn & J.R. Thomas (eds.), Artificial Intelligence and Its Applications. John Wiley and Sons.   (Google)
    Smart, J. J. C. (1961). Godel's theorem, Church's theorem, and mechanism. Synthese 13 (June):105-10.   (Annotation | Google)
    Stone, Tony & Davies, Martin (1998). Folk psychology and mental simulation. Royal Institute of Philosophy Supplement 43:53-82.   (Google | More links)
    Abstract: This paper is about the contemporary debate concerning folk psychology – the debate between the proponents of the theory theory of folk psychology and the friends of the simulation alternative.1 At the outset, we need to ask: What should we mean by this term ‘folk psychology’?
    Tymoczko, Thomas (1991). Why I am not a Turing machine: Godel's theorem and the philosophy of mind. In Jay L. Garfield (ed.), Foundations of Cognitive Science. Paragon House.   (Annotation | Google)
    Wang, H. (1974). From Mathematics to Philosophy. London.   (Cited by 125 | Google)
    Webb, Judson (1968). Metamathematics and the philosophy of mind. Philosophy of Science 35 (June):156-78.   (Cited by 6 | Google | More links)
    Webb, Judson (1980). Mechanism, Mentalism and Metamathematics. Kluwer.   (Cited by 45 | Google)
    Whitely, C. (1962). Minds, machines and Godel: A reply to Mr. Lucas. Philosophy 37 (January):61-62.   (Annotation | Google)
    Yu, Q. (1992). Consistency, mechanicalness, and the logic of the mind. Synthese 90 (1):145-79.   (Cited by 4 | Google | More links)
    Abstract:   G. Priest's anti-consistency argument (Priest 1979, 1984, 1987) and J. R. Lucas's anti-mechanist argument (Lucas 1961, 1968, 1970, 1984) both appeal to Gödel incompleteness. By way of refuting them, this paper defends the thesis of quartet compatibility, viz., that the logic of the mind can simultaneously be Gödel incomplete, consistent, mechanical, and recursion complete (capable of all means of recursion). A representational approach is pursued, which owes its origin to works by, among others, J. Myhill (1964), P. Benacerraf (1967), J. Webb (1980, 1983) and M. Arbib (1987). It is shown that the fallacy shared by the two arguments under discussion lies in misidentifying two systems, the one for which the Gödel sentence is constructable and to be proved, and the other in which the Gödel sentence in question is indeed provable. It follows that the logic of the mind can surpass its own Gödelian limitation not by being inconsistent or non-mechanistic, but by being capable of representing stronger systems in itself; and so can a proper machine. The concepts of representational provability, representational maximality, formal system capacity, etc., are discussed

    6.1c The Chinese Room

    Adam, Alison (2003). Cyborgs in the chinese room: Boundaries transgressed and boundaries blurred. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Google)
    Aleksander, Igor L. (2003). Neural depictions of "world" and "self": Bringing computational understanding into the chinese room. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Google)
    Anderson, David (1987). Is the chinese room the real thing? Philosophy 62 (July):389-93.   (Cited by 9 | Google)
    Andrews, Kristin (online). On predicting behavior.   (Google)
    Abstract: I argue that the behavior of other agents is insufficiently described in current debates as a dichotomy between tacit theory (attributing beliefs and desires to predict behavior) and simulation theory (imagining what one would do in similar circumstances in order to predict behavior). I introduce two questions about the foundation and development of our ability both to attribute belief and to simulate it. I then propose that there is one additional method used to predict behavior, namely, an inductive strategy
    Atlas, Jay David, What is it like to be a chinese room?   (Google | More links)
    Abstract: When philosophers think about mental phenomena, they focus on several features of human experience: (1) the existence of consciousness, (2) the intentionality of mental states, that property by which beliefs, desires, anger, etc. are directed at, are about, or refer to objects and states of affairs, (3) subjectivity, characterized by my feeling my pains but not yours, by my experiencing the world and myself from my point of view and not yours, (4) mental causation, that thoughts and feelings have physical effects on the world: I decide to raise my arm and my arm rises. In a world described by theories of physics and chemistry, what place in that physical description do descriptions of the mental have?
    Ben-Yami, Hanoch (1993). A note on the chinese room. Synthese 95 (2):169-72.   (Cited by 3 | Annotation | Google | More links)
    Abstract:   Searle's Chinese Room was supposed to prove that computers can't understand: the man in the room, following, like a computer, syntactical rules alone, though indistinguishable from a genuine Chinese speaker, doesn't understand a word. But such a room is impossible: the man won't be able to respond correctly to questions like What is the time?, even though such an ability is indispensable for a genuine Chinese speaker. Several ways to provide the room with the required ability are considered, and it is concluded that for each of these the room will have understanding. Hence, Searle's argument is invalid
    Block, Ned (2003). Searle's arguments against cognitive science. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 2 | Google)
    Boden, Margaret A. (1988). Escaping from the chinese room. In Computer Models of Mind. Cambridge University Press.   (Cited by 21 | Annotation | Google)
    Bringsjord, Selmer & Noel, Ron (2003). Real robots and the missing thought-experiment in the chinese room dialectic. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Google | More links)
    Brown, Steven Ravett (2000). Peirce and formalization of thought: The chinese room argument. Journal of Mind and Behavior.   (Google | More links)
    Abstract: Whether human thinking can be formalized and whether machines can think in a human sense are questions that have been addressed by both Peirce and Searle. Peirce came to roughly the same conclusion as Searle, that the digital computer would not be able to perform human thinking or possess human understanding. However, his rationale and Searle's differ on several important points. Searle approaches the problem from the standpoint of traditional analytic philosophy, where the strict separation of syntax and semantics renders understanding impossible for a purely syntactical device. Peirce disagreed with that analysis, but argued that the computer would only be able to achieve algorithmic thinking, which he considered the simplest type. Although their approaches were radically dissimilar, their conclusions were not. I will compare and analyze the arguments of both Peirce and Searle on this issue, and outline some implications of their conclusions for the field of Artificial Intelligence
    Button, Graham; Coulter, Jeff & Lee, John R. E. (2000). Re-entering the chinese room: A reply to Gottfried and Traiger. Minds and Machines 10 (1):145-148.   (Google | More links)
    Bynum, Terrell Ward (1985). Artificial intelligence, biology, and intentional states. Metaphilosophy 16 (October):355-77.   (Cited by 9 | Annotation | Google | More links)
    Cam, Philip (1990). Searle on strong AI. Australasian Journal of Philosophy 68 (1):103-8.   (Cited by 2 | Annotation | Google | More links)
    Carleton, Lawrence Richard (1984). Programs, language understanding, and Searle. Synthese 59 (May):219-30.   (Cited by 8 | Annotation | Google | More links)
    Chalmers, David J. (1992). Subsymbolic computation and the chinese room. In J. Dinsmore (ed.), The Symbolic and Connectionist Paradigms: Closing the Gap. Lawrence Erlbaum.   (Cited by 29 | Annotation | Google | More links)
    Abstract: More than a decade ago, philosopher John Searle started a long-running controversy with his paper “Minds, Brains, and Programs” (Searle, 1980a), an attack on the ambitious claims of artificial intelligence (AI). With his now famous _Chinese Room_ argument, Searle claimed to show that despite the best efforts of AI researchers, a computer could never recreate such vital properties of human mentality as intentionality, subjectivity, and understanding. The AI research program is based on the underlying assumption that all important aspects of human cognition may in principle be captured in a computational model. This assumption stems from the belief that beyond a certain level, implementational details are irrelevant to cognition. According to this belief, neurons, and biological wetware in general, have no preferred status as the substrate for a mind. As it happens, the best examples of minds we have at present have arisen from a carbon-based substrate, but this is due to constraints of evolution and possibly historical accidents, rather than to an absolute metaphysical necessity. As a result of this belief, many cognitive scientists have chosen to focus not on the biological substrate of the mind, but instead on the abstract causal structure_ _that the mind embodies (at an appropriate level of abstraction). The view that it is abstract causal structure that is essential to mentality has been an implicit assumption of the AI research program since Turing (1950), but was first articulated explicitly, in various forms, by Putnam (1960), Armstrong (1970) and Lewis (1970), and has become known as _functionalism_. From here, it is a very short step to _computationalism_, the view that computational structure is what is important in capturing the essence of mentality. This step follows from a belief that any abstract causal structure can be captured computationally: a belief made plausible by the Church–Turing Thesis, which articulates the power
    Churchland, Paul M. & Churchland, Patricia S. (1990). Could a machine think? Scientific American 262 (1):32-37.   (Cited by 102 | Annotation | Google | More links)
    Cohen, L. Jonathan (1986). What sorts of machines can understand the symbols they use? Proceedings of the Aristotelian Society 60:81-96.   (Google)
    Cole, David J. (1991). Artificial intelligence and personal identity. Synthese 88 (September):399-417.   (Cited by 18 | Annotation | Google | More links)
    Abstract:   Considerations of personal identity bear on John Searle's Chinese Room argument, and on the opposed position that a computer itself could really understand a natural language. In this paper I develop the notion of a virtual person, modelled on the concept of virtual machines familiar in computer science. I show how Searle's argument, and J. Maloney's attempt to defend it, fail. I conclude that Searle is correct in holding that no digital machine could understand language, but wrong in holding that artificial minds are impossible: minds and persons are not the same as the machines, biological or electronic, that realize them
    Cole, David J. (1991). Artificial minds: Cam on Searle. Australasian Journal of Philosophy 69 (September):329-33.   (Cited by 3 | Google | More links)
    Cole, David J. (1984). Thought and thought experiments. Philosophical Studies 45 (May):431-44.   (Cited by 15 | Annotation | Google | More links)
    Cole, David J. (1994). The causal powers of CPUs. In Eric Dietrich (ed.), Thinking Computers and Virtual Persons. Academic Press.   (Cited by 2 | Google)
    Cole, David (online). The chinese room argument. Stanford Encyclopedia of Philosophy.   (Google)
    Copeland, B. Jack (1993). The curious case of the chinese gym. Synthese 95 (2):173-86.   (Cited by 12 | Annotation | Google | More links)
    Abstract:   Searle has recently used two adaptations of his Chinese room argument in an attack on connectionism. I show that these new forms of the argument are fallacious. First I give an exposition of and rebuttal to the original Chinese room argument, and then a brief introduction to the essentials of connectionism
    Copeland, B. Jack (2003). The chinese room from a logical point of view. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 5 | Google)
    Coulter, Jeff & Sharrock, S. (2003). The hinterland of the chinese room. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Google)
    Cutrona, Jr (ms). Zombies in Searle's chinese room: Putting the Turing test to bed.   (Google | More links)
    Abstract: Searle’s discussions over the years 1980-2004 of the implications of his “Chinese Room” Gedanken experiment are frustrating because they proceed from a correct assertion: (1) “Instantiating a computer program is never by itself a sufficient condition of intentionality;” and an incorrect assertion: (2) “The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program.” In this article, I describe how to construct a Gedanken zombie Chinese Room program that will pass the Turing test and at the same time unambiguously demonstrates the correctness of (1). I then describe how to construct a Gedanken Chinese brain program that will pass the Turing test, has a mind, and understands Chinese, thus demonstrating that (2) is incorrect. Searle’s instantiation of this program can and does produce intentionality. Searle’s longstanding ignorance of Chinese is simply irrelevant and always has been. I propose a truce and a plan for further exploration
    Damper, Robert I. (2004). The chinese room argument--dead but not yet buried. Journal of Consciousness Studies 11 (5-6):159-169.   (Cited by 2 | Google | More links)
    Damper, Robert I. (2006). The logic of Searle's chinese room argument. Minds and Machines 16 (2):163-183.   (Google | More links)
    Abstract: John Searle’s Chinese room argument (CRA) is a celebrated thought experiment designed to refute the hypothesis, popular among artificial intelligence (AI) scientists and philosophers of mind, that “the appropriately programmed computer really is a mind”. Since its publication in 1980, the CRA has evoked an enormous amount of debate about its implications for machine intelligence, the functionalist philosophy of mind, theories of consciousness, etc. Although the general consensus among commentators is that the CRA is flawed, and notwithstanding the popularity of the systems reply in some quarters, there is remarkably little agreement on exactly how and why it is flawed. A newcomer to the controversy could be forgiven for thinking that the bewildering collection of diverse replies to Searle betrays a tendency to unprincipled, ad hoc argumentation and, thereby, a weakness in the opposition’s case. In this paper, treating the CRA as a prototypical example of a ‘destructive’ thought experiment, I attempt to set it in a logical framework (due to Sorensen), which allows us to systematise and classify the various objections. Since thought experiments are always posed in narrative form, formal logic by itself cannot fully capture the controversy. On the contrary, much also hinges on how one translates between the informal everyday language in which the CRA was initially framed and formal logic and, in particular, on the specific conception(s) of possibility that one reads into the logical formalism
    Dennett, Daniel C. (1987). Fast thinking. In The Intentional Stance. MIT Press.   (Cited by 12 | Annotation | Google)
    Double, Richard (1984). Reply to C. A. Fields's Double on Searle's Chinese Room. Nature and System 6 (March):55-58.   (Google)
    Double, Richard (1983). Searle, programs and functionalism. Nature and System 5 (March-June):107-14.   (Cited by 3 | Annotation | Google)
    Dyer, Michael G. (1990). Finding lost minds. Journal of Experimental and Theoretical Artificial Intelligence 2:329-39.   (Cited by 3 | Annotation | Google | More links)
    Dyer, Michael G. (1990). Intentionality and computationalism: Minds, machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2:303-19.   (Cited by 23 | Annotation | Google | More links)
    Fields, Christopher A. (1984). Double on Searle's chinese room. Nature and System 6 (March):51-54.   (Annotation | Google)
    Fisher, Justin C. (1988). The wrong stuff: Chinese rooms and the nature of understanding. Philosophical Investigations 11 (October):279-99.   (Cited by 2 | Google)
    Fodor, Jerry A. (1991). Yin and Yang in the chinese room. In D. Rosenthal (ed.), The Nature of Mind. Oxford University Press.   (Cited by 5 | Annotation | Google)
    Fulda, Joseph S. (2006). A Plea for Automated Language-to-Logical-Form Converters. RASK: Internationalt tidsskrift for sprog og kommunikation 24:87-102.   (Google)
    Millikan, Ruth G. (2005). Some reflections on the theory theory - simulation theory discussion. In Susan Hurley & Nick Chater (eds.), Perspectives on Imitation: From Mirror Neurons to Memes, Vol II. MIT Press.   (Google)
    Globus, Gordon G. (1991). Deconstructing the chinese room. Journal of Mind and Behavior 12 (3):377-91.   (Cited by 4 | Google)
    Gozzano, Simone (1995). Consciousness and understanding in the chinese room. Informatica 19:653-56.   (Cited by 1 | Google)
    Abstract: In this paper I submit that the “Chinese room” argument rests on the assumption that understanding a sentence necessarily implies being conscious of its content. However, this assumption can be challenged by showing that two notions of consciousness come into play, one to be found in AI, the other in Searle’s argument, and that the former is an essential condition for the notion used by Searle. If Searle discards the first, he not only has trouble explaining how we can learn a language but finds the validity of his own argument in jeopardy
    Gozzano, Simone (1997). The chinese room argument: Consciousness and understanding. In Matjaz Gams, M. Paprzycki & X. Wu (eds.), Mind Versus Computer: Were Dreyfus and Winograd Right? Amsterdam: IOS Press.   (Google | More links)
    Hanna, Patricia (1985). Causal powers and cognition. Mind 94 (373):53-63.   (Cited by 2 | Annotation | Google | More links)
    Harrison, David (1997). Connectionism hits the chinese gym. Connexions 1.   (Google)
    Harnad, Stevan (1990). Lost in the hermeneutic hall of mirrors. Journal of Experimental and Theoretical Artificial Intelligence 2:321-27.   (Annotation | Google | More links)
    Abstract: Critique of Computationalism as merely projecting hermeneutics (i.e., meaning originating from the mind of an external interpreter) onto otherwise intrinsically meaningless symbols. Projecting an interpretation onto a symbol system results in its being reflected back, in a spuriously self-confirming way
    Harnad, Stevan (1989). Minds, machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence 1 (4):5-25.   (Cited by 113 | Annotation | Google | More links)
    Abstract: Searle's celebrated Chinese Room Argument has shaken the foundations of Artificial Intelligence. Many refutations have been attempted, but none seem convincing. This paper is an attempt to sort out explicitly the assumptions and the logical, methodological and empirical points of disagreement. Searle is shown to have underestimated some features of computer modeling, but the heart of the issue turns out to be an empirical question about the scope and limits of the purely symbolic (computational) model of the mind. Nonsymbolic modeling turns out to be immune to the Chinese Room Argument. The issues discussed include the Total Turing Test, modularity, neural modeling, robotics, causality and the symbol-grounding problem
    Harnad, Stevan (2003). Minds, machines, and Searle 2: What's right and wrong about the chinese room argument. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 4 | Google | More links)
    Abstract: When in 1979 Zenon Pylyshyn, associate editor of Behavioral and Brain Sciences (BBS, a peer commentary journal which I edit) informed me that he had secured a paper by John Searle with the unprepossessing title of [XXXX], I cannot say that I was especially impressed; nor did a quick reading of the brief manuscript -- which seemed to be yet another tedious "Granny Objection"[1] about why/how we are not computers -- do anything to upgrade that impression
    Harnad, Stevan (2001). Rights and wrongs of Searle's chinese room argument. In M. Bishop & J. Preston (eds.), Essays on Searle's Chinese Room Argument. Oxford University Press.   (Google | More links)
    Abstract: "in an academic generation a little overaddicted to "politesse," it may be worth saying that violent destruction is not necessarily worthless and futile. Even though it leaves doubt about the right road for London, it helps if someone rips up, however violently, a
    Harnad, Stevan, Searle's chinese room argument.   (Google)
    Abstract: Computationalism. According to computationalism, to explain how the mind works, cognitive science needs to find out what the right computations are -- the same ones that the brain performs in order to generate the mind and its capacities. Once we know that, then every system that performs those computations will have those mental states: Every computer that runs the mind's program will have a mind, because computation is hardware independent : Any hardware that is running the right program has the right computational states
    Harnad, Stevan (2001). What's wrong and right about Searle's chinese room argument? In Michael A. Bishop & John M. Preston (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 1 | Google | More links)
    Abstract: Searle's Chinese Room Argument showed a fatal flaw in computationalism (the idea that mental states are just computational states) and helped usher in the era of situated robotics and symbol grounding (although Searle himself thought neuroscience was the only correct way to understand the mind)
    Hauser, Larry (online). Chinese room argument. Internet Encyclopedia of Philosophy.   (Google)
    Hauser, Larry (2003). Nixin' goes to china. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 3 | Google)
    Abstract: The intelligent-seeming deeds of computers are what occasion philosophical debate about artificial intelligence (AI) in the first place. Since evidence of AI is not bad, arguments against seem called for. John Searle's Chinese Room Argument (1980a, 1984, 1990, 1994) is among the most famous and long-running would-be answers to the call. Surprisingly, both the original thought experiment (1980a) and Searle's later would-be formalizations of the embedding argument (1984, 1990) are quite unavailing against AI proper (claims that computers do or someday will think ). Searle lately even styles it a "misunderstanding" (1994, p. 547) to think the argument was ever so directed! The Chinese room is now advertised to target Computationalism (claims that computation is what thought essentially is ) exclusively. Despite its renown, the Chinese Room Argument is totally ineffective even against this target
    Hauser, Larry (1993). Searle's Chinese Box: The Chinese Room Argument and Artificial Intelligence. Dissertation, University of Michigan   (Cited by 11 | Google)
    Hauser, Larry (1997). Searle's chinese box: Debunking the chinese room argument. Minds and Machines 7 (2):199-226.   (Cited by 17 | Google | More links)
    Abstract: John Searle's Chinese room argument is perhaps the most influential and widely cited argument against artificial intelligence (AI). Understood as targeting AI proper -- claims that computers can think or do think -- Searle's argument, despite its rhetorical flash, is logically and scientifically a dud. Advertised as effective against AI proper, the argument, in its main outlines, is an ignoratio elenchi. It musters persuasive force fallaciously by indirection fostered by equivocal deployment of the phrase "strong AI" and reinforced by equivocation on the phrase "causal powers (at least) equal to those of brains." On a more carefully crafted understanding -- understood just to target metaphysical identification of thought with computation ("Functionalism" or "Computationalism") and not AI proper -- the argument is still unsound, though more interestingly so. It's unsound in ways difficult for high church -- "someday my prince of an AI program will come" -- believers in AI to acknowledge without undermining their high church beliefs. The ad hominem bite of Searle's argument against the high church persuasions of so many cognitive scientists, I suggest, largely explains the undeserved repute this really quite disreputable argument enjoys among them
    Hauser, Larry (online). Searle's chinese room argument. Field Guide to the Philosophy of Mind.   (Google)
    Abstract: John Searle's (1980a) thought experiment and associated (1984a) argument is one of the best known and widely credited counters to claims of artificial intelligence (AI), i.e., to claims that computers _do_ or at least _can_ (roughly, someday will) think. According to Searle's original presentation, the argument is based on two truths: _brains cause minds_, and _syntax doesn't suffice for semantics_. Its target, Searle dubs "strong AI": "according to strong AI," according to Searle, "the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really _is_ a mind in the sense that computers given the right programs can be literally said to _understand_ and have other cognitive states" (1980a, p. 417). Searle contrasts "strong AI" to "weak AI". According to weak AI, according to Searle, computers just
    Hauser, Larry (online). The chinese room argument.   (Cited by 6 | Google)
    Abstract: _The Chinese room argument_ - John Searle's (1980a) thought experiment and associated (1984) derivation - is one of the best known and widely credited counters to claims of artificial intelligence (AI), i.e., to claims that computers _do_ or at least _can_ (someday might) think. According to Searle's original presentation, the argument is based on two truths: _brains cause minds_, and _syntax doesn't suffice for semantics_. Its target, Searle dubs "strong AI": "according to strong AI," according to Searle, "the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really _is_ a mind in the sense that computers given the right programs can be literally said to _understand_ and have other cognitive states" (1980a, p. 417). Searle contrasts "strong AI" to "weak AI". According to weak AI, according to Searle, computers just
    Hayes, Patrick; Harnad, Stevan; Perlis, Donald R. & Block, Ned (1992). Virtual symposium on virtual mind. Minds and Machines 2 (3):217-238.   (Cited by 21 | Annotation | Google | More links)
    Abstract: When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called "virtual" systems. If such a virtual system is interpretable as if it had a mind, is such a "virtual mind" real? This is the question addressed in this "virtual" symposium, originally conducted electronically among four cognitive scientists: Donald Perlis, a computer scientist, argues that according to the computationalist thesis, virtual minds are real and hence Searle's Chinese Room Argument fails, because if Searle memorized and executed a program that could pass the Turing Test in Chinese he would have a second, virtual, Chinese-understanding mind of which he was unaware (as in multiple personality). Stevan Harnad, a psychologist, argues that Searle's Argument is valid, virtual minds are just hermeneutic overinterpretations, and symbols must be grounded in the real world of objects, not just the virtual world of interpretations. Computer scientist Patrick Hayes argues that Searle's Argument fails, but because Searle does not really implement the program: A real implementation must not be homuncular but mindless and mechanical, like a computer. Only then can it give rise to a mind at the virtual level. Philosopher Ned Block suggests that there is no reason a mindful implementation would not be a real one
    Hofstadter, Douglas R. (1981). Reflections on Searle. In Douglas R. Hofstadter & Daniel C. Dennett (eds.), The Mind's I. Basic Books.   (Cited by 1 | Annotation | Google)
    Jacquette, Dale (1989). Adventures in the chinese room. Philosophy and Phenomenological Research 49 (June):605-23.   (Cited by 5 | Annotation | Google | More links)
    Jacquette, Dale (1990). Fear and loathing (and other intentional states) in Searle's chinese room. Philosophical Psychology 3 (2 & 3):287-304.   (Annotation | Google)
    Abstract: John R. Searle's problem of the Chinese Room poses an important philosophical challenge to the foundations of strong artificial intelligence, and functionalist, cognitivist, and computationalist theories of mind. Searle has recently responded to three categories of criticisms of the Chinese Room and the consequences he attempts to conclude from it, redescribing the essential features of the problem, and offering new arguments about the syntax-semantics gap it is intended to demonstrate. Despite Searle's defense, the Chinese Room remains ineffective as a counterexample, and poses no real threat to artificial intelligence or mechanist philosophy of mind. The thesis that intentionality is a primitive irreducible relation exemplified by biological phenomena is preferred in opposition to Searle's contrary claim that intentionality is a biological phenomenon exhibiting abstract properties
    Jacquette, Dale (1989). Searle's intentionality thesis. Synthese 80 (August):267-75.   (Cited by 1 | Annotation | Google | More links)
    Jahren, Neal (1990). Can semantics be syntactic? Synthese 82 (3):309-28.   (Cited by 3 | Annotation | Google | More links)
    Abstract:   The author defends John R. Searle's Chinese Room argument against a particular objection made by William J. Rapaport called the Korean Room. Foundational issues such as the relationship of strong AI to human mentality and the adequacy of the Turing Test are discussed. Through undertaking a Gedankenexperiment similar to Searle's but which meets new specifications given by Rapaport for an AI system, the author argues that Rapaport's objection to Searle does not stand and that Rapaport's arguments seem convincing only because they assume the foundations of strong AI at the outset
    Kaernbach, C. (2005). No virtual mind in the chinese room. Journal of Consciousness Studies 12 (11):31-42.   (Google | More links)
    Kentridge, Robert W. (2001). Computation, chaos and non-deterministic symbolic computation: The chinese room problem solved? Psycoloquy 12 (50).   (Cited by 6 | Google | More links)
    King, D. (2001). Entering the chinese room with Castaneda's principle (p). Philosophy Today 45 (2):168-174.   (Google)
    Kober, Michael (1998). Kripkenstein meets the chinese room: Looking for the place of meaning from a natural point of view. Inquiry 41 (3):317-332.   (Cited by 2 | Google | More links)
    Abstract: The discussion between Searle and the Churchlands over whether or not symbol-manipulating computers generate semantics will be confronted both with the rule-sceptical considerations of Kripke/Wittgenstein and with Wittgenstein's private-language argument in order to show that the discussion focuses on the wrong place: meaning does not emerge in the brain. That a symbol means something should rather be conceived as a social fact, depending on a mutual imputation of linguistic competence of the participants of a linguistic practice to one another. The alternative picture will finally be applied to small children, animals, and computers as well
    Korb, Kevin B. (1991). Searle's AI program. Journal of Experimental and Theoretical Artificial Intelligence 3:283-96.   (Cited by 6 | Annotation | Google | More links)
    Kugel, Peter (2004). The chinese room is a trick. Behavioral and Brain Sciences 27 (1):153-154.   (Google)
    Abstract: To convince us that computers cannot have mental states, Searle (1980) imagines a “Chinese room” that simulates a computer that “speaks” Chinese and asks us to find the understanding in the room. It's a trick. There is no understanding in the room, not because computers can't have it, but because the room's computer-simulation is defective. Fix it and understanding appears. Abracadabra!
    Law, Diane (online). Searle, subsymbolic functionalism, and synthetic intelligence.   (Cited by 1 | Google | More links)
    Leslie, Alan M. & Scholl, Brian J. (1999). Modularity, development and 'theory of mind'. Mind and Language 14 (1).   (Google | More links)
    Abstract: Psychologists and philosophers have recently been exploring whether the mechanisms which underlie the acquisition of ‘theory of mind’ (ToM) are best characterized as cognitive modules or as developing theories. In this paper, we attempt to clarify what a modular account of ToM entails, and why it is an attractive type of explanation. Intuitions and arguments in this debate often turn on the role of development: traditional research on ToM focuses on various developmental sequences, whereas cognitive modules are thought to be static and ‘anti-developmental’. We suggest that this mistaken view relies on an overly limited notion of modularity, and we explore how ToM might be grounded in a cognitive module and yet still afford development. Modules must ‘come on-line’, and even fully developed modules may still develop internally, based on their constrained input. We make these points concrete by focusing on a recent proposal to capture the development of ToM in a module via parameterization
    Maloney, J. Christopher (1987). The right stuff. Synthese 70 (March):349-72.   (Cited by 13 | Annotation | Google | More links)
    McCarthy, John (online). John Searle's chinese room argument.   (Google)
    Abstract: John Searle begins his (1990) "Consciousness, Explanatory Inversion and Cognitive Science" with: "Ten years ago in this journal I published an article (Searle, 1980a and 1980b) criticising what I call Strong AI, the view that for a system to have mental states it is sufficient for the system to implement the right sort of program with right inputs and outputs. Strong AI is rather easy to refute and the basic argument can be summarized in one sentence: _a system, me for example, could implement a program for understanding Chinese, for example, without understanding any Chinese at all._ This idea, when developed, became known as the Chinese Room Argument." The Chinese Room Argument can be refuted in one sentence
    Melnyk, Andrew (1996). Searle's abstract argument against strong AI. Synthese 108 (3):391-419.   (Cited by 6 | Google | More links)
    Abstract:   Discussion of Searle's case against strong AI has usually focused upon his Chinese Room thought-experiment. In this paper, however, I expound and then try to refute what I call his abstract argument against strong AI, an argument which turns upon quite general considerations concerning programs, syntax, and semantics, and which seems not to depend on intuitions about the Chinese Room. I claim that this argument fails, since it assumes one particular account of what a program is. I suggest an alternative account which, however, cannot play a role in a Searle-type argument, and argue that Searle gives no good reason for favoring his account, which allows the abstract argument to work, over the alternative, which doesn't. This response to Searle's abstract argument also, incidentally, enables the Robot Reply to the Chinese Room to defend itself against objections Searle makes to it
    Mitchell, Ethan (2008). The real Chinese Room. Philica 125.   (Google)
    Moor, James H. (1988). The pseudorealization fallacy and the chinese room argument. In James H. Fetzer (ed.), Aspects of AI. D.   (Cited by 5 | Annotation | Google)
    Moural, Josef (2003). The chinese room argument. In John Searle. Cambridge: Cambridge University Press.   (Cited by 2 | Google)
    Narayanan, Ajit (1991). The chinese room argument. In Logical Foundations. New York: St Martin's Press.   (Google)
    Newton, Natika (1989). Machine understanding and the chinese room. Philosophical Psychology 2 (2):207-15.   (Cited by 2 | Annotation | Google)
    Abstract: John Searle has argued that one can imagine embodying a machine running any computer program without understanding the symbols, and hence that purely computational processes do not yield understanding. The disagreement this argument has generated stems, I hold, from ambiguity in talk of 'understanding'. The concept is analysed as a relation between subjects and symbols having two components: a formal and an intentional. The central question, then, becomes whether a machine could possess the intentional component with or without the formal component. I argue that the intentional state of a symbol's being meaningful to a subject is a functionally definable relation between the symbol and certain past and present states of the subject, and that a machine could bear this relation to a symbol. I sketch a machine which could be said to possess, in primitive form, the intentional component of understanding. Even if the machine, in lacking consciousness, lacks full understanding, it contributes to a theory of understanding and constitutes a counterexample to the Chinese Room argument
    Obermeier, K. K. (1983). Wittgenstein on language and artificial intelligence: The chinese-room thought-experiment revisited. Synthese 56 (September):339-50.   (Cited by 1 | Google | More links)
    Penrose, Roger (2003). Consciousness, computation, and the chinese room. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 2 | Google)
    Pfeifer, Karl (1992). Searle, strong AI, and two ways of sorting cucumbers. Journal of Philosophical Research 17:347-50.   (Cited by 1 | Google)
    Preston, John M. & Bishop, Michael A. (eds.) (2002). Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 21 | Google)
    Abstract: The most famous challenge to computational cognitive science and artificial intelligence is the philosopher John Searle's "Chinese Room" argument.
    Proudfoot, Diane (2003). Wittgenstein's anticipation of the chinese room. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Google)
    Rapaport, William J. (2006). How Helen Keller used syntactic semantics to escape from a chinese room. Minds and Machines 16 (4).   (Google | More links)
    Abstract:   A computer can come to understand natural language the same way Helen Keller did: by using “syntactic semantics”—a theory of how syntax can suffice for semantics, i.e., how semantics for natural language can be provided by means of computational symbol manipulation. This essay considers real-life approximations of Chinese Rooms, focusing on Helen Keller’s experiences growing up deaf and blind, locked in a sort of Chinese Room yet learning how to communicate with the outside world. Using the SNePS computational knowledge-representation system, the essay analyzes Keller’s belief that learning that “everything has a name” was the key to her success, enabling her to “partition” her mental concepts into mental representations of: words, objects, and the naming relations between them. It next looks at Herbert Terrace’s theory of naming, which is akin to Keller’s, and which only humans are supposed to be capable of. The essay suggests that computers at least, and perhaps non-human primates, are also capable of this kind of naming
    Rapaport, William J. (1984). Searle's experiments with thought. Philosophy of Science 53 (June):271-9.   (Cited by 14 | Annotation | Google | More links)
    Rey, Georges (2003). Searle's misunderstandings of functionalism and strong AI. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Google)
    Rey, Georges (1986). What's really going on in Searle's 'chinese room'. Philosophical Studies 50 (September):169-85.   (Cited by 17 | Annotation | Google | More links)
    Roberts, Lawrence D. (1990). Searle's extension of the chinese room to connectionist machines. Journal of Experimental and Theoretical Artificial Intelligence 2:185-7.   (Cited by 4 | Annotation | Google)
    Rodych, Victor (2003). Searle freed of every flaw. Acta Analytica 18 (30-31):161-175.   (Google | More links)
    Abstract: Strong AI presupposes (1) that Super-Searle (henceforth ‘Searle’) comes to know that the symbols he manipulates are meaningful, and (2) that there cannot be two or more semantical interpretations for the system of symbols that Searle manipulates such that the set of rules constitutes a language comprehension program for each interpretation. In this paper, I show that Strong AI is false and that presupposition #1 is false, on the assumption that presupposition #2 is true. The main argument of the paper constructs a second program, isomorphic to Searle’s, to show that if someone, say Dan, runs this isomorphic program, he cannot possibly come to know what its mentioned symbols mean because they do not mean anything to anybody. Since Dan and Searle do exactly the same thing, except that the symbols they manipulate are different, neither Dan nor Searle can possibly know whether the symbols they manipulate are meaningful (let alone what they mean, if they are meaningful). The remainder of the paper responds to an anticipated Strong AI rejoinder, which, I believe, is a necessary extension of Strong AI
    Russow, L-M. (1984). Unlocking the chinese room. Nature and System 6 (December):221-8.   (Cited by 4 | Annotation | Google)
    Searle, John R. (1990). Is the brain's mind a computer program? Scientific American 262 (1):26-31.   (Cited by 178 | Annotation | Google | More links)
    Searle, John R. (1987). Minds and brains without programs. In Colin Blakemore (ed.), Mindwaves. Blackwell.   (Cited by 27 | Annotation | Google)
    Searle, John R. (1980). Minds, brains and programs. Behavioral and Brain Sciences 3:417-57.   (Cited by 1532 | Annotation | Google | More links)
    Abstract: What psychological and philosophical significance should we attach to recent efforts at computer simulations of human cognitive capacities? In answering this question, I find it useful to distinguish what I will call "strong" AI from "weak" or "cautious" AI (artificial intelligence). According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to..
    Searle, John R. (1984). Minds, Brains and Science. Harvard University Press.   (Cited by 515 | Annotation | Google)
    Searle, John R. (1989). Reply to Jacquette. Philosophy and Phenomenological Research 49 (4):701-8.   (Cited by 4 | Annotation | Google | More links)
    Searle, John R. (1989). Reply to Jacquette's adventures in the chinese room. Philosophy and Phenomenological Research 49 (June):701-707.   (Google)
    Searle, John R. (2002). Twenty-one years in the chinese room. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 7 | Google)
    Seidel, Asher (1989). Chinese rooms A, B and C. Pacific Philosophical Quarterly 20 (June):167-73.   (Cited by 1 | Annotation | Google)
    Seidel, Asher (1988). Searle on the biological basis of cognition. Analysis 48 (January):26-28.   (Google)
    Shaffer, Michael J. (2009). A logical hole in the chinese room. Minds and Machines 19 (2):229-235.   (Google)
    Abstract: Searle’s Chinese Room Argument (CRA) has been the object of great interest in the philosophy of mind, artificial intelligence and cognitive science since its initial presentation in ‘Minds, Brains and Programs’ in 1980. It is by no means an overstatement to assert that it has been a main focus of attention for philosophers and computer scientists of many stripes. It is then especially interesting to note that relatively little has been said about the detailed logic of the argument, whatever significance Searle intended CRA to have. The problem with the CRA is that it involves a very strong modal claim, the truth of which is both unproved and highly questionable. So it will be argued here that the CRA does not prove what it was intended to prove
    Sharvy, Richard (1985). Searle on programs and intentionality. Canadian Journal of Philosophy 11:39-54.   (Annotation | Google)
    Simon, Herbert A. & Eisenstadt, Stuart A. (2003). A chinese room that understands. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 3 | Google)
    Sloman, Aaron (1986). Did Searle attack strong strong AI or weak strong AI? In Artificial Intelligence and its Applications. Chichester.   (Cited by 3 | Google | More links)
    Sprevak, Mark D. (online). Algorithms and the chinese room.   (Google)
    Suits, David B. (1989). Out of the chinese room. Computing and Philosophy Newsletter 4:1-7.   (Cited by 2 | Annotation | Google)
    Tanaka, Koji (2004). Minds, programs, and chinese philosophers: A chinese perspective on the chinese room. Sophia 43 (1):61-72.   (Google)
    Abstract: The paper is concerned with John Searle’s famous Chinese room argument. Despite being objected to by some, Searle’s Chinese room argument appears very appealing. This is because Searle’s argument is based on an intuition about the mind that ‘we’ all seem to share. Ironically, however, Chinese philosophers don’t seem to share this same intuition. The paper begins by analysing Searle’s Chinese room argument. It then introduces what can be seen as the (implicit) Chinese view of the mind. Lastly, it demonstrates a conceptual difference between Chinese and Western philosophy with respect to the notion of mind. Thus, it is shown that one must carefully attend to the presuppositions underlying Chinese philosophising in interpreting Chinese philosophers
    Taylor, John G. (2003). Do virtual actions avoid the chinese room? In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 4 | Google)
    Teng, Norman Y. (2000). A cognitive analysis of the chinese room argument. Philosophical Psychology 13 (3):313-24.   (Cited by 1 | Google | More links)
    Abstract: Searle's Chinese room argument is analyzed from a cognitive point of view. The analysis is based on a newly developed model of conceptual integration, the many space model proposed by Fauconnier and Turner. The main point of the analysis is that the central inference constructed in the Chinese room scenario is a result of a dynamic, cognitive activity of conceptual blending, with metaphor defining the basic features of the blending. Two important consequences follow: (1) Searle's recent contention that syntax is not intrinsic to physics turns out to be a slightly modified version of the old Chinese room argument; and (2) the argument itself is still open to debate. It is persuasive but not conclusive, and at bottom it is a topological mismatch in the metaphoric conceptual integration that is responsible for the non-conclusive character of the Chinese room argument
    Thagard, Paul R. (1986). The emergence of meaning: An escape from Searle's chinese room. Behaviorism 14 (3):139-46.   (Cited by 5 | Annotation | Google)
    Wakefield, Jerome C. (2003). The chinese room argument reconsidered: Essentialism, indeterminacy, and strong AI. Minds and Machines 13 (2):285-319.   (Cited by 3 | Google | More links)
    Abstract: I argue that John Searle's (1980) influential Chinese room argument (CRA) against computationalism and strong AI survives existing objections, including Block's (1998) internalized systems reply, Fodor's (1991b) deviant causal chain reply, and Hauser's (1997) unconscious content reply. However, a new "essentialist" reply I construct shows that the CRA as presented by Searle is an unsound argument that relies on a question-begging appeal to intuition. My diagnosis of the CRA relies on an interpretation of computationalism as a scientific theory about the essential nature of intentional content; such theories often yield non-intuitive results in non-standard cases, and so cannot be judged by such intuitions. However, I further argue that the CRA can be transformed into a potentially valid argument against computationalism simply by reinterpreting it as an indeterminacy argument that shows that computationalism cannot explain the ordinary distinction between semantic content and sheer syntactic manipulation, and thus cannot be an adequate account of content. This conclusion admittedly rests on the arguable but plausible assumption that thought content is interestingly determinate. I conclude that the viability of computationalism and strong AI depends on their addressing the indeterminacy objection, but that it is currently unclear how this objection can be successfully addressed
    Warwick, Kevin (2002). Alien encounters. In Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford: Clarendon Press.   (Google)
    Weiss, Timothy (1990). Closing the chinese room. Ratio 3 (2):165-81.   (Cited by 6 | Annotation | Google | More links)
    Wheeler, M. (2003). Changes in the rules: Computers, dynamic systems, and Searle. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Google)
    Whitmer, J. M. (1983). Intentionality, artificial intelligence, and the causal powers of the brain. Auslegung 10:194-210.   (Annotation | Google)

    6.1d Machine Consciousness

    Tson, M. E. (ms). A Brief Explanation of Consciousness.   (Google)
    Abstract: This short paper (4 pages) demonstrates how subjective experience, language, and consciousness can be explained in terms of abilities we share with the simplest of creatures, specifically the ability to detect, react to, and associate various aspects of the world.
    Adams, William Y. (online). Intersubjective transparency and artificial consciousness.   (Google)
    Adams, William Y. (2004). Machine consciousness: Plausible idea or semantic distortion? Journal of Consciousness Studies 11 (9):46-56.   (Cited by 1 | Google)
    Aleksander, Igor L. & Dunmall, B. (2003). Axioms and tests for the presence of minimal consciousness in agents I: Preamble. Journal of Consciousness Studies 10 (4):7-18.   (Cited by 13 | Google)
    Aleksander, Igor L. (2007). Machine consciousness. In Max Velmans & Susan Schneider (eds.), The Blackwell Companion to Consciousness. Blackwell.   (Cited by 1 | Google | More links)
    Aleksander, Igor L. (2006). Machine consciousness. In Steven Laureys (ed.), Boundaries of Consciousness. Elsevier.   (Cited by 1 | Google | More links)
    Amoroso, Richard L. (1997). The theoretical foundations for engineering a conscious quantum computer. In M. Gams, M. Paprzycki & X. Wu (eds.), Mind Versus Computer: Were Dreyfus and Winograd Right? Amsterdam: IOS Press.   (Cited by 5 | Google | More links)
    Angel, Leonard (1994). Am I a computer? In Eric Dietrich (ed.), Thinking Computers and Virtual Persons. Academic Press.   (Google)
    Angel, Leonard (1989). How to Build a Conscious Machine. Westview Press.   (Cited by 3 | Google)
    Arrabales, R. & Sanchis, A. (forthcoming). Applying machine consciousness models in autonomous situated agents. Pattern Recognition Letters.   (Google)
    Arrington, Robert L. (1999). Machines, consciousness, and thought. Idealistic Studies 29 (3):231-243.   (Google)
    Arrabales, R.; Ledezma, A. & Sanchis, A. (online). Modelling consciousness for autonomous robot exploration. Lecture Notes in Computer Science.   (Google)
    Aydede, Murat & Guzeldere, Guven (2000). Consciousness, intentionality, and intelligence: Some foundational issues for artificial intelligence. Journal Of Experimental and Theoretical Artificial Intelligence 12 (3):263-277.   (Cited by 6 | Google | More links)
    Bair, Puran K. (1981). Computer metaphors for consciousness. In The Metaphors Of Consciousness. New York: Plenum Press.   (Google)
    Barnes, E. (1991). The causal history of computational activity: Maudlin and Olympia. Journal of Philosophy 88 (6):304-16.   (Cited by 5 | Annotation | Google | More links)
    Bell, John L. (online). Algorithmicity and consciousness.   (Google)
    Abstract: Why should one believe that conscious awareness is solely the result of organizational complexity? What is the connection between consciousness and combinatorics: transformation of quantity into quality? The claim that the former is reducible to the latter seems unconvincing—as unlike as chalk and cheese! In his book [1] Penrose is at least attempting to compare like with like: the enigma of consciousness with the progress of physics
    Birnbacher, Dieter (1995). Artificial consciousness. In Thomas Metzinger (ed.), Conscious Experience. Ferdinand Schoningh.   (Google)
    Bonzon, Pierre (2003). Conscious Behavior through Reflexive Dialogs. In A. Günter, R. Kruse & B. Neumann (eds.), Lecture Notes in Artificial Intelligence. Springer.   (Google)
    Abstract: We consider the problem of executing conscious behavior, i.e., of driving an agent’s actions and of allowing it, at the same time, to run concurrent processes reflecting on these actions. Toward this end, we express a single agent’s plans as reflexive dialogs in a multi-agent system defined by a virtual machine. We extend this machine’s planning language by introducing two specific operators for reflexive dialogs, i.e., conscious and caught, for monitoring beliefs and actions, respectively. The possibility to use the same language both to drive a machine and to establish a reflexive communication within the machine itself stands as a key feature of our model.
    Bringsjord, Selmer (1994). Could, how could we tell if, and should - androids have inner lives? In Kenneth M. Ford, C. Glymour & Patrick Hayes (eds.), Android Epistemology. MIT Press.   (Cited by 16 | Google)
    Bringsjord, Selmer (2004). On building robot persons: Response to Zlatev. Minds and Machines 14 (3):381-385.   (Google | More links)
    Abstract:   Zlatev offers surprisingly weak reasoning in support of his view that robots with the right kind of developmental histories can have meaning. We ought nonetheless to praise Zlatev for an impressionistic account of how attending to the psychology of human development can help us build robots that appear to have intentionality
    Bringsjord, Selmer (2007). Offer: One billion dollars for a conscious robot; if you're honest, you must decline. Journal of Consciousness Studies 14 (7):28-43.   (Cited by 1 | Google | More links)
    Abstract: You are offered one billion dollars to 'simply' produce a proof-of-concept robot that has phenomenal consciousness -- in fact, you can receive a deliciously large portion of the money up front, by simply starting a three-year work plan in good faith. Should you take the money and commence? No. I explain why this refusal is in order, now and into the foreseeable future
    Bringsjord, Selmer (1992). What Robots Can and Can't Be. Kluwer.   (Cited by 85 | Google | More links)
    Brockmeier, Scott (1997). Computational architecture and the creation of consciousness. The Dualist 4.   (Cited by 2 | Google)
    Brown, Geoffrey (1989). Minds, Brains And Machines. St Martin's Press.   (Cited by 1 | Google)
    Buttazzo, G. (2001). Artificial consciousness: Utopia or real possibility? Computer 34:24-30.   (Cited by 17 | Google | More links)
    Caplain, G. (1995). Is consciousness a computational property? Informatica 19:615-19.   (Cited by 2 | Google | More links)
    Caws, Peter (1988). Subjectivity in the machine. Journal for the Theory of Social Behaviour 18 (September):291-308.   (Google | More links)
    Chandler, Keith A. (2002). Artificial intelligence and artificial consciousness. Philosophia 31 (1):32-46.   (Google)
    Chella, Antonio & Manzotti, Riccardo (2007). Artificial Consciousness. Imprint Academic.   (Cited by 1 | Google)
    Cherry, Christopher (1989). Reply--the possibility of computers becoming persons: A response to Dolby. Social Epistemology 3 (4):337-348.   (Google)
    Clack, Robert J. (1968). The myth of the conscious robot. Personalist 49:351-369.   (Google)
    Coles, L. S. (1993). Engineering machine consciousness. AI Expert 8:34-41.   (Google)
    Cotterill, Rodney M. J. (2003). Cyberchild: A simulation test-bed for consciousness studies. Journal of Consciousness Studies 10 (4):31-45.   (Cited by 5 | Google)
    Danto, Arthur C. (1960). On consciousness in machines. In Sidney Hook (ed.), Dimensions of Mind. New York University Press.   (Cited by 5 | Google)
    D'Aquili, Eugene G. & Newberg, Andrew B. (1996). Consciousness and the machine. Zygon 31 (2):235-52.   (Cited by 4 | Google)
    Dennett, Daniel C. (1997). Consciousness in Human and Robot Minds. In M. Ito, Y. Miyashita & Edmund T. Rolls (eds.), Cognition, Computation and Consciousness. Oxford University Press.   (Cited by 12 | Google | More links)
    Abstract: The best reason for believing that robots might some day become conscious is that we human beings are conscious, and we are a sort of robot ourselves. That is, we are extraordinarily complex self-controlling, self-sustaining physical mechanisms, designed over the eons by natural selection, and operating according to the same well-understood principles that govern all the other physical processes in living things: digestive and metabolic processes, self-repair and reproductive processes, for instance. It may be wildly over-ambitious to suppose that human artificers can repeat Nature's triumph, with variations in material, form, and design process, but this is not a deep objection. It is not as if a conscious machine contradicted any fundamental laws of nature, the way a perpetual motion machine does. Still, many skeptics believe--or in any event want to believe--that it will never be done. I wouldn't wager against them, but my reasons for skepticism are mundane, economic reasons, not theoretical reasons
    Dennett, Daniel C. (1994). The practical requirements for making a conscious robot. Philosophical Transactions of the Royal Society 349:133-46.   (Cited by 25 | Google | More links)
    Abstract: Arguments about whether a robot could ever be conscious have been conducted up to now in the factually impoverished arena of what is possible "in principle." A team at MIT of which I am a part is now embarking on a longterm project to design and build a humanoid robot, Cog, whose cognitive talents will include speech, eye-coordinated manipulation of objects, and a host of self-protective, self-regulatory and self-exploring activities. The aim of the project is not to make a conscious robot, but to make a robot that can interact with human beings in a robust and versatile manner in real time, take care of itself, and tell its designers things about itself that would otherwise be extremely difficult if not impossible to determine by examination. Many of the details of Cog's "neural" organization will parallel what is known (or presumed known) about their counterparts in the human brain, but the intended realism of Cog as a model is relatively coarse-grained, varying opportunistically as a function of what we think we know, what we think we can build, and what we think doesn't matter. Much of what we think will of course prove to be mistaken; that is one advantage of real experiments over thought experiments
    Duch, Włodzisław (2005). Brain-inspired conscious computing architecture. Journal of Mind and Behavior 26 (1-2):1-21.   (Cited by 8 | Google | More links)
    Abstract: What type of artificial systems will claim to be conscious and will claim to experience qualia? The ability to comment upon physical states of a brain-like dynamical system coupled with its environment seems to be sufficient to make such claims. The flow of internal states in such a system, guided and limited by associative memory, is similar to the stream of consciousness. Minimal requirements for an artificial system that will claim to be conscious were given in the form of a specific architecture named articon. Nonverbal discrimination of the working memory states of the articon gives it the ability to experience different qualities of internal states. Analysis of the inner state flows of such a system during typical behavioral process shows that qualia are inseparable from perception and action. The role of consciousness in learning of skills, when conscious information processing is replaced by subconscious, is elucidated. Arguments confirming that phenomenal experience is a result of cognitive processes are presented. Possible philosophical objections based on the Chinese room and other arguments are discussed, but they are insufficient to refute the articon’s claims. Conditions for genuine understanding that go beyond the Turing test are presented. Articons may fulfill such conditions and in principle the structure of their experiences may be arbitrarily close to human
    Ettinger, R. C. W. (2004). To be or not to be: The zombie in the computer. In Nick Bostrom, R.C.W. Ettinger & Charles Tandy (eds.), Death and Anti-Death, Volume 2: Two Hundred Years After Kant, Fifty Years After Turing. Palo Alto: Ria University Press.   (Google)
    Farrell, B. A. (1970). On the design of a conscious device. Mind 79 (July):321-346.   (Cited by 1 | Google | More links)
    Farleigh, Peter (2007). The ensemble and the single mind. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
    Franklin, Stan (2003). A conscious artifact? Journal of Consciousness Studies 10.   (Google)
    Franklin, Stan (2003). IDA: A conscious artifact? Journal of Consciousness Studies 10 (4):47-66.   (Cited by 40 | Google)
    Gunderson, Keith (1969). Cybernetics and mind-body problems. Inquiry 12 (1-4):406-19.   (Google)
    Gunderson, Keith (1971). Mentality and Machines. Doubleday.   (Cited by 29 | Google)
    Gunderson, Keith (1968). Robots, consciousness and programmed behaviour. British Journal for the Philosophy of Science 19 (August):109-22.   (Google | More links)
    Haikonen, Pentti O. A. (2007). Essential issues of conscious machines. Journal of Consciousness Studies 14 (7):72-84.   (Google | More links)
    Abstract: The development of conscious machines faces a number of difficult issues such as the apparent immateriality of mind, qualia and self-awareness. Also consciousness-related cognitive processes such as perception, imagination, motivation and inner speech are a technical challenge. It is foreseen that the development of machine consciousness would call for a system approach; the developer of conscious machines should consider complete systems that integrate the cognitive processes seamlessly and process information in a transparent way with representational and non-representational information-processing modes. An overview of the main issues is given and some possible solutions are outlined
    Haikonen, Pentti O. A. (2007). Robot Brains: Circuits and Systems for Conscious Machines. Wiley-Interscience.   (Google | More links)
    Haikonen, Pentti O. A. (2003). The Cognitive Approach to Conscious Machines. Thorverton UK: Imprint Academic.   (Cited by 20 | Google | More links)
    Harnad, Stevan (2003). Can a machine be conscious? How? Journal of Consciousness Studies 10 (4):67-75.   (Cited by 16 | Google | More links)
    Abstract: A "machine" is any causal physical system, hence we are machines, hence machines can be conscious. The question is: which kinds of machines can be conscious? Chances are that robots that can pass the Turing Test -- completely indistinguishable from us in their behavioral capacities -- can be conscious (i.e. feel), but we can never be sure (because of the "other-minds" problem). And we can never know HOW they have minds, because of the "mind/body" problem. We can only know how they pass the Turing Test, but not how, why or whether that makes them feel
    Henley, Tracy B. (1991). Consciousness and AI: A reconsideration of Shanon. Journal of Mind and Behavior 12 (3):367-370.   (Google)
    Hillis, D. (1998). Can a machine be conscious? In Stuart R. Hameroff, Alfred W. Kaszniak & A. C. Scott (eds.), Toward a Science of Consciousness II. MIT Press.   (Google)
    Holland, Owen (2007). A strongly embodied approach to machine consciousness. Journal of Consciousness Studies 14 (7):97-110.   (Cited by 4 | Google | More links)
    Abstract: Over sixty years ago, Kenneth Craik noted that, if an organism (or an artificial agent) carried 'a small-scale model of external reality and of its own possible actions within its head', it could use the model to behave intelligently. This paper argues that the possible actions might best be represented by interactions between a model of reality and a model of the agent, and that, in such an arrangement, the internal model of the agent might be a transparent model of the sort recently discussed by Metzinger, and so might offer a useful analogue of a conscious entity. The CRONOS project has built a robot functionally similar to a human that has been provided with an internal model of itself and of the world to be used in the way suggested by Craik; when the system is completed, it will be possible to study its operation from the perspective not only of artificial intelligence, but also of machine consciousness
    Holland, Owen (ed.) (2003). Machine Consciousness. Imprint Academic.   (Cited by 19 | Google | More links)
    Abstract: In this collection of essays we hear from an international array of computer and brain scientists who are actively working from both the machine and human ends...
    Holland, Owen & Goodman, Russell B. (2003). Robots with internal models: A route to machine consciousness? Journal of Consciousness Studies 10 (4):77-109.   (Cited by 20 | Google | More links)
    Holland, Owen; Knight, Rob & Newcombe, Richard (2007). The role of the self process in embodied machine consciousness. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
    Joy, Glenn C. (1989). Gunderson and Searle: A common error about artificial intelligence. Southwest Philosophical Studies 28:28-34.   (Google)
    Kirk, Robert E. (1986). Sentience, causation and some robots. Australasian Journal of Philosophy 64 (September):308-21.   (Cited by 1 | Annotation | Google | More links)
    Kiverstein, Julian (2007). Could a robot have a subjective point of view? Journal of Consciousness Studies 14 (7):127-139.   (Cited by 2 | Google | More links)
    Abstract: Scepticism about the possibility of machine consciousness comes in at least two forms. Some argue that our neurobiology is special, and only something sharing our neurobiology could be a subject of experience. Others argue that a machine couldn't be anything else but a zombie: there could never be something it is like to be a machine. I advance a dynamic sensorimotor account of consciousness which argues against both these varieties of scepticism
    Levy, Donald (2003). How to psychoanalyze a robot: Unconscious cognition and the evolution of intentionality. Minds and Machines 13 (2):203-212.   (Google | More links)
    Abstract: According to a common philosophical distinction, the 'original' intentionality, or 'aboutness' possessed by our thoughts, beliefs and desires, is categorically different from the 'derived' intentionality manifested in some of our artifacts -- our words, books and pictures, for example. Those making the distinction claim that the intentionality of our artifacts is 'parasitic' on the 'genuine' intentionality to be found in members of the former class of things. In Kinds of Minds: Toward an Understanding of Consciousness, Daniel Dennett criticizes that claim and the distinction it rests on, and seeks to show that "metaphysically original intentionality" is illusory by working out the implications he sees in the practical possibility of a certain type of robot, i.e., one that generates 'utterances' which are 'inscrutable to the robot's designers' so that we, and they, must consult the robot to discover the meaning of its utterances. I argue that the implications Dennett finds are erroneous, regardless of whether such a robot is possible, and therefore that the real existence of metaphysically original intentionality has not been undermined by the possibility of the robot Dennett describes
    Lucas, John R. (1994). A view of one's own (conscious machines). Philosophical Transactions of the Royal Society, Series A 349:147-52.   (Google)
    Lycan, William G. (1998). Qualitative experience in machines. In Terrell Ward Bynum & James H. Moor (eds.), How Computers Are Changing Philosophy. Blackwell.   (Google)
    Mackay, Donald M. (1963). Consciousness and mechanism: A reply to miss Fozzy. British Journal for the Philosophy of Science 14 (August):157-159.   (Google | More links)
    Mackay, Donald M. (1985). Machines, brains, and persons. Zygon 20 (December):401-412.   (Google)
    Manzotti, Riccardo (2007). From artificial intelligence to artificial consciousness. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
    Margolis, Joseph (1974). Ascribing actions to machines. Behaviorism 2:85-93.   (Google)
    Marras, Ausonio (1993). Pollock on how to build a person. Dialogue 32 (3):595-605.   (Cited by 1 | Google)
    Maudlin, Tim (1989). Computation and consciousness. Journal of Philosophy 86 (August):407-32.   (Cited by 24 | Annotation | Google | More links)
    Mayberry, Thomas C. (1970). Consciousness and robots. Personalist 51:222-236.   (Google)
    McCann, Hugh J. (2005). Intentional action and intending: Recent empirical studies. Philosophical Psychology 18 (6):737-748.   (Cited by 19 | Google | More links)
    Abstract: Recent empirical work calls into question the so-called Simple View that an agent who A’s intentionally intends to A. In experimental studies, ordinary speakers frequently assent to claims that, in certain cases, agents who knowingly behave wrongly intentionally bring about the harm they do; yet the speakers tend to deny that it was the intention of those agents to cause the harm. This paper reports two additional studies that at first appear to support the original ones, but argues that in fact, the evidence of all the studies considered is best understood in terms of the Simple View.
    McCarthy, John (1996). Making robots conscious of their mental states. In S. Muggleton (ed.), Machine Intelligence 15. Oxford University Press.   (Cited by 68 | Google | More links)
    Abstract: In AI, consciousness of self consists in a program having certain kinds of facts about its own mental processes and state of mind. We discuss what consciousness of its own mental structures a robot will need in order to operate in the common sense world and accomplish the tasks humans will give it. It's quite a lot. Many features of human consciousness will be wanted, some will not, and some abilities not possessed by humans have already been found feasible and useful in limited contexts. We give preliminary fragments of a logical language a robot can use to represent information about its own state of mind. A robot will often have to conclude that it cannot decide a question on the basis of the information in memory and therefore must seek information externally. Gödel's idea of relative consistency is used to formalize non-knowledge. Programs with the kind of consciousness discussed in this article do not yet exist, although programs with some components of it exist. Thinking about consciousness with a view to designing it provides a new approach to some of the problems of consciousness studied by philosophers. One advantage is that it focusses on the aspects of consciousness important for intelligent behavior
    McDermott, Drew (2007). Artificial intelligence and consciousness. In Philip David Zelazo, Morris Moscovitch & Evan Thompson (eds.), The Cambridge Handbook of Consciousness. Cambridge.   (Google)
    McGinn, Colin (1987). Could a machine be conscious? In Colin Blakemore & Susan A. Greenfield (eds.), Mindwaves. Blackwell.   (Cited by 1 | Annotation | Google)
    Mele, Alfred R. (2006). The folk concept of intentional action: A commentary. Journal of Cognition and Culture.   (Cited by 1 | Google | More links)
    Abstract: In this commentary, I discuss the three main articles in this volume that present survey data relevant to a search for something that might merit the label “the folk concept of intentional action” – the articles by Joshua Knobe and Arudra Burra, Bertram Malle, and Thomas Nadelhoffer. My guiding question is this: What shape might we find in an analysis of intentional action that takes at face value the results of all of the relevant surveys about vignettes discussed in these three articles? [1] To simplify exposition, I assume that there is something that merits the label I mentioned
    Menant, Christophe, Proposal for an approach to artificial consciousness based on self-consciousness.   (Google | More links)
    Abstract: Current research on artificial consciousness is focused on phenomenal consciousness and on functional consciousness. We propose to shift the focus to self-consciousness in order to open new areas of investigation. We use an existing scenario where self-consciousness is considered as the result of an evolution of representations. Application of the scenario to the possible build up of a conscious robot also introduces questions relative to emotions in robots. Areas of investigation are proposed as a continuation of this approach
    Minsky, Marvin L. (1991). Conscious machines. In Machinery of Consciousness.   (Google)
    Moffett, Marc A. & Cole Wright, Jennifer (online). The folk on know-how: Why radical intellectualism does not over-intellectualize.   (Google)
    Abstract: Philosophical discussion of the nature of know-how has focused on the relation between know-how and ability. Broadly speaking, neo-Ryleans attempt to identify know-how with a certain type of ability, [1] while, traditionally, intellectualists attempt to reduce it to some form of propositional knowledge. For our purposes, however, this characterization of the debate is too crude. Instead, we prefer the following more explicit taxonomy. Anti-intellectualists, as we will use the term, maintain that knowing how to φ entails the ability to φ. Dispositionalists maintain that the ability to φ is sufficient (modulo some fairly innocuous constraints) for knowing how to φ. Intellectualists, as we will use the term, deny the anti-intellectualist claim. Finally, radical intellectualists deny both the anti-intellectualist and dispositionalist claims. Pace neo-Ryleans (who in our taxonomy are those who accept both dispositionalism and anti-intellectualism), radical intellectualists maintain that the ability to φ is neither necessary nor sufficient for knowing how to φ
    Nichols, Shaun (2004). Folk concepts and intuitions: From philosophy to cognitive science. Trends in Cognitive Sciences.   (Cited by 10 | Google | More links)
    Abstract: Analytic philosophers have long used a priori methods to characterize folk concepts like knowledge, belief, and wrongness. Recently, researchers have begun to exploit social scientific methodologies to characterize such folk concepts. One line of work has explored folk intuitions on cases that are disputed within philosophy. A second approach, with potentially more radical implications, applies the methods of cross-cultural psychology to philosophical intuitions. Recent work suggests that people in different cultures have systematically different intuitions surrounding folk concepts like wrong, knows, and refers. A third strand of research explores the emergence and character of folk concepts in children. These approaches to characterizing folk concepts provide important resources that will supplement, and perhaps sometimes displace, a priori approaches
    Pinker, Steven (online). Could a computer ever be conscious?   (Google)
    Prinz, Jesse J. (2003). Level-headed mysterianism and artificial experience. Journal of Consciousness Studies 10 (4-5):111-132.   (Cited by 8 | Google)
    Puccetti, Roland (1975). God and the robots: A philosophical fable. Personalist 56:29-30.   (Google)
    Puccetti, Roland (1967). On thinking machines and feeling machines. British Journal for the Philosophy of Science 18 (May):39-51.   (Cited by 3 | Annotation | Google | More links)
    Putnam, Hilary (1964). Robots: Machines or artificially created life? Journal of Philosophy 61 (November):668-91.   (Annotation | Google)
    Rhodes, Kris (ms). Vindication of the Rights of Machine.   (Google | More links)
    Abstract: In this paper, I argue that certain Machines can have rights independently of whether they are sentient, or conscious, or whatever you might call it.
    Robinson, William S. (1998). Could a robot be qualitatively conscious? AISB 99:13-18.   (Google)
    Sanz, Ricardo; López, Ignacio & Bermejo-Alonso, Julita (2007). A rationale and vision for machine consciousness in complex controllers. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
    Schlagel, Richard H. (1999). Why not artificial consciousness or thought? Minds and Machines 9 (1):3-28.   (Cited by 6 | Google | More links)
    Abstract:   The purpose of this article is to show why consciousness and thought are not manifested in digital computers. Analyzing the rationale for claiming that the formal manipulation of physical symbols in Turing machines would emulate human thought, the article attempts to show why this proved false. This is because the reinterpretation of designation and meaning to accommodate physical symbol manipulation eliminated their crucial functions in human discourse. Words have denotations and intensional meanings because the brain transforms the physical stimuli received from the microworld into a qualitative, macroscopic representation for consciousness. Lacking this capacity as programmed machines, computers have no representations for their symbols to designate and mean. Unlike human beings in which consciousness and thought, with their inherent content, have emerged because of their organic natures, serial processing computers or parallel distributed processing systems, as programmed electrical machines, lack these causal capacities
    Scriven, Michael (1953). The mechanical concept of mind. Mind 62 (April):230-240.   (Cited by 12 | Annotation | Google | More links)
    Shanon, Benny (1991). Consciousness and the computer: A reply to Henley. Journal of Mind and Behavior 12 (3):371-375.   (Google)
    Sharlow, Mark F. (ms). Can machines have first-person properties?   (Google)
    Abstract: One of the most important ongoing debates in the philosophy of mind is the debate over the reality of the first-person character of consciousness.[1] Philosophers on one side of this debate hold that some features of experience are accessible only from a first-person standpoint. Some members of this camp, notably Frank Jackson, have maintained that epiphenomenal properties play roles in consciousness [2]; others, notably John R. Searle, have rejected dualism and regarded mental phenomena as entirely biological.[3] In the opposite camp are philosophers who hold that all mental capacities are in some sense computational - or, more broadly, explainable in terms of features of information processing systems.[4] Consistent with this explanatory agenda, members of this camp normally deny that any aspect of mind is accessible solely from a first-person standpoint. This denial sometimes goes very far - even as far as Dennett's claim that the phenomenology of conscious experience does not really exist
    Simon, Michael A. (1969). Could there be a conscious automaton? American Philosophical Quarterly 6 (January):71-78.   (Google)
    Sloman, Aaron & Chrisley, Ronald L. (2003). Virtual machines and consciousness. Journal of Consciousness Studies 10 (4-5):133-172.   (Cited by 26 | Google | More links)
    Smart, J. J. C. (1959). Professor Ziff on robots. Analysis 19 (April):117-118.   (Cited by 3 | Google)
    Smart, Ninian (1959). Robots incorporated. Analysis 19 (April):119-120.   (Cited by 3 | Google)
    Stuart, Susan A. J. (2007). Machine consciousness: Cognitive and kinaesthetic imagination. Journal of Consciousness Studies 14 (7):141-153.   (Cited by 1 | Google | More links)
    Abstract: Machine consciousness exists already in organic systems and it is only a matter of time -- and some agreement -- before it will be realised in reverse-engineered organic systems and forward-engineered inorganic systems. The agreement must be over the preconditions that must first be met if the enterprise is to be successful, and it is these preconditions, for instance, being a socially-embedded, structurally-coupled and dynamic, goal-directed entity that organises its perceptual input and enacts its world through the application of both a cognitive and kinaesthetic imagination, that I shall concentrate on presenting in this paper. It will become clear that these preconditions will present engineers with a tall order, but not, I will argue, an impossible one. After all, we might agree with Freeman and Núñez's claim that the machine metaphor has restricted the expectations of the cognitive sciences (Freeman & Núñez, 1999); but it is a double-edged sword, since our limited expectations about machines also narrow the potential of our cognitive science
    Stubenberg, Leopold (1992). What is it like to be Oscar? Synthese 90 (1):1-26.   (Cited by 1 | Annotation | Google | More links)
    Abstract:   Oscar is going to be the first artificial person — at any rate, he is going to be the first artificial person to be built in Tucson's Philosophy Department. Oscar's creator, John Pollock, maintains that once Oscar is complete he will experience qualia, will be self-conscious, will have desires, fears, intentions, and a full range of mental states (Pollock 1989, pp. ix–x). In this paper I focus on what seems to me to be the most problematical of these claims, viz., that Oscar will experience qualia. I argue that we have not been given sufficient reasons to believe this bold claim. I doubt that Oscar will enjoy qualitative conscious phenomena and I maintain that it will be like nothing to be Oscar
    Tagliasco, Vincenzo (2007). Artificial consciousness: A technological discipline. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
    Taylor, John G. (2007). Through machine attention to machine consciousness. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
    Thompson, David L. (1965). Can a machine be conscious? British Journal for the Philosophy of Science 16 (May):33-43.   (Annotation | Google | More links)
    Thompson, William I. (2003). The Borg or Borges? In Owen Holland (ed.), Machine Consciousness. Imprint Academic.   (Cited by 2 | Google | More links)
    Torrance, Steve (2007). Two conceptions of machine phenomenality. Journal of Consciousness Studies 14 (7):154-166.   (Cited by 2 | Google | More links)
    Abstract: Current approaches to machine consciousness (MC) tend to offer a range of characteristic responses to critics of the enterprise. Many of these responses seem to marginalize phenomenal consciousness, by presupposing a 'thin' conception of phenomenality. This conception is, we will argue, largely shared by anti-computationalist critics of MC. On the thin conception, physiological or neural or functional or organizational features are secondary accompaniments to consciousness rather than primary components of consciousness itself. We outline an alternative, 'thick' conception of phenomenality. This offers some signposts in the direction of a more adequate approach to MC
    Tson, M. E. (ms). From Dust to Descartes: A Mechanical and Evolutionary Explanation of Consciousness and Self-Awareness.   (Google)
    Abstract: Beginning with physical reactions as simple and mechanical as rust, From Dust to Descartes goes step by evolutionary step to explore how the most remarkable and personal aspects of consciousness have arisen, how our awareness of the world and of ourselves differs from that of other species, and whether machines could ever become self-aware. Part I addresses a newborn’s innate abilities. Part II shows how, with these and experience, we can form expectations about the world. Part III concentrates on the essential role that others play in the formation of self-awareness. Part IV then explores what follows from this explanation of human consciousness, touching on topics such as free will, personality, intelligence, and color perception which are often associated with self-awareness and the philosophy of mind.
    Van de Vate, Dwight (1971). The problem of robot consciousness. Philosophy and Phenomenological Research 32:149-65.   (Google)
    Wallace, Rodrick (2006). Pitfalls in biological computing: Canonical and idiosyncratic dysfunction of conscious machines. Mind and Matter 4 (1):91-113.   (Cited by 7 | Google | More links)
    Abstract: The central paradigm of artificial intelligence is rapidly shifting toward biological models for both robotic devices and systems performing such critical tasks as network management, vehicle navigation, and process control. Here we use a recent mathematical analysis of the necessary conditions for consciousness in humans to explore likely failure modes inherent to a broad class of biologically inspired computing machines. Analogs to developmental psychopathology, in which regulatory mechanisms for consciousness fail progressively and subtly under stress, and to inattentional blindness, where a narrow 'syntactic band pass' defined by the rate distortion manifold of conscious attention results in pathological fixation, seem inevitable. Similar problems are likely to confront other possible architectures, although their mathematical description may be far less straightforward. Computing devices constructed on biological paradigms will inevitably lack the elaborate, but poorly understood, system of control mechanisms which has evolved over the last few hundred million years to stabilize consciousness in higher animals. This will make such machines prone to insidious degradation, and, ultimately, catastrophic failure
    Ziff, P. (1959). The feelings of robots. Analysis 19 (January):64-68.   (Cited by 11 | Annotation | Google)

    6.1e Machine Mentality, Misc

    Albritton, Rogers (1964). Comments on Hilary Putnam's robots: Machines or artificially created life. Journal of Philosophy 61 (November):691-694.   (Google)
    Ashby, W. R. (1947). The nervous system as physical machine: With special reference to the origin of adaptive behaviour. Mind 56 (January):44-59.   (Cited by 8 | Google | More links)
    Beisecker, David (2006). Dennett's overlooked originality. Minds and Machines 16 (1):43-55.   (Google | More links)
    Abstract: No philosopher has worked harder than Dan Dennett to set the possibility of machine mentality on firm philosophical footing. Dennett’s defense of this possibility has both a positive and a negative thrust. On the positive side, he has developed an account of mental activity that is tailor-made for the attribution of intentional states to purely mechanical contrivances, while on the negative side, he pillories as mystery mongering and skyhook grasping any attempts to erect barriers to the conception of machine mentality by excavating gulfs to keep us “bona fide” thinkers apart from the rest of creation. While I think he’s “won” the rhetorical tilts with his philosophical adversaries, I worry that Dennett’s negative side sometimes gets the better of him, and that this obscures advances that can be made on the positive side of his program. In this paper, I show that Dennett is much too dismissive of original intentionality in particular, and that this notion can be put to good theoretical use after all. Though deployed to distinguish different grades of mentality, it can (and should) be incorporated into a philosophical account of the mind that is recognizably Dennettian in spirit
    Beloff, John (2002). Minds or machines. Truth Journal.   (Cited by 2 | Google)
    Boden, Margaret A. (1995). Could a robot be creative--and would we know? In Android Epistemology. Cambridge: MIT Press.   (Cited by 6 | Google | More links)
    Boden, Margaret A. (1969). Machine perception. Philosophical Quarterly 19 (January):33-45.   (Cited by 2 | Google | More links)
    Bostrom, Nick (2003). Taking intelligent machines seriously: Reply to critics. Futures 35 (8):901-906.   (Google | More links)
    Abstract: In an earlier paper in this journal[1], I sought to defend the claims that (1) substantial probability should be assigned to the hypothesis that machines will outsmart humans within 50 years, (2) such an event would have immense ramifications for many important areas of human concern, and that consequently (3) serious attention should be given to this scenario. Here, I will address a number of points made by several commentators
    Brey, Philip (2001). Hubert Dreyfus: Humans versus computers. In American Philosophy of Technology: The Empirical Turn. Bloomington: Indiana University Press.   (Cited by 2 | Google)
    Bringsjord, Selmer (1998). Cognition is not computation: The argument from irreversibility. Synthese 113 (2):285-320.   (Cited by 11 | Google | More links)
    Abstract:   The dominant scientific and philosophical view of the mind – according to which, put starkly, cognition is computation – is refuted herein, via specification and defense of the following new argument: Computation is reversible; cognition isn't; ergo, cognition isn't computation. After presenting a sustained dialectic arising from this defense, we conclude with a brief preview of the view we would put in place of the cognition-is-computation doctrine
    Bringsjord, Selmer (1994). Precis of What Robots Can and Can't Be. Psycoloquy 5 (59).   (Cited by 22 | Google)
    Bunge, Mario (1956). Do computers think? (I). British Journal for the Philosophy of Science 7 (26):139-148.   (Cited by 1 | Google | More links)
    Bunge, Mario (1956). Do computers think? (II). British Journal for the Philosophy of Science 7 (27):212-219.   (Google | More links)
    Burks, Arthur W. (1973). Logic, computers, and men. Proceedings and Addresses of the American Philosophical Association 46:39-57.   (Cited by 4 | Annotation | Google)
    Campbell, Richmond M. & Rosenberg, Alexander (1973). Action, purpose, and consciousness among the computers. Philosophy of Science 40 (December):547-557.   (Google | More links)
    Casey, Gerard (1992). Minds and machines. American Catholic Philosophical Quarterly 66 (1):57-80.   (Cited by 3 | Google)
    Abstract: The emergence of electronic computers in the last thirty years has given rise to many interesting questions. Many of these questions are technical, relating to a machine’s ability to perform complex operations in a variety of circumstances. While some of these questions are not without philosophical interest, the one question which above all others has stimulated philosophical interest is explicitly non-technical and it can be expressed crudely as follows: Can a machine be said to think and, if so, in what sense? The issue has received much attention in the scholarly journals with articles and arguments appearing in great profusion, some resolutely answering this question in the affirmative, some, equally resolutely, answering this question in the negative, and others manifesting modified rapture. While the ramifications of the question are enormous I believe that the issue at the heart of the matter has gradually emerged from the forest of complications
    Cherry, Christopher (1991). Machines as persons? - I. In Human Beings. New York: Cambridge University Press.   (Google)
    Cohen, L. Jonathan (1955). Can there be artificial minds? Analysis 16 (December):36-41.   (Cited by 3 | Annotation | Google)
    Collins, Harry M. (2008). Response to Selinger on Dreyfus. Phenomenology and the Cognitive Sciences 7 (2).   (Google | More links)
    Abstract: My claim is clear and unambiguous: no machine will pass a well-designed Turing Test unless we find some means of embedding it in lived social life. We have no idea how to do this but my argument, and all our evidence, suggests that it will not be a necessary condition that the machine have more than a minimal body. Exactly how minimal is still being worked out
    Copeland, B. Jack (2000). Narrow versus wide mechanism: Including a re-examination of Turing's views on the mind-machine issue. Journal of Philosophy 97 (1):5-33.   (Cited by 42 | Google | More links)
    Dayre, Kenneth M. (1968). Intelligence, bodies, and digital computers. Review of Metaphysics 21 (June):714-723.   (Google)
    Dembski, William A. (1999). Are we spiritual machines? First Things 96:25-31.   (Google)
    Abstract: For two hundred years materialist philosophers have argued that man is some sort of machine. The claim began with French materialists of the Enlightenment such as Pierre Cabanis, Julien La Mettrie, and Baron d’Holbach (La Mettrie even wrote a book titled Man the Machine). Likewise contemporary materialists like Marvin Minsky, Daniel Dennett, and Patricia Churchland claim that the motions and modifications of matter are sufficient to account for all human experiences, even our interior and cognitive ones. Whereas the Enlightenment philosophes might have thought of humans in terms of gear mechanisms and fluid flows, contemporary materialists think of humans in terms of neurological systems and computational devices. The idiom has been updated, but the underlying impulse to reduce mind to matter remains unchanged
    Dennett, Daniel C. (1984). Can machines think? In M. G. Shafto (ed.), How We Know. Harper & Row.   (Cited by 24 | Annotation | Google)
    Dennett, Daniel C. (1997). Did Hal commit murder? In D. Stork (ed.), Hal's Legacy: 2001's Computer As Dream and Reality. MIT Press.   (Google)
    Abstract: The first robot homicide was committed in 1981, according to my files. I have a yellowed clipping dated 12/9/81 from the Philadelphia Inquirer--not the National Enquirer--with the headline: "Robot killed repairman, Japan reports." The story was an anti-climax: at the Kawasaki Heavy Industries plant in Akashi, a malfunctioning robotic arm pushed a repairman against a gearwheel-milling machine, crushing him to death. The repairman had failed to follow proper instructions for shutting down the arm before entering the workspace. Why, indeed, had this industrial accident in Japan been reported in a Philadelphia newspaper? Every day somewhere in the world a human worker is killed by one machine or another. The difference, of course, was that in the public imagination at least, this was no ordinary machine; this was a robot, a machine that might have a mind, might have evil intentions, might be capable not just of homicide but of murder
    Dretske, Fred (1993). Can intelligence be artificial? Philosophical Studies 71 (2):201-16.   (Cited by 3 | Annotation | Google | More links)
    Dretske, Fred (1985). Machines and the mental. Proceedings and Addresses of the American Philosophical Association 59 (1):23-33.   (Cited by 27 | Annotation | Google)
    Drexler, Eric (1986). Thinking machines. In Engines of Creation. Fourth Estate.   (Cited by 1 | Google)
    Dreyfus, Hubert L. (1972). What Computers Can't Do. Harper and Row.   (Cited by 847 | Annotation | Google)
    Dreyfus, Hubert L. (1967). Why computers must have bodies in order to be intelligent. Review of Metaphysics 21 (September):13-32.   (Cited by 13 | Google)
    Drozdek, Adam (1993). Computers and the mind-body problem: On ontological and epistemological dualism. Idealistic Studies 23 (1):39-48.   (Google)
    Endicott, Ronald P. (1996). Searle, syntax, and observer-relativity. Canadian Journal of Philosophy 26 (1):101-22.   (Cited by 3 | Google)
    Abstract: I critically examine some provocative arguments that John Searle presents in his book The Rediscovery of Mind to support the claim that the syntactic states of a classical computational system are "observer relative" or "mind dependent" or otherwise less than fully and objectively real. I begin by explaining how this claim differs from Searle's earlier and more well-known claim that the physical states of a machine, including the syntactic states, are insufficient to determine its semantics. In contrast, his more recent claim concerns the syntax, in particular, whether a machine actually has symbols to underlie its semantics. I then present and respond to a number of arguments that Searle offers to support this claim, including whether machine symbols are observer relative because the assignment of syntax is arbitrary, or linked to universal realizability, or linked to the sub-personal interpretive acts of a homunculus, or linked to a person's consciousness. I conclude that a realist about the computational model need not be troubled by such arguments. Their key premises need further support.
    Fisher, Mark (1983). A note on free will and artificial intelligence. Philosophia 13 (September):75-80.   (Google | More links)
    Fozzy, P. J. (1963). Professor MacKay on machines. British Journal for the Philosophy of Science 14 (August):154-156.   (Google | More links)
    Friedland, Julian (2005). Wittgenstein and the aesthetic robot's handicap. Philosophical Investigations 28 (2):177-192.   (Google | More links)
    Fulton, James S. (1957). Computing machines and minds. Personalist 38:62-72.   (Google)
    Gaglio, Salvatore (2007). Intelligent artificial systems. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
    Gams, Matjaz (ed.) (1997). Mind Versus Computer: Were Dreyfus and Winograd Right? Amsterdam: IOS Press.   (Cited by 7 | Google | More links)
    Gauld, Alan (1966). Could a machine perceive? British Journal for the Philosophy of Science 17 (May):44-58.   (Cited by 3 | Google | More links)
    Gogol, Daniel (1970). Determinism and the predicting machine. Philosophy and Phenomenological Research 30 (March):455-456.   (Google | More links)
    Goldkind, Stuart (1982). Machines and mistakes. Ratio 24 (December):173-184.   (Cited by 1 | Google)
    Goldberg, Sanford C. (1997). The very idea of computer self-knowledge and self-deception. Minds and Machines 7 (4):515-529.   (Cited by 5 | Google | More links)
    Abstract:   Do computers have beliefs? I argue that anyone who answers in the affirmative holds a view that is incompatible with what I shall call the commonsense approach to the propositional attitudes. My claims shall be two. First, the commonsense view places important constraints on what can be acknowledged as a case of having a belief. Second, computers – at least those for which having a belief would be conceived as having a sentence in a belief box – fail to satisfy some of these constraints. This second claim can best be brought out in the context of an examination of the idea of computer self-knowledge and self-deception, but the conclusion is perfectly general: the idea that computers are believers, like the idea that computers could have self-knowledge or be self-deceived, is incompatible with the commonsense view. The significance of the argument lies in the choice it forces on us: whether to revise our notion of belief so as to accommodate the claim that computers are believers, or to give up on that claim so as to preserve our pretheoretic notion of the attitudes. We cannot have it both ways
    Gomila, Antoni (1995). From cognitive systems to persons. In Android Epistemology. Cambridge: MIT Press.   (Cited by 2 | Google)
    Gunderson, Keith (1963). Interview with a robot. Analysis 23 (June):136-142.   (Cited by 2 | Google)
    Gunderson, Keith (1985). Mentality And Machines, Second Edition. Minneapolis: University of Minnesota Press.   (Google)
    Hauser, Larry (1993). The sense of thinking. Minds and Machines 3 (1):21-29.   (Cited by 3 | Google | More links)
    Abstract:   It will be found that the great majority, given the premiss that thought is not distinct from corporeal motion, take a much more rational line and maintain that thought is the same in the brutes as in us, since they observe all sorts of corporeal motions in them, just as in us. And they will add that the difference, which is merely one of degree, does not imply any essential difference; from this they will be quite justified in concluding that, although there may be a smaller degree of reason in the beasts than there is in us, the beasts possess minds which are of exactly the same type as ours. (Descartes 1642: 288–289.)
    Hauser, Larry (1993). Why isn't my pocket calculator a thinking thing? Minds and Machines 3 (1):3-10.   (Cited by 11 | Google | More links)
    Abstract: My pocket calculator (Cal) has certain arithmetical abilities: it seems Cal calculates. That calculating is thinking seems equally untendentious. Yet these two claims together provide premises for a seemingly valid syllogism whose conclusion -- Cal thinks -- most would deny. I consider several ways to avoid this conclusion, and find them mostly wanting. Either we ourselves can't be said to think or calculate if our calculation-like performances are judged by the standards proposed to rule out Cal; or the standards -- e.g., autonomy and self-consciousness -- make it impossible to verify whether anything or anyone (save myself) meets them. While appeals to the intentionality of thought or the unity of minds provide more credible lines of resistance, available accounts of intentionality and mental unity are insufficiently clear and warranted to provide very substantial arguments against Cal's title to be called a thinking thing. Indeed, considerations favoring granting that title are more formidable than generally appreciated
    Heffernan, James D. (1978). Some doubts about Turing machine arguments. Philosophy of Science 45 (December):638-647.   (Google | More links)
    Henley, Tracy B. (1990). Natural problems and artificial intelligence. Behavior and Philosophy 18:43-55.   (Cited by 4 | Annotation | Google)
    Joske, W. D. (1972). Deliberating machines. Philosophical Papers 1 (October):57-66.   (Google)
    Kary, Michael & Mahner, Martin (2002). How would you know if you synthesized a thinking thing? Minds and Machines 12 (1):61-86.   (Cited by 1 | Google | More links)
    Abstract:   We confront the following popular views: that mind or life are algorithms; that thinking, or more generally any process other than computation, is computation; that anything other than a working brain can have thoughts; that anything other than a biological organism can be alive; that form and function are independent of matter; that sufficiently accurate simulations are just as genuine as the real things they imitate; and that the Turing test is either a necessary or sufficient or scientific procedure for evaluating whether or not an entity is intelligent. Drawing on the distinction between activities and tasks, and the fundamental scientific principles of ontological lawfulness, epistemological realism, and methodological skepticism, we argue for traditional scientific materialism of the emergentist kind in opposition to the functionalism, behaviourism, tacit idealism, and merely decorative materialism of the artificial intelligence and artificial life communities
    Kearns, John T. (1997). Thinking machines: Some fundamental confusions. Minds and Machines 7 (2):269-87.   (Cited by 8 | Google | More links)
    Abstract:   This paper explores Church's Thesis and related claims made by Turing. Church's Thesis concerns computable numerical functions, while Turing's claims concern both procedures for manipulating uninterpreted marks and machines that generate the results that these procedures would yield. It is argued that Turing's claims are true, and that they support (the truth of) Church's Thesis. It is further argued that the truth of Turing's and Church's Theses has no interesting consequences for human cognition or cognitive abilities. The Theses don't even mean that computers can do as much as people can when it comes to carrying out effective procedures. For carrying out a procedure is a purposive, intentional activity. No actual machine does, or can do, as much
    Krishna, Daya (1961). "Lying" and the compleat robot. British Journal for the Philosophy of Science 12 (August):146-149.   (Cited by 1 | Google | More links)
    Kugel, Peter (2002). Computing machines can't be intelligent (...And Turing said so). Minds and Machines 12 (4):563-579.   (Cited by 4 | Google | More links)
    Abstract:   According to the conventional wisdom, Turing (1950) said that computing machines can be intelligent. I don't believe it. I think that what Turing really said was that computing machines – computers limited to computing – can only fake intelligence. If we want computers to become genuinely intelligent, we will have to give them enough initiative (Turing, 1948, p. 21) to do more than compute. In this paper, I want to try to develop this idea. I want to explain how giving computers more "initiative" can allow them to do more than compute. And I want to say why I believe (and believe that Turing believed) that they will have to go beyond computation before they can become genuinely intelligent
    Lanier, Jaron (ms). Mindless thought experiments (a critique of machine intelligence).   (Google)
    Abstract: Since there isn't a computer that seems conscious at this time, the idea of machine consciousness is supported by thought experiments. Here's one old chestnut: "What if you replaced your neurons one by one with neuron sized and shaped substitutes made of silicon chips that perfectly mimicked the chemical and electric functions of the originals? If you just replaced one single neuron, surely you'd feel the same. As you proceed, as more and more neurons are replaced, you'd stay conscious. Why wouldn't you still be conscious at the end of the process, when you'd reside in a brain shaped glob of silicon? And why couldn't the resulting replacement brain have been manufactured by some other means?"
    Lanier, Jaron (1998). Three objections to the idea of artificial intelligence. In Stuart R. Hameroff, Alfred W. Kaszniak & A. C. Scott (eds.), Toward a Science of Consciousness II. MIT Press.   (Google)
    Laymon, Ronald E. (1988). Some computers can add (even if the IBM 1620 couldn't): Defending ENIAC's accumulators against Dretske. Behaviorism 16:1-16.   (Google)
    Lind, Richard W. (1986). The priority of attention: Intentionality for automata. The Monist 69 (October):609-619.   (Cited by 1 | Google)
    Long, Douglas C. (1994). Why Machines Can Neither Think nor Feel. In Dale W. Jamieson (ed.), Language, Mind and Art. Kluwer.   (Cited by 1 | Google)
    Abstract: Over three decades ago, in a brief but provocative essay, Paul Ziff argued for the thesis that robots cannot have feelings because they are "mechanisms, not organisms, not living creatures. There could be a broken-down robot but not a dead one. Only living creatures can literally have feelings."[i] Since machines are not living things they cannot have feelings
    Mackay, Donald M. (1951). Mindlike behaviour in artefacts. British Journal for the Philosophy of Science 2 (August):105-21.   (Google | More links)
    Mackay, Donald M. (1952). Mentality in machines. Proceedings of the Aristotelian Society 26:61-86.   (Cited by 11 | Google)
    Mackay, Donald M. (1952). Mentality in machines, part III. Proceedings of the Aristotelian Society 26:61-86.   (Google)
    Mackay, Donald M. (1962). The use of behavioural language to refer to mechanical processes. British Journal for the Philosophy of Science 13 (August):89-103.   (Cited by 9 | Google | More links)
    Manning, Rita C. (1987). Why Sherlock Holmes can't be replaced by an expert system. Philosophical Studies 51 (January):19-28.   (Cited by 3 | Annotation | Google | More links)
    Mays, W. (1952). Can machines think? Philosophy 27 (April):148-62.   (Cited by 7 | Google)
    McCarthy, John (1979). Ascribing mental qualities to machines. In Martin Ringle (ed.), Philosophical Perspectives in Artificial Intelligence. Humanities Press.   (Cited by 168 | Google | More links)
    Abstract: Ascribing mental qualities like beliefs, intentions and wants to a machine is sometimes correct if done conservatively and is sometimes necessary to express what is known about its state. We propose some new definitional tools for this: definitions relative to an approximate theory and second order structural definitions
    McNamara, Paul (1993). Comments on can intelligence be artificial? Philosophical Studies 71 (2):217-222.   (Google | More links)
    Minsky, Marvin L. (1968). Matter, minds, models. In Marvin L. Minsky (ed.), Semantic Information Processing. MIT Press.   (Cited by 18 | Google)
    Minsky, Marvin L. (1982). Why people think computers can't. AI Magazine Fall 1982.   (Cited by 32 | Google | More links)
    Abstract: Most people think computers will never be able to think. That is, really think. Not now or ever. To be sure, most people also agree that computers can do many things that a person would have to be thinking to do. Then how could a machine seem to think but not actually think? Well, setting aside the question of what thinking actually is, I think that most of us would answer that by saying that in these cases, what the computer is doing is merely a superficial imitation of human intelligence. It has been designed to obey certain simple commands, and then it has been provided with programs composed of those commands. Because of this, the computer has to obey those commands, but without any idea of what's happening
    Nanay, Bence (2006). Symmetry between the intentionality of minds and machines? The biological plausibility of Dennett's position. Minds and Machines 16 (1):57-71.   (Google | More links)
    Abstract: One of the most influential arguments against the claim that computers can think is that while our intentionality is intrinsic, that of computers is derived: it is parasitic on the intentionality of the programmer who designed the computer-program. Daniel Dennett chose a surprising strategy for arguing against this asymmetry: instead of denying that the intentionality of computers is derived, he endeavours to argue that human intentionality is derived too. I intend to examine the biological plausibility of Dennett’s suggestion and show that Dennett’s argument for the claim that human intentionality is derived because it was designed by natural selection is based on a misunderstanding of how natural selection works
    Negley, Glenn (1951). Cybernetics and theories of mind. Journal of Philosophy 48 (September):574-82.   (Cited by 2 | Google | More links)
    Pinsky, Leonard (1951). Do machines think about machines thinking? Mind 60 (July):397-398.   (Google | More links)
    Preston, Beth (1995). The ontological argument against the mind-machine hypothesis. Philosophical Studies 80 (2):131-57.   (Annotation | Google | More links)
    Proudfoot, Diane (2004). The implications of an externalist theory of rule-following behavior for robot cognition. Minds and Machines 14 (3):283-308.   (Google | More links)
    Abstract:   Given (1) Wittgenstein's externalist analysis of the distinction between following a rule and behaving in accordance with a rule, (2) prima facie connections between rule-following and psychological capacities, and (3) pragmatic issues about training, it follows that most, perhaps even all, future artificially intelligent computers and robots will not use language, possess concepts, or reason. This argument suggests that AI's traditional aim of building machines with minds, exemplified in current work on cognitive robotics, is in need of substantial revision
    Puccetti, Roland (1966). Can humans think? Analysis 26 (June):198-202.   (Google)
    Putnam, Hilary (1967). The mental life of some machines. In Hector-Neri Castaneda (ed.), Intentionality, Minds and Perception. Wayne State University Press.   (Cited by 37 | Annotation | Google)
    Pylyshyn, Zenon W. (1975). Minds, machines and phenomenology: Some reflections on Dreyfus' What Computers Can't Do. Cognition 3:57-77.   (Cited by 7 | Google)
    Rapaport, William J. (1993). Because mere calculating isn't thinking: Comments on Hauser's Why Isn't My Pocket Calculator a Thinking Thing?. Minds and Machines 3 (1):11-20.   (Cited by 5 | Google | More links)
    Rapaport, William J. (online). Computer processes and virtual persons: Comments on Cole's "artificial intelligence and personal identity".   (Cited by 7 | Google | More links)
    Abstract: This is a draft of the written version of comments on a paper by David Cole, presented orally at the American Philosophical Association Central Division meeting in New Orleans, 27 April 1990. Following the written comments are 2 appendices: One contains a letter to Cole updating these comments. The other is the handout from the oral presentation
    Ritchie, Graeme (2007). Some empirical criteria for attributing creativity to a computer program. Minds and Machines 17 (1).   (Google | More links)
    Abstract: Over recent decades there has been a growing interest in the question of whether computer programs are capable of genuinely creative activity. Although this notion can be explored as a purely philosophical debate, an alternative perspective is to consider what aspects of the behaviour of a program might be noted or measured in order to arrive at an empirically supported judgement that creativity has occurred. We sketch out, in general abstract terms, what goes on when a potentially creative program is constructed and run, and list some of the relationships (for example, between input and output) which might contribute to a decision about creativity. Specifically, we list a number of criteria which might indicate interesting properties of a program’s behaviour, from the perspective of possible creativity. We go on to review some ways in which these criteria have been applied to actual implementations, and some possible improvements to this way of assessing creativity
    Ronald, E. & Sipper, Moshe (2001). Intelligence is not enough: On the socialization of talking machines. Minds and Machines 11 (4):567-576.   (Cited by 3 | Google | More links)
    Abstract:   Since the introduction of the imitation game by Turing in 1950 there has been much debate as to its validity in ascertaining machine intelligence. We wish herein to consider a different issue altogether: granted that a computing machine passes the Turing Test, thereby earning the label of "Turing Chatterbox", would it then be of any use (to us humans)? From the examination of scenarios, we conclude that when machines begin to participate in social transactions, unresolved issues of trust and responsibility may well overshadow any raw reasoning ability they possess
    Baker, Lynne Rudder (1981). Why computers can't act. American Philosophical Quarterly 18 (April):157-163.   (Cited by 6 | Google)
    Schmidt, C. T. A. (2005). Of robots and believing. Minds and Machines 15 (2):195-205.   (Cited by 6 | Google | More links)
    Abstract: Discussion about the application of scientific knowledge in robotics in order to build people helpers is widespread. The issue herein addressed is philosophically poignant, that of robots that are “people”. It is currently popular to speak about robots and the image of Man. Behind this lurks the dialogical mind and the questions about the significance of an artificial version of it. Without intending to defend or refute the discourse in favour of ‘recreating’ Man, a lesser familiar question is brought forth: “and what if we were capable of creating a very convincible replica of man (constructing a robot-person), what would the consequences of this be and would we be satisfied with such technology?” Thorny topic; it questions the entire knowledge foundation upon which strong AI/Robotics is positioned. The author argues for improved monitoring of technological progress and thus favours implementing weaker techniques
    Scriven, Michael (1960). The compleat robot: A prolegomena to androidology. In Sidney Hook (ed.), Dimensions of Mind. New York University Press.   (Cited by 6 | Annotation | Google)
    Scriven, Michael (1963). The supercomputer as liar. British Journal for the Philosophy of Science 13 (February):313-314.   (Google | More links)
    Selinger, Evan (2008). Collins's incorrect depiction of Dreyfus's critique of artificial intelligence. Phenomenology and the Cognitive Sciences 7 (2).   (Google)
    Abstract: Harry Collins interprets Hubert Dreyfus’s philosophy of embodiment as a criticism of all possible forms of artificial intelligence. I argue that this characterization is inaccurate and predicated upon a misunderstanding of the relevance of phenomenology for empirical scientific research
    Sloman, Aaron (1986). What sorts of machines can understand the symbols they use? Proceedings of the Aristotelian Society 61:61-80.   (Cited by 4 | Google)
    Spilsbury, R. J. (1952). Mentality in machines. Proceedings of the Aristotelian Society 26:27-60.   (Cited by 2 | Google)
    Spilsbury, R. J. (1952). Mentality in machines, part II. Proceedings of the Aristotelian Society 26:27-60.   (Google)
    Srzednicki, Jan (1962). Could machines talk? Analysis 22 (April):113-117.   (Google)
    Stahl, Bernd Carsten (2006). Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Ethics and Information Technology 8 (4):205-213.   (Google | More links)
    Abstract: There has been much debate whether computers can be responsible. This question is usually discussed in terms of personhood and personal characteristics, which a computer may or may not possess. If a computer fulfils the conditions required for agency or personhood, then it can be responsible; otherwise not. This paper suggests a different approach. An analysis of the concept of responsibility shows that it is a social construct of ascription which is only viable in certain social contexts and which serves particular social aims. If this is the main aspect of responsibility then the question whether computers can be responsible no longer hinges on the difficult problem of agency but on the possibly simpler question whether responsibility ascriptions to computers can fulfil social goals. The suggested solution to the question whether computers can be subjects of responsibility is the introduction of a new concept, called “quasi-responsibility” which will emphasise the social aim of responsibility ascription and which can be applied to computers
    Tallis, Raymond C. (2004). Why the Mind Is Not a Computer: A Pocket Lexicon of Neuromythology. Thorverton UK: Imprint Academic.   (Cited by 1 | Google | More links)
    Abstract: Taking a series of key words such as calculation, language, information and memory, Professor Tallis shows how their misuse has lured a whole generation into...
    Taube, M. (1961). Computers and Common Sense: The Myth of Thinking Machines. New York: Columbia University Press.   (Cited by 12 | Google)
    Velleman, J. David (online). Artificial agency.   (Google | More links)
    Abstract: I argue that participants in a virtual world such as "Second Life" exercise genuine agency via their avatars. Indeed, their avatars are fictional bodies with which they act in the virtual world, just as they act in the real world with their physical bodies. Hence their physical bodies can be regarded as their default avatars. I also discuss recent research into "believable" software agents, which are designed on principles borrowed from the character-based arts, especially cinematic animation as practiced by the artists at Disney and Warner Brothers Studios. I claim that these agents exemplify a kind of autonomy that should be of greater interest to philosophers than that exemplified by the generic agent modeled in current philosophical theory. The latter agent is autonomous by virtue of being governed by itself; but a believable agent appears to be governed by a self, which is the anima by which it appears to be animated. Putting these two discussions together, I suggest that philosophers of action should focus their attention on how we animate our bodies
    Wait, Eldon C. (2006). What computers could never do. In Analecta Husserliana: The Yearbook of Phenomenological Research. Dordrecht: Springer.   (Google)
    Waldrop, Mitchell (1990). Can computers think? In R. Kurzweil (ed.), The Age of Intelligent Machines. MIT Press.   (Cited by 2 | Google)
    Wallace, Rodrick (ms). New mathematical foundations for AI and alife: Are the necessary conditions for animal consciousness sufficient for the design of intelligent machines?   (Google | More links)
    Abstract: Rodney Brooks' call for 'new mathematics' to revitalize the disciplines of artificial intelligence and artificial life can be answered by adaptation of what Adams has called 'the informational turn in philosophy', aided by the novel perspectives that program gives regarding empirical studies of animal cognition and consciousness. Going backward from the necessary conditions communication theory imposes on animal cognition and consciousness to sufficient conditions for machine design is, however, an extraordinarily difficult engineering task. The most likely use of the first generations of conscious machines will be to model the various forms of psychopathology, since we have little or no understanding of how consciousness is stabilized in humans or other animals
    Weiss, Paul A. (1990). On the impossibility of artificial intelligence. Review of Metaphysics (December):335-341.   (Google)
    Whiteley, C. H. (1956). Note on the concept of mind. Analysis 16 (January):68-70.   (Google)
    Whobrey, Darren (2001). Machine mentality and the nature of the ground relation. Minds and Machines 11 (3):307-346.   (Cited by 7 | Google | More links)
    Abstract:   John Searle distinguished between weak and strong artificial intelligence (AI). This essay discusses a third alternative, mild AI, according to which a machine may be capable of possessing a species of mentality. Using James Fetzer's conception of minds as semiotic systems, the possibility of what might be called ``mild AI'' receives consideration. Fetzer argues against strong AI by contending that digital machines lack the ground relationship required of semiotic systems. In this essay, the implementational nature of semiotic processes posited by Charles S. Peirce's triadic sign relation is re-examined in terms of the underlying dispositional processes and the ontological levels they would span in an inanimate machine. This suggests that, if non-human mentality can be replicated rather than merely simulated in a digital machine, the direction to pursue appears to be that of mild AI
    Wilks, Yorick (1976). Dreyfus's disproofs. British Journal for the Philosophy of Science 27 (2).   (Cited by 1 | Google | More links)
    Wisdom, John O. (1952). Mentality in machines, part I. Proceedings of the Aristotelian Society 26:1-26.   (Google)

    6.2 Computation and Representation

    6.2a Symbols and Symbol Systems

    Boyle, C. Franklin (2001). Transduction and degree of grounding. Psycoloquy 12 (36).   (Cited by 2 | Google | More links)
    Abstract: While I agree in general with Stevan Harnad's symbol grounding proposal, I do not believe "transduction" (or "analog process") PER SE is useful in distinguishing between what might best be described as different "degrees" of grounding and, hence, for determining whether a particular system might be capable of cognition. By 'degrees of grounding' I mean whether the effects of grounding go "all the way through" or not. Why is transduction limited in this regard? Because transduction is a physical process which does not speak to the issue of representation, and, therefore, does not explain HOW the informational aspects of signals impinging on sensory surfaces become embodied as symbols or HOW those symbols subsequently cause behavior, both of which, I believe, are important to grounding and to a system's cognitive capacity. Immunity to Searle's Chinese Room (CR) argument does not ensure that a particular system is cognitive, and whether or not a particular degree of groundedness enables a system to pass the Total Turing Test (TTT) may never be determined
    Bringsjord, Selmer (online). People are infinitary symbol systems: No sensorimotor capacity necessary.   (Cited by 2 | Google | More links)
    Abstract: Stevan Harnad and I seem to be thinking about many of the same issues. Sometimes we agree, sometimes we don't; but I always find his reasoning refreshing, his positions sensible, and the problems with which he's concerned to be of central importance to cognitive science. His "Grounding Symbols in the Analog World with Neural Nets" (= GS) is no exception. And GS not only exemplifies Harnad's virtues, it also provides a springboard for diving into Harnad-Bringsjord terrain
    Clark, Andy (2006). Material symbols. Philosophical Psychology 19 (3):291-307.   (Cited by 4 | Google | More links)
    Abstract: What is the relation between the material, conventional symbol structures that we encounter in the spoken and written word, and human thought? A common assumption, that structures a wide variety of otherwise competing views, is that the way in which these material, conventional symbol-structures do their work is by being translated into some kind of content-matching inner code. One alternative to this view is the tempting but thoroughly elusive idea that we somehow think in some natural language (such as English). In the present treatment I explore a third option, which I shall call the "complementarity" view of language. According to this third view the actual symbol structures of a given language add cognitive value by complementing (without being replicated by) the more basic modes of operation and representation endemic to the biological brain. The "cognitive bonus" that language brings is, on this model, not to be cashed out either via the ultimately mysterious notion of "thinking in a given natural language" or via some process of exhaustive translation into another inner code. Instead, we should try to think in terms of a kind of coordination dynamics in which the forms and structures of a language qua material symbol system play a key and irreducible role. Understanding language as a complementary cognitive resource is, I argue, an important part of the much larger project (sometimes glossed in terms of the "extended mind") of understanding human cognition as essentially and multiply hybrid: as involving a complex interplay between internal biological resources and external non-biological resources
    Cummins, Robert E. (1996). Why there is no symbol grounding problem? In Representations, Targets, and Attitudes. MIT Press.   (Google)
    Harnad, Stevan (1992). Connecting object to symbol in modeling cognition. In A. Clark & Ronald Lutz (eds.), Connectionism in Context. Springer-Verlag.   (Cited by 61 | Annotation | Google | More links)
    Abstract: Connectionism and computationalism are currently vying for hegemony in cognitive modeling. At first glance the opposition seems incoherent, because connectionism is itself computational, but the form of computationalism that has been the prime candidate for encoding the "language of thought" has been symbolic computationalism (Dietrich 1990, Fodor 1975, Harnad 1990c; Newell 1980; Pylyshyn 1984), whereas connectionism is nonsymbolic (Fodor & Pylyshyn 1988, or, as some have hopefully dubbed it, "subsymbolic" Smolensky 1988). This paper will examine what is and is not a symbol system. A hybrid nonsymbolic/symbolic system will be sketched in which the meanings of the symbols are grounded bottom-up in the system's capacity to discriminate and identify the objects they refer to. Neural nets are one possible mechanism for learning the invariants in the analog sensory projection on which successful categorization is based. "Categorical perception" (Harnad 1987a), in which similarity space is "warped" in the service of categorization, turns out to be exhibited by both people and nets, and may mediate the constraints exerted by the analog world of objects on the formal world of symbols
    Harnad, Stevan (2002). Symbol grounding and the origin of language. In Matthias Scheutz (ed.), Computationalism: New Directions. MIT Press.   (Cited by 12 | Google | More links)
    Abstract: What language allows us to do is to "steal" categories quickly and effortlessly through hearsay instead of having to earn them the hard way, through risky and time-consuming sensorimotor "toil" (trial-and-error learning, guided by corrective feedback from the consequences of miscategorisation). To make such linguistic "theft" possible, however, some, at least, of the denoting symbols of language must first be grounded in categories that have been earned through sensorimotor toil (or else in categories that have already been "prepared" for us through Darwinian theft by the genes of our ancestors); it cannot be linguistic theft all the way down. The symbols that denote categories must be grounded in the capacity to sort, label and interact with the proximal sensorimotor projections of their distal category-members in a way that coheres systematically with their semantic interpretations, both for individual symbols, and for symbols strung together to express truth-value-bearing propositions
    Harnad, Stevan (ms). Symbol grounding is an empirical problem: Neural nets are just a candidate component.   (Cited by 27 | Google | More links)
    Abstract: "Symbol Grounding" is beginning to mean too many things to too many people. My own construal has always been simple: Cognition cannot be just computation, because computation is just the systematically interpretable manipulation of meaningless symbols, whereas the meanings of my thoughts don't depend on their interpretability or interpretation by someone else. On pain of infinite regress, then, symbol meanings must be grounded in something other than just their interpretability if they are to be candidates for what is going on in our heads. Neural nets may be one way to ground the names of concrete objects and events in the capacity to categorize them (by learning the invariants in their sensorimotor projections). These grounded elementary symbols could then be combined into symbol strings expressing propositions about more abstract categories. Grounding does not equal meaning, however, and does not solve any philosophical problems
    Harnad, Stevan (1990). The symbol grounding problem. Physica D 42:335-346.   (Cited by 1265 | Annotation | Google | More links)
    Abstract: There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the symbol grounding problem: How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) iconic representations, which are analogs of the proximal sensory projections of distal objects and events, and (2) categorical representations, which are learned and innate feature-detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) symbolic representations, grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g., An X is a Y that is Z). Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling. Such a hybrid model would not have an autonomous symbolic module, however; the symbolic functions would emerge as an intrinsically dedicated symbol system as a consequence of the bottom-up grounding of categories' names in their sensory representations. Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded
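    The hybrid architecture described in this abstract lends itself to a small illustration. The sketch below is not Harnad's own implementation; every name in it (simulate_projection, learn_category_detectors, ground_symbol) is invented here, and a simple nearest-centroid rule stands in for the neural-net invariance learner he proposes. It shows elementary symbols being grounded in categorical representations learned from simulated sensory projections, then combined into a higher-order symbol string.

```python
# Toy sketch (not Harnad's model): ground elementary symbols in learned
# categorical representations, then compose them into a symbol string.
import random
from statistics import mean

SensoryProjection = list  # an "analog" projection, here just a feature vector

def simulate_projection(kind: str) -> SensoryProjection:
    """Crudely simulated proximal projections of two distal categories."""
    base = {"horse": [1.0, 0.2], "striped": [0.3, 1.0]}[kind]
    return [x + random.gauss(0, 0.1) for x in base]

def learn_category_detectors(labelled):
    """'Categorical representations': per-category prototypes learned from
    labelled projections (a stand-in for a trained connectionist net)."""
    prototypes = {}
    for name in {lbl for _, lbl in labelled}:
        members = [proj for proj, lbl in labelled if lbl == name]
        prototypes[name] = [mean(col) for col in zip(*members)]
    return prototypes

def ground_symbol(projection, prototypes) -> str:
    """Elementary symbols are just the names the detectors assign."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda name: dist(projection, prototypes[name]))

if __name__ == "__main__":
    random.seed(0)
    training = [(simulate_projection(k), k)
                for k in ("horse", "striped") for _ in range(20)]
    detectors = learn_category_detectors(training)

    # Ground two elementary symbols from fresh "sensory" input.
    s1 = ground_symbol(simulate_projection("horse"), detectors)
    s2 = ground_symbol(simulate_projection("striped"), detectors)

    # Higher-order symbolic representation composed from grounded names
    # ("An X is a Y that is Z"), inheriting their grounding.
    print(f"A zebra is a {s1} that is {s2}")
```

    Running the sketch prints a composed proposition ("A zebra is a horse that is striped") whose constituent symbols were each assigned by the learned category detectors rather than stipulated from outside.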
    Kosslyn, Stephen M. & Hatfield, Gary (1984). Representation without symbol systems. Social Research 51:1019-1045.   (Cited by 15 | Google)
    Lumsden, David (2005). How can a symbol system come into being? Dialogue 44 (1):87-96.   (Google)
    Abstract: One holistic thesis about symbols is that a symbol cannot exist singly, but only as a part of a symbol system. There is also the plausible view that symbol systems emerge gradually in an individual, in a group, and in a species. The problem is that symbol holism makes it hard to see how a symbol system can emerge gradually, at least if we are considering the emergence of a first symbol system. The only way it seems possible is if being a symbol can be a matter of degree, which is initially problematic. This article explains how being a cognitive symbol can be a matter of degree after all. The contrary intuition arises from the way a process of interpretation forces an all-or-nothing character on symbols, leaving room for underlying material to realize symbols to different degrees in a way that Daniel Dennett’s work can help illuminate. Holism applies to symbols as interpreted, while gradualism applies to how the underlying material realizes symbols.
    MacDorman, Karl F. (1997). How to ground symbols adaptively. In S. O'Nuillain, Paul McKevitt & E. MacAogain (eds.), Two Sciences of Mind. John Benjamins.   (Cited by 1 | Google)
    Newell, Allen & Simon, Herbert A. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the Association for Computing Machinery 19:113-26.   (Cited by 758 | Annotation | Google | More links)
    Newell, Allen (1980). Physical symbol systems. Cognitive Science 4:135-83.   (Cited by 469 | Google | More links)
    Pinker, Steven (2004). Why nature & nurture won't go away. Daedalus.   (Cited by 7 | Google | More links)
    Robinson, William S. (1995). Brain symbols and computationalist explanation. Minds and Machines 5 (1):25-44.   (Cited by 4 | Google | More links)
    Abstract: Computationalist theories of mind require brain symbols, that is, neural events that represent kinds or instances of kinds. Standard models of computation require multiple inscriptions of symbols with the same representational content. The satisfaction of two conditions makes it easy to see how this requirement is met in computers, but we have no reason to think that these conditions are satisfied in the brain. Thus, if we wish to give computationalist explanations of human cognition, without committing ourselves a priori to a strong and unsupported claim in neuroscience, we must first either explain how we can provide multiple brain symbols with the same content, or explain how we can abandon standard models of computation. It is argued that both of these alternatives require us to explain the execution of complex tasks that have a cognition-like structure. Circularity or regress are thus threatened, unless noncomputationalist principles can provide the required explanations. But in the latter case, we do not know that noncomputationalist principles might not bear most of the weight of explaining cognition. Four possible types of computationalist theory are discussed; none appears to provide a promising solution to the problem. Thus, despite known difficulties in noncomputationalist investigations, we have every reason to pursue the search for noncomputationalist principles in cognitive theory
    Roitblat, Herbert L. (2001). Computational grounding. Psycoloquy 12 (58).   (Cited by 1 | Google | More links)
    Abstract: Harnad defines computation to mean the manipulation of physical symbol tokens on the basis of syntactic rules defined over the shapes of the symbols, independent of what, if anything, those symbols represent. He is, of course, free to define terms in any way that he chooses, and he is very clear about what he means by computation, but I am uncomfortable with this definition. It excludes, at least at a functional level of description, much of what a computer is actually used for, and much of what the brain/mind does. When I toss a Frisbee to the neighbor's dog, the dog does not, I think, engage in a symbolic soliloquy about the trajectory of the disc, the wind's effects on it, and formulas for including lift and the acceleration due to gravity. There are symbolic formulas for each of these relations, but the dog, insofar as I can tell, does not use any of these formulas. Nevertheless, it computes these factors in order to intercept the disc in the air. I argue that determining the solution to a differential equation is at least as much computation as is processing symbols. The disagreement is over what counts as computation; I think that Harnad and I both agree that the dog solves the trajectory problem implicitly. This definition is important because, although Harnad offers a technical definition of what he means by computation, the folk definition of the term is probably interpreted differently, and I believe this leads to trouble
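    The contrast Roitblat draws can be illustrated with a toy program: the sketch below tracks a thrown disc by stepping its equations of motion forward in time rather than by evaluating a closed-form ballistic formula, so a differential equation is solved without any symbol for lift or gravity being manipulated as such. This is only an illustrative sketch; the simulate_disc function, its crude drag model, and all parameter values are assumptions made for the example, not anything from Roitblat's commentary.

```python
# A minimal sketch: numerically integrating a disc's flight is computation
# even though no symbolic formula is manipulated. Parameters are illustrative.

def simulate_disc(vx, vy, mass=0.175, drag=0.08, dt=0.01, g=9.81):
    """Euler-integrate a thrown disc's flight until it returns to launch height."""
    x, y, t = 0.0, 0.0, 0.0
    while y >= 0.0:
        speed = (vx ** 2 + vy ** 2) ** 0.5
        # Drag decelerates the disc in proportion to its speed (a crude model).
        ax = -drag * speed * vx / mass
        ay = -g - drag * speed * vy / mass
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y, t = x + vx * dt, y + vy * dt, t + dt
    return x, t  # landing distance and flight time

if __name__ == "__main__":
    distance, airtime = simulate_disc(vx=10.0, vy=6.0)
    print(f"Disc lands about {distance:.1f} m away after {airtime:.2f} s")
```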
    Schneider, Susan (2009). LOT, CTM, and the elephant in the room. Synthese 170 (2):235-250.   (Google | More links)
    Abstract: According to the language of thought (LOT) approach and the related computational theory of mind (CTM), thinking is the processing of symbols in an inner mental language that is distinct from any public language. Herein, I explore a deep problem at the heart of the LOT/CTM program—it has yet to provide a plausible conception of a mental symbol
    Schneider, Susan (forthcoming). The nature of primitive symbols in the language of thought. Mind and Language.   (Google | More links)
    Abstract: This paper provides a theory of the nature of symbols in the language of thought (LOT). My discussion consists in three parts. In part one, I provide three arguments for the individuation of primitive symbols in terms of total computational role. The first of these arguments claims that Classicism requires that primitive symbols be typed in this manner; no other theory of typing will suffice. The second argument contends that without this manner of symbol individuation, there will be computational processes that fail to supervene on syntax, together with the rules of composition and the computational algorithms. The third argument says that cognitive science needs a natural kind that is typed by total computational role. Otherwise, either cognitive science will be incomplete, or its laws will have counterexamples. Then, part two defends this view from a criticism, offered by both Jerry Fodor and Jesse Prinz, who respond to my view with the charge that because the types themselves are individuated
    Sun, Ron (2000). Symbol grounding: A new look at an old idea. Philosophical Psychology 13 (2):149-172.   (Cited by 39 | Google | More links)
    Abstract: Symbols should be grounded, as has been argued before. But we insist that they should be grounded not only in subsymbolic activities, but also in the interaction between the agent and the world. The point is that concepts are not formed in isolation (from the world), in abstraction, or "objectively." They are formed in relation to the experience of agents, through their perceptual/motor apparatuses, in their world and linked to their goals and actions. This paper takes a detailed look at this relatively old issue, with a new perspective, aided by our work of computational cognitive model development. To further our understanding, we also go back in time to link up with earlier philosophical theories related to this issue. The result is an account that extends from computational mechanisms to philosophical abstractions
    Taddeo, Mariarosaria & Floridi, Luciano (2008). A praxical solution of the symbol grounding problem. Minds and Machines.   (Google | More links)
    Abstract: This article is the second step in our research into the Symbol Grounding Problem (SGP). In a previous work, we defined the main condition that must be satisfied by any strategy in order to provide a valid solution to the SGP, namely the zero semantic commitment condition (Z condition). We then showed that all the main strategies proposed so far fail to satisfy the Z condition, although they provide several important lessons to be followed by any new proposal. Here, we develop a new solution of the SGP. It is called praxical in order to stress the key role played by the interactions between the agents and their environment. It is based on a new theory of meaning—Action-based Semantics (AbS)—and on a new kind of artificial agents, called two-machine artificial agents (AM²). Thanks to their architecture, AM2s implement AbS, and this allows them to ground their symbols semantically and to develop some fairly advanced semantic abilities, including the development of semantically grounded communication and the elaboration of representations, while still respecting the Z condition
    Thompson, Evan (1997). Symbol grounding: A bridge from artificial life to artificial intelligence. Brain and Cognition 34 (1):48-71.   (Cited by 8 | Google | More links)
    Abstract: This paper develops a bridge from AL issues about the symbol–matter relation to AI issues about symbol-grounding by focusing on the concepts of formality and syntactic interpretability. Using the DNA triplet-amino acid specification relation as a paradigm, it is argued that syntactic properties can be grounded as high-level features of the non-syntactic interactions in a physical dynamical system. This argument provides the basis for a rebuttal of John Searle’s recent assertion that syntax is observer-relative (1990, 1992). But the argument as developed also challenges the classic symbol-processing theory of mind against which Searle is arguing, as well as the strong AL thesis that life is realizable in a purely computational medium. Finally, it provides a new line of support for the autonomous systems approach in AL and AI (Varela & Bourgine 1992a, 1992b).

    6.2b Computational Semantics

    Akman, Varol (1998). Situations and artificial intelligence. Minds and Machines 8 (4):475-477.   (Google)
    Blackburn, Patrick & Bos, Johan (2003). Computational semantics. Theoria: Revista de Teoría, Historia y Fundamentos de la Ciencia 18 (1):27-45.   (Google)
    Abstract: In this article we discuss what constitutes a good choice of semantic representation, compare different approaches to constructing semantic representations for fragments of natural language, and give an overview of recent methods for employing inference engines for natural language understanding tasks
    Blackburn, Patrick & Kohlhase, Michael (2004). Inference and computational semantics. Journal of Logic, Language and Information 13 (2).   (Google)
    Blackburn, Patrick & Bos, Johan (2005). Representation and Inference for Natural Language: A First Course in Computational Semantics. Center for the Study of Language and Information.   (Google)
    Abstract: How can computers distinguish the coherent from the unintelligible, recognize new information in a sentence, or draw inferences from a natural language passage? Computational semantics is an exciting new field that seeks answers to these questions, and this volume is the first textbook wholly devoted to this growing subdiscipline. The book explains the underlying theoretical issues and fundamental techniques for computing semantic representations for fragments of natural language. This volume will be an essential text for computer scientists, linguists, and anyone interested in the development of computational semantics
    Bogdan, Radu J. (1994). By way of means and ends. In Radu J. Bogdan (ed.), Grounds for Cognition. Lawrence Erlbaum.   (Google)
    Abstract: This chapter provides the teleological foundations for our analysis of guidance to goal. Its objective is to ground goal-directedness genetically. The basic suggestion is this. Organisms are small things, with few energy resources and puny physical means, battling a ruthless physical and biological nature. How do they manage to survive and multiply? CLEVERLY, BY ORGANIZING
    Bos, Johan (2004). Computational semantics in discourse: Underspecification, resolution, and inference. Journal of Logic, Language and Information 13 (2).   (Google)
    Abstract: In this paper I introduce a formalism for natural language understanding based on a computational implementation of Discourse Representation Theory. The formalism covers a wide variety of semantic phenomena (including scope and lexical ambiguities, anaphora and presupposition), is computationally attractive, and has a genuine inference component. It combines a well-established linguistic formalism (DRT) with advanced techniques to deal with ambiguity (underspecification), and is innovative in the use of first-order theorem proving techniques. The architecture of the formalism for natural language understanding that I advocate consists of three levels of processing: underspecification, resolution, and inference. Each of these levels has a distinct function and therefore employs a different kind of semantic representation. The mappings between these different representations define the interfaces between the levels
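    As a rough illustration of the kind of representation Bos builds on, the sketch below implements a bare-bones Discourse Representation Structure with a merge operation and a toy pronoun resolver, corresponding loosely to the resolution level of the architecture (the underspecification and inference levels are not sketched). The DRS class, the resolve_pronoun helper, and the example discourse are illustrative assumptions, not Bos's implementation.

```python
# A hedged, minimal Discourse Representation Structure (DRS) sketch.
from dataclasses import dataclass, field

@dataclass
class DRS:
    referents: list = field(default_factory=list)   # discourse referents, e.g. ["x"]
    conditions: list = field(default_factory=list)  # conditions, e.g. [("woman", "x")]

    def merge(self, other: "DRS") -> "DRS":
        """Merge two DRSs, as when a new sentence extends the discourse."""
        return DRS(self.referents + other.referents,
                   self.conditions + other.conditions)

def resolve_pronoun(drs: DRS, pronoun_condition):
    """Bind an unresolved pronoun to the most recently introduced referent."""
    pred, _ = pronoun_condition
    if not drs.referents:
        raise ValueError("no accessible referent")
    target = drs.referents[-1]
    drs.conditions.append((pred, target))
    return target

# "A woman walks. She smokes."
s1 = DRS(["x"], [("woman", "x"), ("walk", "x")])
s2 = DRS([], [])                      # "She smokes" introduces no new referent
discourse = s1.merge(s2)
resolve_pronoun(discourse, ("smoke", "she"))
print(discourse.conditions)           # [('woman', 'x'), ('walk', 'x'), ('smoke', 'x')]
```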
    Charniak, Eugene & Wilks, Yorick (eds.) (1976). Computational Semantics: An Introduction to Artificial Intelligence and Natural Language Comprehension. North-Holland.   (Google)
    Szymanik, Jakub & Zajenkowski, Marcin (2009). Comprehension of Simple Quantifiers. Empirical Evaluation of a Computational Model. Cognitive Science: A Multidisciplinary Journal 34 (3):521-532.   (Google)
    Abstract: We examine the verification of simple quantifiers in natural language from a computational model perspective. We refer to previous neuropsychological investigations of the same problem and suggest extending their experimental setting. Moreover, we give some direct empirical evidence linking computational complexity predictions with cognitive reality. In the empirical study we compare the time needed for understanding different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite automata and push-down automata is psychologically relevant. Our research improves upon the hypotheses and explanatory power of recent neuroimaging studies, as well as providing evidence for the claim that human linguistic abilities are constrained by computational complexity.
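    The automata-theoretic distinction the authors test can be made concrete with a small sketch. On the assumption that a sentence's domain is encoded as a string of 1s (objects satisfying the predicate) and 0s (objects not satisfying it), "every" can be checked by a two-state finite automaton, while a proportional quantifier such as "more than half" needs an unbounded counter of the kind a push-down automaton provides. The encoding and the two functions below are illustrative assumptions, not the authors' experimental materials.

```python
# Hedged illustration of the finite-automaton vs. push-down distinction.

def every(string):
    """Finite automaton for 'every A is B': reject on the first 0, else accept."""
    state = "accept"
    for bit in string:
        if bit == "0":
            state = "reject"      # a single counterexample is enough
    return state == "accept"

def more_than_half(string):
    """Proportional quantifier: needs an unbounded counter, not a finite automaton."""
    count = 0
    for bit in string:
        count += 1 if bit == "1" else -1
    return count > 0

print(every("1111"), every("1101"))                     # True False
print(more_than_half("11010"), more_than_half("1100"))  # True False
```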
    Dennett, Daniel C. (2003). The Baldwin Effect: A Crane, Not a Skyhook. In Bruce H. Weber & D.J. Depew (eds.), Evolution and Learning: The Baldwin Effect Reconsidered. MIT Press.   (Cited by 6 | Google | More links)
    Abstract: In 1991, I included a brief discussion of the Baldwin effect in my account of the evolution of human consciousness, thinking I was introducing to non-specialist readers a little-appreciated, but no longer controversial, wrinkle in orthodox neo-Darwinism. I had thought that Hinton and Nowlan (1987) and Maynard Smith (1987) had shown clearly and succinctly how and why it worked, and restored the neglected concept to grace. Here is how I put it then
    Fodor, Jerry A. (1979). In reply to Philip Johnson-Laird's What's Wrong with Grandma's Guide to Procedural Semantics: A Reply to Jerry Fodor. Cognition 7 (March):93-95.   (Google)
    Fodor, Jerry A. (1978). Tom Swift and his procedural grandmother. Cognition 6 (September):229-47.   (Cited by 24 | Annotation | Google)
    Hadley, Robert F. (1990). Truth conditions and procedural semantics. In Philip P. Hanson (ed.), Information, Language and Cognition. University of British Columbia Press.   (Cited by 2 | Google)
    Harnad, Stevan (2002). Darwin, Skinner, Turing and the mind. Magyar Pszichologiai Szemle 57 (4):521-528.   (Google | More links)
    Abstract: Darwin differs from Newton and Einstein in that his ideas do not require a complicated or deep mind to understand them, and perhaps did not even require such a mind in order to generate them in the first place. It can be explained to any school-child (as Newtonian mechanics and Einsteinian relativity cannot) that living creatures are just Darwinian survival/reproduction machines. They have whatever structure they have through a combination of chance and its consequences: Chance causes changes in the genetic blueprint from which organisms' bodies are built, and if those changes are more successful in helping their owners survive and reproduce than their predecessors or their rivals, then, by definition, those changes are reproduced, and thereby become more prevalent in succeeding generations: Whatever survives/reproduces better survives/reproduces better. That is the tautological force that shaped us
    Johnson-Laird, Philip N. (1977). Procedural semantics. Cognition 5:189-214.   (Cited by 37 | Google)
    Johnson-Laird, Philip N. (1978). What's wrong with grandma's guide to procedural semantics: A reply to Jerry Fodor. Cognition 9 (September):249-61.   (Cited by 1 | Google)
    McDermott, Drew (1978). Tarskian semantics, or no notation without denotation. Cognitive Science 2:277-82.   (Cited by 33 | Annotation | Google | More links)
    Papineau, David (2006). The cultural origins of cognitive adaptations. Royal Institute of Philosophy Supplement.   (Google | More links)
    Abstract: According to an influential view in contemporary cognitive science, many human cognitive capacities are innate. The primary support for this view comes from ‘poverty of stimulus’ arguments. In general outline, such arguments contrast the meagre informational input to cognitive development with its rich informational output. Consider the ease with which humans acquire languages, become facile at attributing psychological states (‘folk psychology’), gain knowledge of biological kinds (‘folk biology’), or come to understand basic physical processes (‘folk physics’). In all these cases, the evidence available to a growing child is far too thin and noisy for it to be plausible that the underlying principles involved are derived from general learning mechanisms. The only alternative hypothesis seems to be that the child’s grasp of these principles is innate. (Cf. Laurence and Margolis, 2001.)
    Perlis, Donald R. (1991). Putting one's foot in one's head -- part 1: Why. Noûs 25 (September):435-55.   (Cited by 12 | Google | More links)
    Perlis, Donald R. (1994). Putting one's foot in one's head -- part 2: How. In Eric Dietrich (ed.), Thinking Computers and Virtual Persons. Academic Press.   (Google)
    Rapaport, William J. (1988). Syntactic semantics: Foundations of computational natural language understanding. In James H. Fetzer (ed.), Aspects of AI. Kluwer.   (Cited by 44 | Google)
    Rapaport, William J. (1995). Understanding understanding: Syntactic semantics and computational cognition. Philosophical Perspectives 9:49-88.   (Cited by 22 | Google | More links)
    Smith, B. (1988). On the semantics of clocks. In James H. Fetzer (ed.), Aspects of AI. Kluwer.   (Cited by 7 | Google)
    Smith, B. (1987). The correspondence continuum. CSLI 87.   (Cited by 34 | Google)
    Szymanik, Jakub & Zajenkowski, Marcin (2009). Understanding Quantifiers in Language. In N. A. Taatgen & H. van Rijn (eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society.   (Google)
    Abstract: We compare the time needed for understanding different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite automata and push-down automata is psychologically relevant. Our research improves upon the hypotheses and explanatory power of recent neuroimaging studies, as well as providing evidence for the claim that human linguistic abilities are constrained by computational complexity.
    Tin, Erkan & Akman, Varol (1994). Computational situation theory. ACM SIGART Bulletin 5 (4):4-17.   (Cited by 15 | Google | More links)
    Abstract: Situation theory has been developed over the last decade and various versions of the theory have been applied to a number of linguistic issues. However, not much work has been done in regard to its computational aspects. In this paper, we review the existing approaches towards 'computational situation theory' with considerable emphasis on our own research
    Wilks, Y. (1990). Form and content in semantics. Synthese 82 (3):329-51.   (Cited by 10 | Annotation | Google | More links)
    Abstract:   This paper continues a strain of intellectual complaint against the presumptions of certain kinds of formal semantics (the qualification is important) and their bad effects on those areas of artificial intelligence concerned with machine understanding of human language. After some discussion of the use of the term epistemology in artificial intelligence, the paper takes as a case study the various positions held by McDermott on these issues and concludes, reluctantly, that, although he has reversed himself on the issue, there was no time at which he was right
    Wilks, Y. (1982). Some thoughts on procedural semantics. In W. Lehnert (ed.), Strategies for Natural Language Processing. Lawrence Erlbaum.   (Cited by 12 | Google)
    Winograd, Terry (1985). Moving the semantic fulcrum. Linguistics and Philosophy 8 (February):91-104.   (Cited by 16 | Google | More links)
    Woods, W. (1986). Problems in procedural semantics. In Zenon W. Pylyshyn & W. Demopolous (eds.), Meaning and Cognitive Structure. Ablex.   (Cited by 2 | Annotation | Google)
    Woods, W. (1981). Procedural semantics as a theory of meaning. In A. Joshi, Bonnie L. Webber & Ivan A. Sag (eds.), Elements of Discourse Understanding. Cambridge University Press.   (Cited by 33 | Google)

    6.2c Implicit/Explicit Rules and Representations

    Bechtel, William P. (forthcoming). Explanation: Mechanism, modularity, and situated cognition. In P. Robbins & M. Aydede (eds.), Cambridge Handbook of Situated Cognition. Cambridge University Press.   (Google)
    Abstract: The situated cognition movement has emerged in recent decades (although it has roots in psychologists working earlier in the 20th century including Vygotsky, Bartlett, and Dewey) largely in reaction to an approach to explaining cognition that tended to ignore the context in which cognitive activities typically occur. Fodor’s (1980) account of the research strategy of methodological solipsism, according to which only representational states within the mind are viewed as playing causal roles in producing cognitive activity, is an extreme characterization of this approach. (As Keith Gunderson memorably commented when Fodor first presented this characterization, it amounts to reversing behaviorism by construing the mind as a white box in a black world). Critics as far back as the 1970s and 1980s objected to many experimental paradigms in cognitive psychology as not being ecologically valid; that is, they maintained that the findings only applied to the artificial circumstances created in the laboratory and did not generalize to real world settings (Neisser, 1976; 1987). The situated cognition movement, however, goes much further than demanding ecologically valid experiments—it insists that an agent’s cognitive activities are inherently embedded and supported by dynamic interactions with the agent’s body and features of its environment
    Clark, Andy (1991). In defense of explicit rules. In William Ramsey, Stephen P. Stich & D. Rumelhart (eds.), Philosophy and Connectionist Theory. Lawrence Erlbaum.   (Cited by 11 | Annotation | Google)
    Cummins, Robert E. (1986). Inexplicit information. In Myles Brand & Robert M. Harnish (eds.), The Representation of Knowledge and Belief. University of Arizona Press.   (Cited by 13 | Annotation | Google)
    Davies, Martin (1995). Two notions of implicit rules. Philosophical Perspectives 9:153-83.   (Cited by 14 | Google | More links)
    Dennett, Daniel C. (1993). Review of F. Varela, E. Thompson and E. Rosch, The Embodied Mind. American Journal of Psychology 106:121-126.   (Google | More links)
    Abstract: Cognitive science, as an interdisciplinary school of thought, may have recently moved beyond the bandwagon stage onto the throne of orthodoxy, but it does not make a favorable first impression on many people. Familiar reactions on first encounters range from revulsion to condescending dismissal--very few faces in the crowd light up with the sense of "Aha! So that's how the mind works! Of course!" Cognitive science leaves something out, it seems; moreover, what it apparently leaves out is important, even precious. Boiled down to its essence, cognitive science proclaims that in one way or another our minds are computers, and this seems so mechanistic, reductionistic, intellectualistic, dry, philistine, unbiological. It leaves out emotion, or what philosophers call qualia, or value, or mattering, or . . . the soul. It doesn't explain what minds are so much as attempt to explain minds away
    Fulda, Joseph S. (2000). The logic of “improper cross”. Artificial Intelligence and Law 8 (4):337-341.   (Google)
    G., Nagarjuna (2009). Collaborative creation of teaching-learning sequences and an Atlas of knowledge. Mathematics Teaching-Research Journal Online 3 (N3):23-40.   (Google | More links)
    Abstract: Our focus in the article is to introduce a simple methodology of generating teaching-learning sequences using the semantic network technique, followed by the emergent properties of such a network and their implications for the teaching-learning process (didactics), with marginal notes on epistemological implications. A collaborative portal for teachers, which publishes a network of prerequisites for teaching/learning any concept or an activity, is introduced. The article ends with an appeal to the global community to contribute prerequisites of any subject to complete the global roadmap for an atlas being built on similar lines as Wikipedia. The portal is launched and waiting for community participation at http://www.gnowledge.org.
    Hadley, Robert F. (1993). Connectionism, explicit rules, and symbolic manipulation. Minds and Machines 3 (2):183-200.   (Cited by 13 | Google | More links)
    Abstract: At present, the prevailing Connectionist methodology for representing rules is to implicitly embody rules in neurally-wired networks. That is, the methodology adopts the stance that rules must either be hard-wired or trained into neural structures, rather than represented via explicit symbolic structures. Even recent attempts to implement production systems within connectionist networks have assumed that condition-action rules (or rule schema) are to be embodied in the structure of individual networks. Such networks must be grown or trained over a significant span of time. However, arguments are presented herein that humans sometimes follow rules which are very rapidly assigned explicit internal representations, and that humans possess general mechanisms capable of interpreting and following such rules. In particular, arguments are presented that the speed with which humans are able to follow rules of novel structure demonstrates the existence of general-purpose rule following mechanisms. It is further argued that the existence of general-purpose rule following mechanisms strongly indicates that explicit rule following is not an isolated phenomenon, but may well be a common and important aspect of cognition. The relationship of the foregoing conclusions to Smolensky's view of explicit rule following is also explored. The arguments presented here are pragmatic in nature, and are contrasted with the kind of arguments developed by Fodor and Pylyshyn in their recent, influential paper
    Hadley, Robert F. (1990). Connectionism, rule-following, and symbolic manipulation. Proc AAAI 3 (2):183-200.   (Cited by 10 | Annotation | Google)
    Hadley, Robert F. (1995). The 'explicit-implicit' distinction. Minds and Machines 5 (2):219-42.   (Cited by 25 | Google | More links)
    Abstract: Much of traditional AI exemplifies the explicit representation paradigm, and during the late 1980s a heated debate arose between the classical and connectionist camps as to whether beliefs and rules receive an explicit or implicit representation in human cognition. In a recent paper, Kirsh (1990) questions the coherence of the fundamental distinction underlying this debate. He argues that our basic intuitions concerning explicit and implicit representations are not only confused but inconsistent. Ultimately, Kirsh proposes a new formulation of the distinction, based upon the criterion of constant time processing. The present paper examines Kirsh's claims. It is argued that Kirsh fails to demonstrate that our usage of explicit and implicit is seriously confused or inconsistent. Furthermore, it is argued that Kirsh's new formulation of the explicit-implicit distinction is excessively stringent, in that it banishes virtually all sentences of natural language from the realm of explicit representation. By contrast, the present paper proposes definitions for explicit and implicit which preserve most of our strong intuitions concerning straightforward uses of these terms. It is also argued that the distinction delineated here sustains the meaningfulness of the abovementioned debate between classicists and connectionists
    Kirsh, David (1990). When is information explicitly represented? In Philip P. Hanson (ed.), Information, Language and Cognition. University of British Columbia Press.   (Cited by 62 | Google)
    Martínez, Fernando & Ezquerro Martínez, Jesús (1998). Explicitness with psychological ground. Minds and Machines 8 (3):353-374.   (Cited by 1 | Google | More links)
    Abstract: Explicitness has usually been approached from two points of view, labelled by Kirsh the structural and the process view, which hold opposite assumptions about when information is explicit. In this paper, we offer an intermediate view that retains intuitions from both of them. We establish three conditions for explicit information that preserve a structural requirement, and a notion of explicitness as a continuous dimension. A problem with the former accounts was their disconnection from psychological work on the issue. We review studies by Karmiloff-Smith, and Shanks and St. John, to show that the proposed conditions have psychological grounds. Finally, we examine the problem of explicit rules in connectionist systems in the light of our framework
    Shapiro, Lawrence A. (ms). The embodied cognition research program.   (Cited by 1 | Google | More links)
    Abstract: Unifying traditional cognitive science is the idea that thinking is a process of symbol manipulation, where symbols lead both a syntactic and a semantic life. The syntax of a symbol comprises those properties in virtue of which the symbol undergoes rule-dictated transformations. The semantics of a symbol constitute the symbol's meaning or representational content. Thought consists in the syntactically determined manipulation of symbols, but in a way that respects their semantics. Thus, for instance, a calculating computer sensitive only to the shape of symbols might produce the symbol '5' in response to the inputs '2', '+', and '3'. As far as the computer is concerned, these symbols have no meaning, but because of its program it will produce outputs that, to the user, "make sense" given the meanings the user attributes to the symbols
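    The calculator example in the abstract can be made concrete in a few lines: the sketch below maps the shapes '2', '+', '3' to the shape '5' by pure pattern matching on token shapes, so any meaning the output has lives entirely in the user. The rule table and function are an illustrative toy of the general point, not anything from Shapiro's paper.

```python
# A hedged sketch of purely shape-based symbol manipulation.

# Purely syntactic rule table: triples of symbol shapes mapped to an output shape.
RULES = {
    ("2", "+", "3"): "5",
    ("3", "+", "3"): "6",
    ("2", "+", "2"): "4",
}

def shape_calculator(tokens):
    """Return an output symbol by pattern-matching on token shapes alone."""
    return RULES[tuple(tokens)]

print(shape_calculator(["2", "+", "3"]))  # '5' -- meaningful only to the user
```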
    Skokowski, Paul G. (1994). Can computers carry content "inexplicitly"? Minds and Machines 4 (3):333-44.   (Cited by 2 | Annotation | Google | More links)
    Abstract: I examine whether it is possible for content relevant to a computer's behavior to be carried without an explicit internal representation. I consider three approaches. First, an example of a chess playing computer carrying emergent content is offered from Dennett. Next I examine Cummins' response to this example. Cummins says Dennett's computer executes a rule which is inexplicitly represented. Cummins describes a process wherein a computer interprets explicit rules in its program, implements them to form a chess-playing device, then this device executes the rules in a way that exhibits them inexplicitly. Though this approach is intriguing, I argue that the chess-playing device cannot exist as imagined. The processes of interpretation and implementation produce explicit representations of the content claimed to be inexplicit. Finally, the Chinese Room argument is examined and shown not to save the notion of inexplicit information. This means that the strategy of attributing inexplicit content to a computer which is executing a rule fails
    Slezak, Peter (1999). Situated cognition. Perspectives on Cognitive Science.   (Cited by 22 | Google)
    Abstract: The self-advertising, at least, suggests that 'situated cognition' involves the most fundamental conceptual re-organization in AI and cognitive science, even appearing to deny that cognition is to be explained by mental representations. In their defence of the orthodox symbolic representational theory, A. Vera and H. Simon (1993) have rebutted many of these claims, but they overlook an important reading of situated arguments which may, after all, involve a revolutionary insight. I show that the whole debate turns on puzzles familiar from the history of philosophy and psychology and these may serve to clarify the current disputes
    Sutton, John (2000). The body and the brain. In S. Gaukroger, J. Schuster & J. Sutton (eds.), Descartes' Natural Philosophy. Routledge.   (Google)
    Abstract: Does self-knowledge help? A rationalist, presumably, thinks that it does: both that self-knowledge is possible and that, if gained through appropriate channels, it is desirable. Descartes notoriously claimed that, with appropriate methods of enquiry, each of his readers could become an expert on herself or himself. As well as the direct, first-person knowledge of self to which we are led in the Meditationes, we can also seek knowledge of our own bodies, and of the union of our minds and our bodies: the latter forms of self-knowledge are inevitably imperfect, but are no less important in guiding our conduct in the search after truth
    van Gelder, Tim (1998). Review: Being There: Body and World Together Again, by Andy Clark. Philosophical Review 107 (4):647-650.   (Google)
    Abstract: Are any nonhuman animals rational? What issues are we raising when we ask this question? Are there different kinds or levels of rationality, some of which fall short of full human rationality? Should any behaviour by nonhuman animals be regarded as rational? What kinds of tasks can animals successfully perform? What kinds of processes control their performance at these tasks, and do they count as rational processes? Is it useful or theoretically justified to raise questions about the rationality of animals at all? Should we be interested in whether they are rational? Why does it matter?

    6.2d AI without Representation?

    Andrews, Kristin (web). Critter psychology: On the possibility of nonhuman animal folk psychology. In Daniel D. Hutto & Matthew Ratcliffe (eds.), Folk Psychology Re-Assessed. Kluwer/Springer Press.   (Google | More links)
    Abstract: Humans have a folk psychology, without question. Paul Churchland used the term to describe "our commonsense conception of psychological phenomena" (Churchland 1981, p. 67), whatever that may be. When we ask the question whether animals have their own folk psychology, we're asking whether any other species has a commonsense conception of psychological phenomena as well. Different versions of this question have been discussed over the past 25 years, but no clear answer has emerged. Perhaps one reason for this lack of progress is that we don't clearly understand the question. In asking whether animals have folk psychology, I hope to help clarify the concept of folk psychology itself, and in the process, to gain a greater understanding of the role of belief and desire attribution in human social interaction
    Bechtel, William P. (1996). Yet another revolution? Defusing the dynamical system theorists' attack on mental representations. Presidential Address to Society of Philosophy and Psychology.   (Cited by 1 | Google)
    Brooks, Rodney (1991). Intelligence without representation. Artificial Intelligence 47:139-159.   (Cited by 2501 | Annotation | Google | More links)
    Abstract: Artificial intelligence research has foundered on the issue of representation. When intelligence is approached in an incremental manner, with strict reliance on interfacing to the real world through perception and action, reliance on representation disappears. In this paper we outline our approach to incrementally building complete intelligent Creatures. The fundamental decomposition of the intelligent system is not into independent information processing units which must interface with each other via representations. Instead, the intelligent system is decomposed into independent and parallel activity producers which all interface directly to the world through perception and action, rather than interface to each other particularly much. The notions of central and peripheral systems evaporate; everything is both central and peripheral. Based on these principles we have built a very successful series of mobile robots which operate without supervision as Creatures in standard office environments
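    The flavour of the architecture Brooks describes can be suggested with a minimal sketch: each behaviour maps current sensor readings directly to an action, with no shared world model, and a fixed priority list stands in crudely for the suppression and inhibition links between layers. The behaviours, sensor fields, and priorities below are illustrative assumptions, not code from Brooks's robots.

```python
# A hedged, minimal subsumption-style arbitration sketch.

def avoid(sensors):
    """Lowest layer: turn away from nearby obstacles."""
    if sensors["obstacle_distance"] < 0.3:
        return "turn_left"
    return None

def seek_charger(sensors):
    """Higher layer: head for the charger when the battery is low."""
    if sensors["battery"] < 0.2:
        return "head_to_charger"
    return None

def wander(sensors):
    """Default layer: move about when nothing else demands attention."""
    return "go_forward"

# Fixed priority is a crude stand-in for suppression/inhibition wiring:
# the first behaviour that produces an action wins.
LAYERS = [avoid, seek_charger, wander]

def act(sensors):
    for behaviour in LAYERS:
        action = behaviour(sensors)
        if action is not None:
            return action

print(act({"obstacle_distance": 0.2, "battery": 0.9}))  # turn_left
print(act({"obstacle_distance": 2.0, "battery": 0.1}))  # head_to_charger
```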
    Clark, Andy & Toribio, Josefa (1994). Doing without representing. Synthese 101 (3):401-31.   (Cited by 97 | Annotation | Google | More links)
    Abstract:   Connectionism and classicism, it generally appears, have at least this much in common: both place some notion of internal representation at the heart of a scientific study of mind. In recent years, however, a much more radical view has gained increasing popularity. This view calls into question the commitment to internal representation itself. More strikingly still, this new wave of anti-representationalism is rooted not in armchair theorizing but in practical attempts to model and understand intelligent, adaptive behavior. In this paper we first present, and then critically assess, a variety of recent anti-representationalist treatments. We suggest that so far, at least, the sceptical rhetoric outpaces both evidence and argument. Some probable causes of this premature scepticism are isolated. Nonetheless, the anti-representationalist challenge is shown to be both important and progressive insofar as it forces us to see beyond the bare representational/non-representational dichotomy and to recognize instead a rich continuum of degrees and types of representationality
    Dennett, Daniel C. (1989). Cognitive ethology. In Goals, No-Goals and Own Goals. Unwin Hyman.   (Cited by 15 | Google)
    Abstract: The field of Artificial Intelligence has produced so many new concepts--or at least vivid and more structured versions of old concepts--that it would be surprising if none of them turned out to be of value to students of animal behavior. Which will be most valuable? I will resist the temptation to engage in either prophecy or salesmanship; instead of attempting to answer the question: "How might Artificial Intelligence inform the study of animal behavior?" I will concentrate on the obverse: "How might the study of animal behavior inform research in Artificial Intelligence?"
    Millikan, Ruth G. (online). On reading signs.   (Cited by 1 | Google | More links)
    Abstract: On Reading Signs: Some Differences between Us and The Others. If there are certain kinds of signs that an animal cannot learn to interpret, that might be for any of a number of reasons. It might be, first, because the animal cannot discriminate the signs from one another. For example, although human babies learn to discriminate human speech sounds according to the phonological structures of their native languages very easily, it may be that few if any other animals are capable of fully grasping the phonological structures of human languages. If an animal cannot learn to interpret certain signs it might be, second, because the decoding is too difficult for it. It could be, for example, that some animals are incapable of decoding signs that exhibit syntactic embedding, or signs that are spread out over time as opposed to over space. Problems of these various kinds might be solved by using another sign system, say, gestures rather than noises, or visual icons laid out in spatial order, or by separating out embedded propositions and presenting each separately. But a more interesting reason that an animal might be incapable of understanding a sign would be that it lacked mental representations of the necessary kind. It might be incapable of representing mentally what the sign conveys. When discussing what signs animals can understand or
    Keijzer, Fred A. (1998). Doing without representations which specify what to do. Philosophical Psychology 11 (3):269-302.   (Cited by 15 | Google)
    Abstract: A discussion is going on in cognitive science about the use of representations to explain how intelligent behavior is generated. In the traditional view, an organism is thought to incorporate representations. These provide an internal model that is used by the organism to instruct the motor apparatus so that the adaptive and anticipatory characteristics of behavior come about. So-called interactionists claim that this representational specification of behavior raises more problems than it solves. In their view, the notion of internal representational models is to be dispensed with. Instead, behavior is to be explained as the intricate interaction between an embodied organism and the specific make up of an environment. The problem with a non-representational interactive account is that it has severe difficulties with anticipatory, future oriented behavior. The present paper extends the interactionist conceptual framework by drawing on ideas derived from the study of morphogenesis. This extended interactionist framework is based on an analysis of anticipatory behavior as a process which involves multiple spatio-temporal scales of neural, bodily and environmental dynamics. This extended conceptual framework provides the outlines for an explanation of anticipatory behavior without involving a representational specification of future goal states
    Kirsh, David (1991). Today the earwig, tomorrow man? Artificial Intelligence 47:161-184.   (Cited by 111 | Google | More links)
    Abstract: A startling amount of intelligent activity can be controlled without reasoning or thought. By tuning the perceptual system to task relevant properties a creature can cope with relatively sophisticated environments without concepts. There is a limit, however, to how far a creature without concepts can go. Rod Brooks, like many ecologically oriented scientists, argues that the vast majority of intelligent behaviour is concept-free. To evaluate this position I consider what special benefits accrue to concept-using creatures. Concepts are either necessary for certain types of perception, learning, and control, or they make those processes computationally simpler. Once a creature has concepts its capacities are vastly multiplied.
    Müller, Vincent C. (2007). Is there a future for AI without representation? Minds and Machines 17 (1).   (Google | More links)
    Abstract: This paper investigates the prospects of Rodney Brooks' proposal for AI without representation. It turns out that the supposedly characteristic features of "new AI" (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: "new AI" is just like old AI. Brooks' proposal boils down to the architectural rejection of central control in intelligent agents—which, however, turns out to be crucial. Some more recent cognitive science suggests that we might do well to dispose of the image of intelligent agents as central representation processors. If this paradigm shift is achieved, Brooks' proposal for cognition without representation appears promising for full-blown intelligent agents—though not for conscious agents
    van Gelder, Tim (1995). What might cognition be if not computation? Journal of Philosophy 92 (7):345-81.   (Cited by 266 | Annotation | Google | More links)
    Wallis, Peter (2004). Intention without representation. Philosophical Psychology 17 (2):209-223.   (Cited by 3 | Google | More links)
    Abstract: A mechanism for planning ahead would appear to be essential to any creature with more than insect-level intelligence. In this paper it is shown how planning, using full means-ends analysis, can be had while avoiding the so-called symbol grounding problem. The key role of knowledge representation in intelligence has been acknowledged since at least the Enlightenment, but the advent of the computer has made it possible to explore the limits of alternate schemes, and to explore the nature of our everyday understanding of the world around us. In particular, artificial intelligence (AI) and robotics have forced a close examination, by people other than philosophers, of what it means to say for instance that "snow is white." One interpretation of the "new AI" is that it is questioning the need for representation altogether. Brooks and others have shown how a range of intelligent behaviors can be had without representation, and this paper goes one step further, showing how intending to do things can be achieved without symbolic representation. The paper gives a concrete example of a mechanism in terms of robots that play soccer. It describes a belief, desire and intention (BDI) architecture that plans in terms of activities. The result is a situated agent that plans to do things with no more ontological commitment than the reactive systems Brooks described in his seminal paper, "Intelligence without Representation."
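    A hedged sketch of the general shape of such an architecture: beliefs are a few sensed flags, each desire is paired with an activity and a guard over those beliefs, and deliberation adopts the first applicable activity as the current intention. The soccer activities and the simple selection rule below are assumptions made for illustration, not Wallis's system.

```python
# A minimal BDI-style loop that "plans" in terms of activities, not world models.

BELIEFS = {"have_ball": False, "ball_near": True, "goal_near": False}

# Each desire is paired with the activity that pursues it and a guard on beliefs.
ACTIVITIES = [
    ("score",     "kick_at_goal", lambda b: b["have_ball"] and b["goal_near"]),
    ("keep_ball", "dribble",      lambda b: b["have_ball"]),
    ("get_ball",  "chase_ball",   lambda b: b["ball_near"]),
    ("find_ball", "scan_field",   lambda b: True),
]

def deliberate(beliefs):
    """Adopt the first applicable desire as the current intention (an activity)."""
    for desire, activity, applicable in ACTIVITIES:
        if applicable(beliefs):
            return desire, activity

print(deliberate(BELIEFS))   # ('get_ball', 'chase_ball')
```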
    Webber, Jonathan (2002). Doing without representation: Coping with Dreyfus. Philosophical Explorations 5 (1):82-88.   (Google | More links)
    Abstract: Hubert Dreyfus argues that the traditional and currently dominant conception of an action, as an event initiated or governed by a mental representation of a possible state of affairs that the agent is trying to realise, is inadequate. If Dreyfus is right, then we need a new conception of action. I argue, however, that the considerations Dreyfus adduces show only that an action need not be initiated or governed by a conceptual representation; since a representation need not be conceptually structured, they do not show that we need a conception of action that does not involve representation

    6.2e Computation and Representation, Misc

    Akman, Varol & ten Hagen, Paul J. W. (1989). The power of physical representations. AI Magazine 10 (3):49-65.   (Cited by 10 | Google | More links)
    Bailey, Andrew R. (1994). Representations versus regularities: Does computation require representation? Eidos 12 (1):47-58.   (Google)
    Chalmers, David J.; French, Robert M. & Hofstadter, Douglas R. (1992). High-level perception, representation, and analogy: A critique of artificial intelligence methodology. Journal of Experimental and Theoretical Artificial Intelligence 4 (3):185 - 211.   (Cited by 123 | Google | More links)
    Abstract: High-level perception—the process of making sense of complex data at an abstract, conceptual level—is fundamental to human cognition. Through high-level perception, chaotic environmental stimuli are organized into the mental representations that are used throughout cognitive processing. Much work in traditional artificial intelligence has ignored the process of high-level perception, by starting with hand-coded representations. In this paper, we argue that this dismissal of perceptual processes leads to distorted models of human cognition. We examine some existing artificial-intelligence models—notably BACON, a model of scientific discovery, and the Structure-Mapping Engine, a model of analogical thought—and argue that these are flawed precisely because they downplay the role of high-level perception. Further, we argue that perceptual processes cannot be separated from other cognitive processes even in principle, and therefore that traditional artificial-intelligence models cannot be defended by supposing the existence of a "representation module" that supplies representations ready-made. Finally, we describe a model of high-level perception and analogical thought in which perceptual processing is integrated with analogical mapping, leading to the flexible build-up of representations appropriate to a given context
    Dartnall, Terry (2000). Reverse psychologism, cognition and content. Minds and Machines 10 (1):31-52.   (Cited by 32 | Google | More links)
    Abstract: The confusion between cognitive states and the content of cognitive states that gives rise to psychologism also gives rise to reverse psychologism. Weak reverse psychologism says that we can study cognitive states by studying content – for instance, that we can study the mind by studying linguistics or logic. This attitude is endemic in cognitive science and linguistic theory. Strong reverse psychologism says that we can generate cognitive states by giving computers representations that express the content of cognitive states and that play a role in causing appropriate behaviour. This gives us strong representational, classical AI (REPSCAI), and I argue that it cannot succeed. This is not, as Searle claims in his Chinese Room Argument, because syntactic manipulation cannot generate content. Syntactic manipulation can generate content, and this is abundantly clear in the Chinese Room scenario. REPSCAI cannot succeed because inner content is not sufficient for cognition, even when the representations that carry the content play a role in generating appropriate behaviour
    Dietrich, Eric (1988). Computers, intentionality, and the new dualism. Computers and Philosophy Newsletter.   (Google)
    Dreyfus, Hubert L. (1979). A framework for misrepresenting knowledge. In Martin Ringle (ed.), Philosophical Perspectives in Artificial Intelligence. Humanities Press.   (Cited by 7 | Annotation | Google)
    Echavarria, Ricardo Restrepo (2009). Russell's structuralism and the supposed death of computational cognitive science. Minds and Machines 19 (2).   (Google)
    Abstract: John Searle believes that computational properties are purely formal and that consequently, computational properties are neither intrinsic, empirically discoverable, nor causal; and therefore, that an entity's having certain computational properties could not be sufficient for its having certain mental properties. To make his case, Searle employs an argument that had been used before him by Max Newman against Russell's structuralism, one that Russell himself considered fatal to his own position. This paper formulates a not-so-explored version of Searle's problem with computational cognitive science, and refutes it by suggesting how our understanding of computation is far from implying the structuralism Searle vitally attributes to it. On the way, I formulate and argue for a thesis that strengthens Newman's case against Russell's structuralism, and thus raises the apparent risk for computational cognitive science too
    Fields, Christopher A. (1994). Real machines and virtual intentionality: An experimentalist takes on the problem of representational content. In Eric Dietrich (ed.), Thinking Computers and Virtual Persons. Academic Press.   (Google)
    Franklin, James, The representation of context: Ideas from artificial intelligence.   (Google)
    Abstract: To move beyond vague platitudes about the importance of context in legal reasoning or natural language understanding, one must take account of ideas from artificial intelligence on how to represent context formally. Work on topics like prior probabilities, the theory-ladenness of observation, encyclopedic knowledge for disambiguation in language translation and pathology test diagnosis has produced a body of knowledge on how to represent context in artificial intelligence applications
    Fulda, Joseph S. (2000). The logic of “improper cross”. Artificial Intelligence and Law 8 (4):337-341.   (Google)
    Garzon, Francisco Calvo & Rodriguez, Angel Garcia (2009). Where is cognitive science heading? Minds and Machines.   (Google)
    Abstract: According to Ramsey (Representation reconsidered, Cambridge University Press, New York, 2007), only classical cognitive science, with the related notions of input–output and structural representations, meets the job description challenge (the challenge to show that a certain structure or process serves a representational role at the subpersonal level). By contrast, connectionism and other nonclassical models, insofar as they exploit receptor and tacit notions of representation, are not genuinely representational. As a result, Ramsey submits, cognitive science is taking a U-turn from representationalism back to behaviourism, thus presupposing that (1) the emergence of cognitivism capitalized on the concept of representation, and that (2) the materialization of nonclassical cognitive science involves a return to some form of pre-cognitivist behaviourism. We argue against both (1) and (2), by questioning Ramsey’s divide between classical and representational, versus nonclassical and nonrepresentational, cognitive models. For, firstly, connectionist and other nonclassical accounts have the resources to exploit the notion of a structural isomorphism, like classical accounts (the beefing-up strategy); and, secondly, insofar as input–output and structural representations refer to a cognitive agent, classical explanations fail to meet the job description challenge (the deflationary strategy). Both strategies work independently of each other: if the deflationary strategy succeeds, contra (1), cognitivism has failed to capitalize on the relevant concept of representation; if the beefing-up strategy is sound, contra (2), the return to a pre-cognitivist era cancels out.
    Guvenir, Halil A. & Akman, Varol (1992). Problem representation for refinement. Minds and Machines 2 (3):267-282.   (Google | More links)
    Abstract:   In this paper we attempt to develop a problem representation technique which enables the decomposition of a problem into subproblems such that their solution in sequence constitutes a strategy for solving the problem. An important issue here is that the subproblems generated should be easier than the main problem. We propose to represent a set of problem states by a statement which is true for all the members of the set. A statement itself is just a set of atomic statements which are binary predicates on state variables. Then, the statement representing the set of goal states can be partitioned into its subsets each of which becomes a subgoal of the resulting strategy. The techniques involved in partitioning a goal into its subgoals are presented with examples
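    The representation described in the abstract can be illustrated with a small sketch: a goal statement is a set of atomic facts about state variables, and a strategy is read off by partitioning it into subgoal statements to be achieved in sequence. The blocks-world facts and the bottom-up ordering below are illustrative assumptions, not the authors' refinement technique.

```python
# A hedged sketch: a goal "statement" as a set of atomic facts, partitioned
# into an ordered sequence of subgoal statements.

GOAL = {("on", ("A", "B")), ("on", ("B", "C")), ("on", ("C", "table"))}

def partition_goal(goal, ordering):
    """Split a goal statement into an ordered list of subgoal statements."""
    subgoals = []
    achieved = set()
    for key in ordering:
        layer = {fact for fact in goal if fact[1][0] == key} - achieved
        achieved |= layer
        subgoals.append(achieved.copy())   # each subgoal includes earlier ones
    return subgoals

# Build the tower bottom-up: first settle C, then B, then A.
for i, sub in enumerate(partition_goal(GOAL, ["C", "B", "A"]), 1):
    print(f"subgoal {i}: {sorted(sub)}")
```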
    Haugeland, John (1981). Semantic engines: An introduction to mind design. In J. Haugeland (ed.), Mind Design. MIT Press.   (Cited by 92 | Google)
    Marsh, Leslie (2005). Review Essay: Andy Clark's Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. Cognitive Systems Research 6:405-409.   (Google)
    Abstract: The notion of the cyborg has exercised the popular imagination for almost two hundred years. In very general terms the idea that a living entity can be a hybrid of both organic matter and mechanical parts, and for all intents and purposes be seamlessly functional and self-regulating, was prefigured in literary works such as Shelley's Frankenstein (1816/18) and Samuel Butler's Erewhon (1872). This notion of hybridism has been a staple theme of 20th century science fiction writing, television programmes and the cinema. For the most part, these works trade on a deep sense of unease we have about our personal identity – how could some non-organic matter to which I have so little conscious access count as a bona fide part of me? Cognitive scientist and philosopher Andy Clark picks up this general theme and presents an empirical and philosophical case for the following inextricably linked theses.
    Prem, Erich (2000). Changes of representational AI concepts induced by embodied autonomy. Communication and Cognition-Artificial Intelligence 17 (3-4):189-208.   (Cited by 4 | Google)
    Robinson, William S. (1995). Direct representation. Philosophical Studies 80 (3):305-22.   (Cited by 3 | Annotation | Google | More links)
    Shani, Itay (2005). Computation and intentionality: A recipe for epistemic impasse. Minds and Machines 15 (2):207-228.   (Cited by 1 | Google | More links)
    Abstract: Searle's celebrated Chinese room thought experiment was devised as an attempted refutation of the view that appropriately programmed digital computers literally are the possessors of genuine mental states. A standard reply to Searle, known as the "robot reply" (which, I argue, reflects the dominant approach to the problem of content in contemporary philosophy of mind), consists of the claim that the problem he raises can be solved by supplementing the computational device with some "appropriate" environmental hookups. I argue that not only does Searle himself cast doubt on the adequacy of this idea by applying to it a slightly revised version of his original argument, but that the weakness of this encoding-based approach to the problem of intentionality can also be exposed from a somewhat different angle. Capitalizing on the work of several authors and, in particular, on that of psychologist Mark Bickhard, I argue that the existence of symbol-world correspondence is not a property that the cognitive system itself can appreciate, from its own perspective, by interacting with the symbol and therefore, not a property that can constitute intrinsic content. The foundational crisis to which Searle alluded is, I conclude, very much alive
    Stanley, Jason (2005). Review of Robyn Carston, Thoughts and Utterances. Mind and Language 20 (3).   (Google)
    Abstract: Relevance Theory is the influential theory of linguistic interpretation first championed by Dan Sperber and Deirdre Wilson. Relevance theorists have made important contributions to our understanding of a wide range of constructions, especially constructions that tend to receive less attention in semantics and philosophy of language. But advocates of Relevance Theory have also had a tendency to form a rather closed community, with an unwillingness to translate their own special vocabulary and distinctions into a more neutral vernacular. Since Robyn Carston has long been the advocate of Relevance Theory most able to communicate with a broader philosophical and linguistic audience, it is with particular interest that the emergence of her long-awaited volume, Thoughts and Utterances, has been greeted. The volume exhibits many of the strengths, but also some of the weaknesses, of this well-known program.
    Thornton, Chris (1997). Brave mobots use representation: Emergence of representation in fight-or-flight learning. Minds and Machines 7 (4):475-494.   (Cited by 10 | Google | More links)
    Abstract:   The paper uses ideas from Machine Learning, Artificial Intelligence and Genetic Algorithms to provide a model of the development of a fight-or-flight response in a simulated agent. The modelled development process involves (simulated) processes of evolution, learning and representation development. The main value of the model is that it provides an illustration of how simple learning processes may lead to the formation of structures which can be given a representational interpretation. It also shows how these may form the infrastructure for closely-coupled agent/environment interaction.
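    As a rough illustration of the kind of result the abstract above reports — a simple error-driven learning process giving rise to an internal structure that invites a representational reading — the sketch below has a simulated agent adapt a single threshold that comes to track the environment's fight/flight boundary. The environment, reward rule, and update rule are illustrative assumptions and are not Thornton's model.

        import random

        random.seed(0)

        def reward(threat: float, action: str) -> float:
            """Assumed environment: fleeing pays off against strong threats, fighting against weak ones."""
            if action == "flee":
                return 1.0 if threat > 0.6 else -1.0
            return 1.0 if threat <= 0.6 else -1.0

        threshold = 0.5   # the agent's single adaptable internal parameter
        lr = 0.05         # learning rate

        for _ in range(5000):
            threat = random.random()
            action = "flee" if threat > threshold else "fight"
            if reward(threat, action) < 0:            # adjust only after a mistake
                threshold += lr * (threat - threshold)

        print(round(threshold, 2))   # settles near the environment's 0.6 boundary

    The learned threshold is the (very small) "structure" here: it was shaped entirely by local feedback, yet an observer can read it as standing for the point at which threats become worth fleeing.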

    6.3 Philosophy of Connectionism

    6.3a Connectionism and Compositionality

    Aizawa, Kenneth (1997). Explaining systematicity. Mind and Language 12 (2):115-36.   (Cited by 48 | Google | More links)
    Aizawa, Kenneth (1997). Exhibiting versus explaining systematicity: A reply to Hadley and Hayward. Minds and Machines 7 (1):39-55.   (Google | More links)
    Aizawa, Kenneth (1997). The role of the systematicity argument in classicism and connectionism. In S. O'Nuallain (ed.), Two Sciences of Mind. John Benjamins.   (Cited by 4 | Google)
    Aizawa, Kenneth (2003). The Systematicity Arguments. Kluwer.   (Cited by 4 | Google)
    Abstract: The Systematicity Arguments is the only book-length treatment of the systematicity and productivity arguments.
    Antony, Michael V. (1991). Fodor and Pylyshyn on connectionism. Minds and Machines 1 (3):321-41.   (Cited by 3 | Annotation | Google | More links)
    Abstract:   Fodor and Pylyshyn (1988) have argued that the cognitive architecture is not Connectionist. Their argument takes the following form: (1) the cognitive architecture is Classical; (2) Classicalism and Connectionism are incompatible; (3) therefore the cognitive architecture is not Connectionist. In this essay I argue that Fodor and Pylyshyn's defenses of (1) and (2) are inadequate. Their argument for (1), based on their claim that Classicalism best explains the systematicity of cognitive capacities, is an invalid instance of inference to the best explanation. And their argument for (2) turns out to be question-begging. The upshot is that, while Fodor and Pylyshyn have presented Connectionists with the important empirical challenge of explaining systematicity, they have failed to provide sufficient reason for inferring that the cognitive architecture is Classical and not Connectionist.
    Aydede, Murat (1997). Language of thought: The connectionist contribution. Minds and Machines 7 (1):57-101.   (Cited by 15 | Google | More links)
    Abstract:   Fodor and Pylyshyn's critique of connectionism has posed a challenge to connectionists: Adequately explain such nomological regularities as systematicity and productivity without postulating a "language of thought" (LOT). Some connectionists like Smolensky took the challenge very seriously, and attempted to meet it by developing models that were supposed to be non-classical. At the core of these attempts lies the claim that connectionist models can provide a representational system with a combinatorial syntax and processes sensitive to syntactic structure. They are not implementation models because, it is claimed, the way they obtain syntax and structure sensitivity is not "concatenative," hence "radically different" from the way classicists handle them. In this paper, I offer an analysis of what it is to physically satisfy/realize a formal system. In this context, I examine the minimal truth-conditions of the LOT Hypothesis (LOTH). From my analysis it will follow that concatenative realization of formal systems is irrelevant to LOTH, since the very notion of LOT is indifferent to such an implementation-level issue as concatenation.