Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.

6.1a. The Turing Test (The Turing Test on PhilPapers)

Akman, Varol & Blackburn, Patrick (2000). Editorial: Alan Turing and artificial intelligence. Journal of Logic, Language and Information 9 (4):391-395.   (Cited by 2 | Google | More links)
Alper, G. (1990). A psychoanalyst takes the Turing test. Psychoanalytic Review 77:59-68.   (Cited by 6 | Google)
Barresi, John (1987). Prospects for the cyberiad: Certain limits on human self-knowledge in the cybernetic age. Journal for the Theory of Social Behavior 17 (March):19-46.   (Cited by 6 | Google | More links)
Beenfeldt, Christian (2006). The Turing test: An examination of its nature and its mentalistic ontology. Danish Yearbook of Philosophy 40:109-144.   (Google)
Ben-Yami, Hanoch (2005). Behaviorism and psychologism: Why Block's argument against behaviorism is unsound. Philosophical Psychology 18 (2):179-186.   (Cited by 1 | Google | More links)
Abstract: Ned Block ((1981). Psychologism and behaviorism. Philosophical Review, 90, 5-43.) argued that a behaviorist conception of intelligence is mistaken, and that the nature of an agent's internal processes is relevant for determining whether the agent has intelligence. He did that by describing a machine which lacks intelligence, yet can answer questions put to it as an intelligent person would. The nature of his machine's internal processes, he concluded, is relevant for determining that it lacks intelligence. I argue against Block that it is not the nature of its processes but of its linguistic behavior which is responsible for his machine's lack of intelligence. As I show, not only has Block failed to establish that the nature of internal processes is conceptually relevant for psychology, in fact his machine example actually supports some version of behaviorism. As Wittgenstein has maintained, as far as psychology is concerned, there may be chaos inside
Block, Ned (1981). Psychologism and behaviorism. Philosophical Review 90 (1):5-43.   (Cited by 88 | Annotation | Google | More links)
Abstract: Let psychologism be the doctrine that whether behavior is intelligent behavior depends on the character of the internal information processing that produces it. More specifically, I mean psychologism to involve the doctrine that two systems could have actual and potential behavior _typical_ of familiar intelligent beings, that the two systems could be exactly alike in their actual and potential behavior, and in their behavioral dispositions and capacities and counterfactual behavioral properties (i.e., what behaviors, behavioral dispositions, and behavioral capacities they would have exhibited had their stimuli differed)--the two systems could be alike in all these ways, yet there could be a difference in the information processing that mediates their stimuli and responses that determines that one is not at all intelligent while the other is fully intelligent
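Block's hypothetical machine (summarized in Ben-Yami's abstract above) is in essence an exhaustive lookup table of sensible conversational continuations. A toy sketch of the idea follows — the table entries and default reply are invented for illustration; Block's machine would enumerate every sensible conversation up to a fixed length:

```python
# Toy illustration of Block's conversation-table machine: all "intelligent"
# behavior is produced by lookup, with no internal processing worth the name.
# The entries below are invented; Block's machine would store every sensible
# continuation of every possible conversation up to some bounded length.

CANNED_REPLIES = {
    (): "Hello.",
    ("Hello.",): "How are you today?",
    ("Hello.", "Fine, thanks. Can machines think?"):
        "That depends on what you mean by 'think'.",
}

def blockhead_reply(history):
    """Return the canned continuation for the conversation so far."""
    return CANNED_REPLIES.get(tuple(history), "I'd rather change the subject.")

if __name__ == "__main__":
    print(blockhead_reply([]))          # "Hello."
    print(blockhead_reply(["Hello."]))  # "How are you today?"
```

On psychologism's diagnosis, such a system could match an intelligent agent's behavior while the character of its internal processing disqualifies it from intelligence.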
Bringsjord, Selmer; Caporale, Clarke & Noel, Ron (2000). Animals, zombanimals, and the total Turing test: The essence of artificial intelligence. Journal of Logic, Language and Information 9 (4):397-418.   (Cited by 32 | Google | More links)
Abstract: Alan Turing devised his famous test (TT) through a slight modification of the parlor game in which a judge tries to ascertain the gender of two people who are only linguistically accessible. Stevan Harnad has introduced the Total TT, in which the judge can look at the contestants in an attempt to determine which is a robot and which a person. But what if we confront the judge with an animal, and a robot striving to pass for one, and then challenge him to peg which is which? Now we can index TTT to a particular animal and its synthetic correlate. We might therefore have TTT_rat, TTT_cat, TTT_dog, and so on. These tests, as we explain herein, are a better barometer of artificial intelligence (AI) than Turing's original TT, because AI seems to have ammunition sufficient only to reach the level of artificial animal, not artificial person
Bringsjord, Selmer; Bello, P. & Ferrucci, David A. (2001). Creativity, the Turing test, and the (better) Lovelace test. Minds and Machines 11 (1):3-27.   (Cited by 11 | Google | More links)
Abstract: The Turing Test (TT) is claimed by many to be a way to test for the presence, in computers, of such "deep" phenomena as thought and consciousness. Unfortunately, attempts to build computational systems able to pass TT (or at least restricted versions of this test) have devolved into shallow symbol manipulation designed to, by hook or by crook, trick. The human creators of such systems know all too well that they have merely tried to fool those people who interact with their systems into believing that these systems really have minds. And the problem is fundamental: the structure of the TT is such as to cultivate tricksters. A better test is one that insists on a certain restrictive epistemic relation between an artificial agent (or system) A, its output o, and the human architect H of A – a relation which, roughly speaking, obtains when H cannot account for how A produced o. We call this test the "Lovelace Test" in honor of Lady Lovelace, who believed that only when computers originate things should they be believed to have minds
Clark, Thomas W. (1992). The Turing test as a novel form of hermeneutics. International Studies in Philosophy 24 (1):17-31.   (Cited by 6 | Google)
Clifton, Andrew (ms). Blind man's bluff and the Turing test.   (Google)
Abstract: It seems plausible that under the conditions of the Turing test, congenitally blind people could nevertheless, with sufficient preparation, successfully represent themselves to remotely located interrogators as sighted. Having never experienced normal visual sensations, the successful blind player can prevail in this test only by playing a ‘lying game’—imitating the phenomenological claims of sighted people, in the absence of the qualitative visual experiences to which such statements purportedly refer. This suggests that a computer or robot might pass the Turing test in the same way, in the absence not only of visual experience, but qualitative consciousness in general. Hence, the standard Turing test does not provide a valid criterion for the presence of consciousness. A ‘sensorimetric’ version of the Turing test fares no better, for the apparent correlations we observe between cognitive functions and qualitative conscious experiences seem to be contingent, not necessary. We must therefore define consciousness not in terms of its causes and effects, but rather, in terms of the distinctive properties of its content, such as its possession of qualitative character and apparent intrinsic value—the property which confers upon consciousness its moral significance. As a means of determining whether or not a machine is conscious, in this sense, an alternative to the standard Turing test is proposed
Copeland, B. Jack (2000). The Turing test. Minds and Machines 10 (4):519-539.   (Cited by 7 | Google | More links)
Abstract: Turing's test has been much misunderstood. Recently published material by Turing casts fresh light on his thinking and dispels a number of philosophical myths concerning the Turing test. Properly understood, the Turing test withstands objections that are popularly believed to be fatal
Cowen, Tyler & Dawson, Michelle, What does the Turing test really mean? And how many human beings (including Turing) could pass?   (Google)
Abstract: The so-called Turing test, as it is usually interpreted, sets a benchmark standard for determining when we might call a machine intelligent. We can call a machine intelligent if the following is satisfied: if a group of wise observers were conversing with a machine through an exchange of typed messages, those observers could not tell whether they were talking to a human being or to a machine. To pass the test, the machine has to be intelligent but it also should be responsive in a manner which cannot be distinguished from a human being. This standard interpretation presents the Turing test as a criterion for demarcating intelligent from non-intelligent entities. For a long time proponents of artificial intelligence have taken the Turing test as a goalpost for measuring progress
Crawford, C. (1994). Notes on the Turing test. Communications of the Association for Computing Machinery 37 (June):13-15.   (Google)
Crockett, L. (1994). The Turing Test and the Frame Problem: AI's Mistaken Understanding of Intelligence. Ablex.   (Cited by 19 | Google)
Abstract: I have discussed the frame problem and the Turing test at length, but I have not attempted to spell out what I think the implications of the frame problem ...
Cutrona, Jr (ms). Zombies in Searle's Chinese room: Putting the Turing test to bed.   (Google | More links)
Abstract: Searle’s discussions over the years 1980-2004 of the implications of his “Chinese Room” Gedanken experiment are frustrating because they proceed from a correct assertion: (1) “Instantiating a computer program is never by itself a sufficient condition of intentionality;” and an incorrect assertion: (2) “The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program.” In this article, I describe how to construct a Gedanken zombie Chinese Room program that will pass the Turing test and at the same time unambiguously demonstrates the correctness of (1). I then describe how to construct a Gedanken Chinese brain program that will pass the Turing test, has a mind, and understands Chinese, thus demonstrating that (2) is incorrect. Searle’s instantiation of this program can and does produce intentionality. Searle’s longstanding ignorance of Chinese is simply irrelevant and always has been. I propose a truce and a plan for further exploration
Davidson, Donald (1990). Turing's test. In K. Said (ed.), Modelling the Mind. Oxford University Press.   (Google)
Dennett, Daniel C. (1984). Can machines think? In M. G. Shafto (ed.), How We Know. Harper & Row.   (Cited by 24 | Annotation | Google)
Drozdek, Adam (2001). Descartes' Turing test. Epistemologia 24 (1):5-29.   (Google)
Edmonds, Bruce (2000). The constructability of artificial intelligence (as defined by the Turing test). Journal of Logic Language and Information 9 (4):419-424.   (Google | More links)
Abstract: The Turing Test (TT), as originally specified, centres on the ability to perform a social role. The TT can be seen as a test of an ability to enter into normal human social dynamics. In this light it seems unlikely that such an entity can be wholly designed in an off-line mode; rather a considerable period of training in situ would be required. The argument that since we can pass the TT, and our cognitive processes might be implemented as a Turing Machine (TM), that consequently a TM that could pass the TT could be built, is attacked on the grounds that not all TMs are constructible in a planned way. This observation points towards the importance of developmental processes that use random elements (e.g., evolution), but in these cases it becomes problematic to call the result artificial. This has implications for the means by which intelligent agents could be developed
Edmonds, B. (ms). The constructability of artificial intelligence (as defined by the Turing test).   (Google | More links)
Abstract: The Turing Test, as originally specified, centres on the ability to perform a social role. The TT can be seen as a test of an ability to enter into normal human social dynamics. In this light it seems unlikely that such an entity can be wholly designed in an `off-line' mode, but rather a considerable period of training in situ would be required. The argument that since we can pass the TT and our cognitive processes might be implemented as a TM that, in theory, a TM that could pass the TT could be built is attacked on the grounds that not all TMs are constructible in a planned way. This observation points towards the importance of developmental processes that include random elements (e.g. evolution), but in these cases it becomes problematic to call the result artificial
Erion, Gerald J. (2001). The Cartesian test for automatism. Minds and Machines 11 (1):29-39.   (Cited by 5 | Google | More links)
Abstract:   In Part V of his Discourse on the Method, Descartes introduces a test for distinguishing people from machines that is similar to the one proposed much later by Alan Turing. The Cartesian test combines two distinct elements that Keith Gunderson has labeled the language test and the action test. Though traditional interpretation holds that the action test attempts to determine whether an agent is acting upon principles, I argue that the action test is best understood as a test of common sense. I also maintain that this interpretation yields a stronger test than Turing's, and that contemporary artificial intelligence should consider using it as a guide for future research
Floridi, Luciano (2005). Consciousness, agents and the knowledge game. Minds and Machines 15 (3):415-444.   (Cited by 2 | Google | More links)
Abstract: This paper has three goals. The first is to introduce the “knowledge game”, a new, simple and yet powerful tool for analysing some intriguing philosophical questions. The second is to apply the knowledge game as an informative test to discriminate between conscious (human) and conscious-less agents (zombies and robots), depending on which version of the game they can win. And the third is to use a version of the knowledge game to provide an answer to Dretske’s question “how do you know you are not a zombie?”
Floridi, Luciano; Taddeo, Mariarosaria & Turilli, Matteo (2009). Turing's imitation game: Still an impossible challenge for all machines and some judges. Minds and Machines 19 (1):145-150.   (Google)
Abstract: An evaluation of the 2008 Loebner contest.
French, Robert M. (2000). Peeking behind the screen: The unsuspected power of the standard Turing test. Journal of Experimental and Theoretical Artificial Intelligence 12 (3):331-340.   (Cited by 10 | Google | More links)
Abstract: No computer that had not experienced the world as we humans had could pass a rigorously administered standard Turing Test. We show that the use of “subcognitive” questions allows the standard Turing Test to indirectly probe the human subcognitive associative concept network built up over a lifetime of experience with the world. Not only can this probing reveal differences in cognitive abilities, but crucially, even differences in _physical aspects_ of the candidates can be detected. Consequently, it is unnecessary to propose even harder versions of the Test in which all physical and behavioral aspects of the two candidates had to be indistinguishable before allowing the machine to pass the Test. Any machine that passed the “simpler” symbols-in/symbols-out test as originally proposed by Turing would be intelligent. The problem is that, even in its original form, the Turing Test is already too hard and too anthropocentric for any machine that was not a physical, social, and behavioral carbon copy of ourselves to actually pass it. Consequently, the Turing Test, even in its standard version, is not a reasonable test for general machine intelligence. There is no need for an even stronger version of the Test
French, Robert M. (1995). Refocusing the debate on the Turing test: A response. Behavior and Philosophy 23 (1):59-60.   (Cited by 3 | Annotation | Google)
French, Robert M. (1990). Subcognition and the limits of the Turing test. Mind 99 (393):53-66.   (Cited by 66 | Annotation | Google | More links)
French, Robert (1996). The inverted Turing test: How a mindless program could pass it. Psycoloquy 7 (39).   (Cited by 5 | Google | More links)
Abstract: This commentary attempts to show that the inverted Turing Test (Watt 1996) could be simulated by a standard Turing test and, most importantly, claims that a very simple program with no intelligence whatsoever could be written that would pass the inverted Turing test. For this reason, the inverted Turing test in its present form must be rejected
French, Robert (2000). The Turing test: The first fifty years. Trends in Cognitive Sciences 4 (3):115-121.   (Cited by 15 | Google | More links)
Abstract: The Turing Test, originally proposed as a simple operational definition of intelligence, has now been with us for exactly half a century. It is safe to say that no other single article in computer science, and few other articles in science in general, have generated so much discussion. The present article chronicles the comments and controversy surrounding Turing's classic article from its publication to the present. The changing perception of the Turing Test over the last fifty years has paralleled the changing attitudes in the scientific community towards artificial intelligence: from the unbridled optimism of the 1960s to the current realization of the immense difficulties that still lie ahead. I conclude with the prediction that the Turing Test will remain important, not only as a landmark in the history of the development of intelligent machines, but also with real relevance to future generations of people living in a world in which the cognitive capacities of machines will be vastly greater than they are now
Gunderson, Keith (1964). The imitation game. Mind 73 (April):234-45.   (Cited by 13 | Annotation | Google | More links)
Harnad, Stevan & Dror, Itiel (2006). Distributed cognition: Cognizing, autonomy and the Turing test. Pragmatics and Cognition 14 (2):14.   (Cited by 2 | Google | More links)
Abstract: Some of the papers in this special issue distribute cognition between what is going on inside individual cognizers' heads and their outside worlds; others distribute cognition among different individual cognizers. Turing's criterion for cognition was individual, autonomous input/output capacity. It is not clear that distributed cognition could pass the Turing Test
Harnad, Stevan (1995). Does mind piggyback on robotic and symbolic capacity? In H. Morowitz & J. Singer (eds.), The Mind, the Brain, and Complex Adaptive Systems. Addison Wesley.   (Google)
Abstract: Cognitive science is a form of "reverse engineering" (as Dennett has dubbed it). We are trying to explain the mind by building (or explaining the functional principles of) systems that have minds. A "Turing" hierarchy of empirical constraints can be applied to this task, from t1, toy models that capture only an arbitrary fragment of our performance capacity, to T2, the standard "pen-pal" Turing Test (total symbolic capacity), to T3, the Total Turing Test (total symbolic plus robotic capacity), to T4 (T3 plus internal [neuromolecular] indistinguishability). All scientific theories are underdetermined by data. What is the right level of empirical constraint for cognitive theory? I will argue that T2 is underconstrained (because of the Symbol Grounding Problem and Searle's Chinese Room Argument) and that T4 is overconstrained (because we don't know what neural data, if any, are relevant). T3 is the level at which we solve the "other minds" problem in everyday life, the one at which evolution operates (the Blind Watchmaker is no mind-reader either) and the one at which symbol systems can be grounded in the robotic capacity to name and manipulate the objects their symbols are about. I will illustrate this with a toy model for an important component of T3 -- categorization -- using neural nets that learn category invariance by "warping" similarity space the way it is warped in human categorical perception: within-category similarities are amplified and between-category similarities are attenuated. This analog "shape" constraint is the grounding inherited by the arbitrarily shaped symbol that names the category and by all the symbol combinations it enters into. No matter how tightly one constrains any such model, however, it will always be more underdetermined than normal scientific and engineering theory. This will remain the ineliminable legacy of the mind/body problem
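The "warping" Harnad describes can be pictured with a toy computation: a transformation that stretches the category-relevant dimension and compresses the irrelevant one makes within-category distances shrink while between-category distances grow. A minimal sketch — the points, categories, and warp factor are invented for illustration, not taken from Harnad's model:

```python
# Toy illustration of categorical-perception "warping": after learning, distances
# within a category shrink and distances between categories grow. The data and
# the warp factor below are invented; Harnad's models learn this with neural nets.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Two categories separated along the first dimension; within-category
# variation lies mostly along the second, category-irrelevant dimension.
cat_a = [(0.95, 0.2), (1.05, 0.8)]
cat_b = [(3.00, 0.3), (3.10, 0.9)]

def warp(p, factor=3.0):
    """Stretch the category-relevant dimension, compress the irrelevant one."""
    return (p[0] * factor, p[1] / factor)

print("within: ", dist(cat_a[0], cat_a[1]), "->",
      dist(warp(cat_a[0]), warp(cat_a[1])))   # shrinks
print("between:", dist(cat_a[0], cat_b[0]), "->",
      dist(warp(cat_a[0]), warp(cat_b[0])))   # grows
```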
Harnad, Stevan (1994). Levels of functional equivalence in reverse bioengineering: The Darwinian Turing test for artificial life. Artificial Life 1 (3):293-301.   (Cited by 35 | Google | More links)
Abstract: Both Artificial Life and Artificial Mind are branches of what Dennett has called "reverse engineering": Ordinary engineering attempts to build systems to meet certain functional specifications, reverse bioengineering attempts to understand how systems that have already been built by the Blind Watchmaker work. Computational modelling (virtual life) can capture the formal principles of life, perhaps predict and explain it completely, but it can no more be alive than a virtual forest fire can be hot. In itself, a computational model is just an ungrounded symbol system; no matter how closely it matches the properties of what is being modelled, it matches them only formally, with the mediation of an interpretation. Synthetic life is not open to this objection, but it is still an open question how close a functional equivalence is needed in order to capture life. Close enough to fool the Blind Watchmaker is probably close enough, but would that require molecular indistinguishability, and if so, do we really need to go that far?
Harnad, Stevan (1991). Other bodies, other minds: A machine incarnation of an old philosophical problem. Minds and Machines 1 (1):43-54.   (Cited by 99 | Annotation | Google | More links)
Abstract: Explaining the mind by building machines with minds runs into the other-minds problem: How can we tell whether any body other than our own has a mind when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do such that we can't tell them apart, we have no basis for doubting it has a mind. But what is "everything" a body with a mind can do? Turing's original "pen-pal" version (the TT) only tested linguistic capacity, but Searle has shown that a mindless symbol-manipulator could pass the TT undetected. The Total Turing Test (TTT) calls for all of our linguistic and robotic capacities; immune to Searle's argument, it suggests how to ground a symbol manipulating system in the capacity to pick out the objects its symbols refer to. No Turing Test, however, can guarantee that a body has a mind. Worse, nothing in the explanation of its successful performance requires a model to have a mind at all. Minds are hence very different from the unobservables of physics (e.g., superstrings); and Turing Testing, though essential for machine-modeling the mind, can really only yield an explanation of the body
Harnad, Stevan (2006). The annotation game: On Turing (1950) on computing, machinery, and intelligence. In Robert Epstein & Grace Peters (eds.), [Book Chapter] (in press). Kluwer.   (Cited by 5 | Google | More links)
Abstract: This quote/commented critique of Turing's classical paper suggests that Turing meant -- or should have meant -- the robotic version of the Turing Test (and not just the email version). Moreover, any dynamic system (that we design and understand) can be a candidate, not just a computational one. Turing also dismisses the other-minds problem and the mind/body problem too quickly. They are at the heart of both the problem he is addressing and the solution he is proposing
Harnad, Stevan (1999). Turing on reverse-engineering the mind. Journal of Logic, Language, and Information.   (Cited by 4 | Google)
Harnad, Stevan (1992). The Turing test is not a trick: Turing indistinguishability is a scientific criterion. SIGART Bulletin 3 (4):9-10.   (Cited by 44 | Google | More links)
Abstract: It is important to understand that the Turing Test (TT) is not, nor was it intended to be, a trick; how well one can fool someone is not a measure of scientific progress. The TT is an empirical criterion: It sets AI's empirical goal to be to generate human-scale performance capacity. This goal will be met when the candidate's performance is totally indistinguishable from a human's. Until then, the TT simply represents what it is that AI must endeavor eventually to accomplish scientifically
Hauser, Larry (2001). Look who's moving the goal posts now. Minds and Machines 11 (1):41-51.   (Cited by 2 | Google | More links)
Abstract:   The abject failure of Turing's first prediction (of computer success in playing the Imitation Game) confirms the aptness of the Imitation Game test as a test of human level intelligence. It especially belies fears that the test is too easy. At the same time, this failure disconfirms expectations that human level artificial intelligence will be forthcoming any time soon. On the other hand, the success of Turing's second prediction (that acknowledgment of computer thought processes would become commonplace) in practice amply confirms the thought that computers think in some manner and are possessed of some level of intelligence already. This lends ever-growing support to the hypothesis that computers will think at a human level eventually, despite the abject failure of Turing's first prediction
Hauser, Larry (1993). Reaping the whirlwind: Reply to Harnad's Other Bodies, Other Minds. Minds and Machines 3 (2):219-37.   (Cited by 18 | Google | More links)
Abstract: Harnad's proposed robotic upgrade of Turing's Test (TT), from a test of linguistic capacity alone to a Total Turing Test (TTT) of linguistic and sensorimotor capacity, conflicts with his claim that no behavioral test provides even probable warrant for attributions of thought because there is no evidence of consciousness besides private experience. Intuitive, scientific, and philosophical considerations Harnad offers in favor of his proposed upgrade are unconvincing. I agree with Harnad that distinguishing real from as if thought on the basis of (presence or lack of) consciousness (thus rejecting Turing (behavioral) testing as sufficient warrant for mental attribution) has the skeptical consequence Harnad accepts — there is in fact no evidence for me that anyone else but me has a mind. I disagree with his acceptance of it! It would be better to give up the neo-Cartesian faith in private conscious experience underlying Harnad's allegiance to Searle's controversial Chinese Room Experiment than give up all claim to know others think. It would be better to allow that (passing) Turing's Test evidences — even strongly evidences — thought
Hayes, Patrick & Ford, Kenneth M. (1995). Turing test considered harmful. Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence 1:972-77.   (Cited by 26 | Google)
Hernandez-Orallo, Jose (2000). Beyond the Turing test. Journal of Logic, Language and Information 9 (4):447-466.   (Cited by 2 | Google | More links)
Abstract: The main factor of intelligence is defined as the ability to comprehend, formalising this ability with the help of new constructs based on descriptional complexity. The result is a comprehension test, or C-test, which is exclusively defined in computational terms. Due to its absolute and non-anthropomorphic character, it is equally applicable to both humans and non-humans. Moreover, it correlates with classical psychometric tests, thus establishing the first firm connection between information theoretical notions and traditional IQ tests. The Turing Test is compared with the C-test and the combination of the two is questioned. In consequence, the idea of using the Turing Test as a practical test of intelligence should be surpassed, and substituted by computational and factorial tests of different cognitive abilities, a much more useful approach for artificial intelligence progress and for many other intriguing questions that present themselves beyond the Turing Test
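Since the C-test is defined over descriptional complexity, its flavor can be suggested (not reproduced) with a toy: rank test items by compressed length as a crude stand-in for complexity. This is only a loose illustration — zlib compression is not the measure Hernandez-Orallo defines, and the items below are invented:

```python
# Crude illustration of grading test items by descriptional complexity.
# zlib-compressed length is a rough, imperfect proxy for the Kolmogorov-style
# measure the C-test is actually defined over; the sequences are invented.
import zlib

def complexity_proxy(s: str) -> int:
    """Length of the compressed string: lower = more regular = easier item."""
    return len(zlib.compress(s.encode("utf-8"), level=9))

items = [
    "ababababababababab",   # highly regular
    "abcabdabeabfabgabh",   # regular with a drift
    "qzjxkvwpmrtylunsgd",   # little visible structure
]

for item in sorted(items, key=complexity_proxy):
    print(complexity_proxy(item), item)
```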
Hofstadter, Douglas R. (1981). A coffee-house conversation on the Turing test. Scientific American.   (Annotation | Google)
Jacquette, Dale (1993). A Turing test conversation. Philosophy 68 (264):231-33.   (Cited by 4 | Google)
Jacquette, Dale (1993). Who's afraid of the Turing test? Behavior and Philosophy 20 (21):63-74.   (Annotation | Google)
Karelis, Charles (1986). Reflections on the Turing test. Journal for the Theory of Social Behavior 16 (July):161-72.   (Cited by 10 | Google | More links)
Lee, E. T. (1996). On the Turing test for artificial intelligence. Kybernetes 25.   (Cited by 1 | Google)
Leiber, Justin (1995). On Turing's Turing test and why the matter matters. Synthese 104 (1):59-69.   (Cited by 6 | Annotation | Google)
Leiber, Justin (1989). Shanon on the Turing test. Journal for the Theory of Social Behavior 19 (June):257-259.   (Cited by 6 | Google | More links)
Leiber, Justin (2001). Turing and the fragility and insubstantiality of evolutionary explanations: A puzzle about the unity of Alan Turing's work with some larger implications. Philosophical Psychology 14 (1):83-94.   (Google | More links)
Abstract: As is well known, Alan Turing drew a line, embodied in the "Turing test," between intellectual and physical abilities, and hence between cognitive and natural sciences. Less familiarly, he proposed that one way to produce a "passer" would be to educate a "child machine," equating the experimenter's improvements in the initial structure of the child machine with genetic mutations, while supposing that the experimenter might achieve improvements more expeditiously than natural selection. On the other hand, in his foundational "On the chemical basis of morphogenesis," Turing insisted that biological explanation clearly confine itself to purely physical and chemical means, eschewing vitalist and teleological talk entirely and hewing to D'Arcy Thompson's line that "evolutionary 'explanations,'" are historical and narrative in character, employing the same intentional and teleological vocabulary we use in doing human history, and hence, while perhaps on occasion of heuristic value, are not part of biology as a natural science. To apply Turing's program to recent issues, the attempt to give foundations to the social and cognitive sciences in the "real science" of evolutionary biology (as opposed to Turing's biology) is neither to give foundations, nor to achieve the unification of the social/cognitive sciences and the natural sciences
Leiber, Justin (2006). Turing's golden: How well Turing's work stands today. Philosophical Psychology 19 (1):13-46.   (Google | More links)
Abstract: A. M. Turing has bequeathed us a conceptulary including 'Turing, or Turing-Church, thesis', 'Turing machine', 'universal Turing machine', 'Turing test' and 'Turing structures', plus other unnamed achievements. These include a proof that any formal language adequate to express arithmetic contains undecidable formulas, as well as achievements in computer science, artificial intelligence, mathematics, biology, and cognitive science. Here it is argued that these achievements hang together and have prospered well in the 50 years since Turing's death
Lockhart, Robert S. (2000). Modularity, cognitive penetrability and the Turing test. Psycoloquy.   (Cited by 1 | Google | More links)
Abstract: The Turing Test blurs the distinction between a model and (irrelevant) instantiation details. Modeling only functional modules is problematic if these are interconnected and cognitively penetrable
Mays, W. (1952). Can machines think? Philosophy 27 (April):148-62.   (Cited by 7 | Google)
Michie, Donald (1993). Turing's test and conscious thought. Artificial Intelligence 60:1-22.   (Cited by 19 | Google)
Midgley, Mary (1995). Zombies and the Turing test. Journal of Consciousness Studies 2 (4):351-352.   (Google)
Millar, P. (1973). On the point of the imitation game. Mind 82 (October):595-97.   (Cited by 9 | Google | More links)
Mitchell, Robert W. & Anderson, James R. (1998). Primate theory of mind is a Turing test. Behavioral and Brain Sciences 21 (1):127-128.   (Google)
Abstract: Heyes's literature review of deception, imitation, and self-recognition is inadequate, misleading, and erroneous. The anaesthetic artifact hypothesis of self-recognition is unsupported by the data she herself examines. Her proposed experiment is tantalizing, indicating that theory of mind is simply a Turing test
Moor, James H. (1976). An analysis of the Turing test. Philosophical Studies 30 (4):249-257.   (Annotation | Google)
Moor, James H. (1978). Explaining computer behavior. Philosophical Studies 34 (October):325-7.   (Cited by 9 | Annotation | Google | More links)
Moor, James H. (2001). The status and future of the Turing test. Minds and Machines 11 (1):77-93.   (Cited by 9 | Google | More links)
Abstract:   The standard interpretation of the imitation game is defended over the rival gender interpretation though it is noted that Turing himself proposed several variations of his imitation game. The Turing test is then justified as an inductive test not as an operational definition as commonly suggested. Turing's famous prediction about his test being passed at the 70% level is disconfirmed by the results of the Loebner 2000 contest and the absence of any serious Turing test competitors from AI on the horizon. But, reports of the death of the Turing test and AI are premature. AI continues to flourish and the test continues to play an important philosophical role in AI. Intelligence attribution, methodological, and visionary arguments are given in defense of a continuing role for the Turing test. With regard to Turing's predictions one is disconfirmed, one is confirmed, but another is still outstanding
Nichols, Shaun & Stich, Stephen P. (1994). Folk psychology. Encyclopedia of Cognitive Science.   (Cited by 2 | Google | More links)
Abstract: For the last 25 years discussions and debates about commonsense psychology (or “folk psychology,” as it is often called) have been center stage in the philosophy of mind. There have been heated disagreements both about what folk psychology is and about how it is related to the scientific understanding of the mind/brain that is emerging in psychology and the neurosciences. In this chapter we will begin by explaining why folk psychology plays such an important role in the philosophy of mind. Doing that will require a quick look at a bit of the history of philosophical discussions about the mind. We’ll then turn our attention to the lively contemporary discussions aimed at clarifying the philosophical role that folk psychology is expected to play and at using findings in the cognitive sciences to get a clearer understanding of the exact nature of folk psychology
Oppy, Graham & Dowe, D. (online). The Turing test. Stanford Encyclopedia of Philosophy.   (Cited by 3 | Google)
Piccinini, Gualtiero (2000). Turing's rules for the imitation game. Minds and Machines 10 (4):573-582.   (Cited by 10 | Google | More links)
Abstract: In the 1950s, Alan Turing proposed his influential test for machine intelligence, which involved a teletyped dialogue between a human player, a machine, and an interrogator. Two readings of Turing's rules for the test have been given. According to the standard reading of Turing's words, the goal of the interrogator was to discover which was the human being and which was the machine, while the goal of the machine was to be indistinguishable from a human being. According to the literal reading, the goal of the machine was to simulate a man imitating a woman, while the interrogator – unaware of the real purpose of the test – was attempting to determine which of the two contestants was the woman and which was the man. The present work offers a study of Turing's rules for the test in the context of his advocated purpose and his other texts. The conclusion is that there are several independent and mutually reinforcing lines of evidence that support the standard reading, while fitting the literal reading in Turing's work faces severe interpretative difficulties. So, the controversy over Turing's rules should be settled in favor of the standard reading
Purtill, R. (1971). Beating the imitation game. Mind 80 (April):290-94.   (Google | More links)
Rankin, Terry L. (1987). The Turing paradigm: A critical assessment. Dialogue 29 (April):50-55.   (Cited by 3 | Annotation | Google)
Rapaport, William J. (2000). How to pass a Turing test: Syntactic semantics, natural-language understanding, and first-person cognition. Journal of Logic, Language, and Information 9 (4):467-490.   (Cited by 15 | Google | More links)
Abstract: I advocate a theory of syntactic semantics as a way of understanding how computers can think (and how the Chinese-Room-Argument objection to the Turing Test can be overcome): (1) Semantics, considered as the study of relations between symbols and meanings, can be turned into syntax – a study of relations among symbols (including meanings) – and hence syntax (i.e., symbol manipulation) can suffice for the semantical enterprise (contra Searle). (2) Semantics, considered as the process of understanding one domain (by modeling it) in terms of another, can be viewed recursively: The base case of semantic understanding –understanding a domain in terms of itself – is syntactic understanding. (3) An internal (or narrow), first-person point of view makes an external (or wide), third-person point of view otiose for purposes of understanding cognition
Rapaport, William J. (online). Review of The Turing Test: Verbal Behavior As the Hallmark of Intelligence.   (Google | More links)
Abstract: Stuart M. Shieber’s name is well known to computational linguists for his research and to computer scientists more generally for his debate on the Loebner Turing Test competition, which appeared a decade earlier in Communications of the ACM (Shieber 1994a, 1994b; Loebner 1994).1 With this collection, I expect it to become equally well known to philosophers
Ravenscroft, Ian (online). Folk psychology as a theory. Stanford Encyclopedia of Philosophy.   (Cited by 9 | Google | More links)
Abstract: Many philosophers and cognitive scientists claim that our everyday or "folk" understanding of mental states constitutes a theory of mind. That theory is widely called "folk psychology" (sometimes "commonsense" psychology). The terms in which folk psychology is couched are the familiar ones of "belief" and "desire", "hunger", "pain" and so forth. According to many theorists, folk psychology plays a central role in our capacity to predict and explain the behavior of ourselves and others. However, the nature and status of folk psychology remains controversial
Rhodes, Kris (ms). Vindication of the Rights of Machine.   (Google | More links)
Abstract: In this paper, I argue that certain Machines can have rights independently of whether they are sentient, or conscious, or whatever you might call it.
Richardson, Robert C. (1982). Turing tests for intelligence: Ned Block's defense of psychologism. Philosophical Studies 41 (May):421-6.   (Cited by 4 | Annotation | Google | More links)
Rosenberg, Jay F. (1982). Conversation and intelligence. In B. de Gelder (ed.), Knowledge and Representation. Routledge & Kegan Paul.   (Google)
Sampson, Geoffrey (1973). In defence of Turing. Mind 82 (October):592-94.   (Cited by 5 | Google | More links)
Sato, Y. & Ikegami, T. (2004). Undecidability in the imitation game. Minds and Machines 14 (2):133-43.   (Cited by 6 | Google | More links)
Abstract: This paper considers undecidability in the imitation game, the so-called Turing Test. In the Turing Test, a human, a machine, and an interrogator are the players of the game. In our model of the Turing Test, the machine and the interrogator are formalized as Turing machines, allowing us to derive several impossibility results concerning the capabilities of the interrogator. The key issue is that the validity of the Turing test is not attributed to the capability of human or machine, but rather to the capability of the interrogator. In particular, it is shown that no Turing machine can be a perfect interrogator. We also discuss the meta-imitation game and the imitation game with analog interfaces, where both the imitator and the interrogator are mimicked by continuous dynamical systems
Saygin, Ayse P.; Cicekli, Ilyas & Akman, Varol (2000). Turing test: 50 years later. Minds and Machines 10 (4):463-518.   (Cited by 45 | Google | More links)
Abstract: The Turing Test is one of the most disputed topics in artificial intelligence, philosophy of mind, and cognitive science. This paper is a review of the past 50 years of the Turing Test. Philosophical debates, practical developments and repercussions in related disciplines are all covered. We discuss Turing's ideas in detail and present the important comments that have been made on them. Within this context, behaviorism, consciousness, the 'other minds' problem, and similar topics in philosophy of mind are discussed. We also cover the sociological and psychological aspects of the Turing Test. Finally, we look at the current situation and analyze programs that have been developed with the aim of passing the Turing Test. We conclude that the Turing Test has been, and will continue to be, an influential and controversial topic
Schweizer, Paul (1998). The truly total Turing test. Minds and Machines 8 (2):263-272.   (Cited by 9 | Google | More links)
Abstract:   The paper examines the nature of the behavioral evidence underlying attributions of intelligence in the case of human beings, and how this might be extended to other kinds of cognitive system, in the spirit of the original Turing Test (TT). I consider Harnad's Total Turing Test (TTT), which involves successful performance of both linguistic and robotic behavior, and which is often thought to incorporate the very same range of empirical data that is available in the human case. However, I argue that the TTT is still too weak, because it only tests the capabilities of particular tokens within a preexisting context of intelligent behavior. What is needed is a test of the cognitive type, as manifested through a number of exemplary tokens, in order to confirm that the cognitive type is able to produce the context of intelligent behavior presupposed by tests such as the TT and TTT
Sennett, James F. (ms). The ice man cometh: Lt. Commander Data and the Turing test.   (Google)
Shanon, Benny (1989). A simple comment regarding the Turing test. Journal for the Theory of Social Behavior 19 (June):249-56.   (Cited by 8 | Annotation | Google | More links)
Shah, Huma & Warwick, Kevin (forthcoming). From the Buzzing in Turing’s Head to Machine Intelligence Contests. TCIT 2010.   (Google)
Abstract: This paper presents an analysis of three major contests for machine intelligence. We conclude that a new era for Turing’s test requires a fillip in the guise of a committed sponsor, not unlike DARPA, funders of the successful 2007 Urban Challenge.
Shieber, Stuart M. (1994). Lessons from a restricted Turing test. Communications of the Association for Computing Machinery 37:70-82.   (Cited by 55 | Google | More links)
Shieber, Stuart M. (ed.) (2004). The Turing Test: Verbal Behavior As the Hallmark of Intelligence. MIT Press.   (Cited by 12 | Google | More links)
Shieber, Stuart M. (2007). The Turing test as interactive proof. Noûs 41 (4):686–713.   (Google | More links)
Stalker, Douglas F. (1978). Why machines can't think: A reply to James Moor. Philosophical Studies 34 (3):317-20.   (Cited by 12 | Annotation | Google | More links)
Sterrett, Susan G. (2002). Nested algorithms and the original imitation game test: A reply to James Moor. Minds and Machines 12 (1):131-136.   (Cited by 2 | Google | More links)
Stevenson, John G. (1976). On the imitation game. Philosophia 6 (March):131-33.   (Cited by 4 | Google | More links)
Sterrett, Susan G. (2000). Turing's two tests for intelligence. Minds and Machines 10 (4):541-559.   (Cited by 10 | Google | More links)
Abstract: On a literal reading of 'Computing Machinery and Intelligence', Alan Turing presented not one, but two, practical tests to replace the question 'Can machines think?' He presented them as equivalent. I show here that the first test described in that much-discussed paper is in fact not equivalent to the second one, which has since become known as 'the Turing Test'. The two tests can yield different results; it is the first, neglected test that provides the more appropriate indication of intelligence. This is because the features of intelligence upon which it relies are resourcefulness and a critical attitude to one's habitual responses; thus the test's applicability is not restricted to any particular species, nor does it presume any particular capacities. This is more appropriate because the question under consideration is what would count as machine intelligence. The first test realizes a possibility that philosophers have overlooked: a test that uses a human's linguistic performance in setting an empirical test of intelligence, but does not make behavioral similarity to that performance the criterion of intelligence. Consequently, the first test is immune to many of the philosophical criticisms on the basis of which the (so-called) 'Turing Test' has been dismissed
Stoica, Cristi, Turing test, easy to pass; human mind, hard to understand.   (Google)
Abstract: Under general assumptions, the Turing test can be easily passed by an appropriate algorithm. I show that for any test satisfying several general conditions, we can construct an algorithm that can pass that test; hence, any operational definition is easy to fulfill. I suggest a test complementary to Turing's test, which will measure our understanding of the human mind. The Turing test is required to fix the operational specifications of the algorithm under test; under this constraint, the additional test simply consists in measuring the length of the algorithm
Traiger, Saul (2000). Making the right identification in the Turing test. Minds and Machines 10 (4):561-572.   (Cited by 7 | Google | More links)
Abstract:   The test Turing proposed for machine intelligence is usually understood to be a test of whether a computer can fool a human into thinking that the computer is a human. This standard interpretation is rejected in favor of a test based on the Imitation Game introduced by Turing at the beginning of "Computing Machinery and Intelligence."
Turney, Peter (ms). Answering subcognitive Turing test questions: A reply to French.   (Cited by 5 | Google | More links)
Abstract: Robert French has argued that a disembodied computer is incapable of passing a Turing Test that includes subcognitive questions. Subcognitive questions are designed to probe the network of cultural and perceptual associations that humans naturally develop as we live, embodied and embedded in the world. In this paper, I show how it is possible for a disembodied computer to answer subcognitive questions appropriately, contrary to French’s claim. My approach to answering subcognitive questions is to use statistical information extracted from a very large collection of text. In particular, I show how it is possible to answer a sample of subcognitive questions taken from French, by issuing queries to a search engine that indexes about 350 million Web pages. This simple algorithm may shed light on the nature of human (sub-) cognition, but the scope of this paper is limited to demonstrating that French is mistaken: a disembodied computer can answer subcognitive questions
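Turney's construction is concrete enough to sketch: score each candidate answer by how often it co-occurs with the question's context words in a large text collection, then answer with the top scorer. A minimal sketch under stated assumptions — the four-document corpus and the sample question below are invented stand-ins for the ~350 million web pages he actually queried through a search engine:

```python
# Minimal sketch of Turney's idea: answer a "subcognitive" question by comparing
# co-occurrence counts in a large text collection. The documents and the sample
# question are invented for illustration; Turney used search-engine hit counts.

CORPUS = [
    "the jagged broken glass cut deep",
    "jagged rocks and broken edges",
    "a smooth gentle breeze soft and gentle",
    "the smooth soft surface of the lake",
]

def cooccurrence(word_a: str, word_b: str) -> int:
    """Count documents in which both words appear."""
    return sum(1 for doc in CORPUS
               if word_a in doc.split() and word_b in doc.split())

def choose(context_word: str, candidates: list[str]) -> str:
    """Pick the candidate that co-occurs most often with the context word."""
    return max(candidates, key=lambda c: cooccurrence(context_word, c))

# "Which word goes better with 'jagged': 'broken' or 'gentle'?"
print(choose("jagged", ["broken", "gentle"]))  # -> broken
```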
Turing, Alan M. (1950). Computing machinery and intelligence. Mind 59 (October):433-60.   (Cited by 9 | Annotation | Google | More links)
Abstract: I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. The new form of the problem can be described in terms of a game which we call the "imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B. We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"
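The passage above fixes the shape of the game as a three-party protocol: two hidden witnesses answer typed questions, and the interrogator ends with an "X is A and Y is B" style verdict. A schematic sketch of that protocol follows — the player behaviors and the guessing interrogator are invented stubs, not anything from Turing's paper beyond the protocol's shape:

```python
# Schematic of the imitation-game protocol from Turing (1950): interrogator C
# questions hidden players X and Y over a typed channel, then guesses which is
# which. The player behaviors below are stubs invented for illustration.
import random

def player_a(question: str) -> str:
    # A (the man, or later the machine) tries to cause misidentification.
    return "I assure you, I am the woman."

def player_b(question: str) -> str:
    # B (the woman) tries to help the interrogator.
    return "The other player is lying; I am the woman."

def imitation_game(questions):
    # Hide the players behind labels X and Y in a random order.
    players = {"X": player_a, "Y": player_b}
    if random.random() < 0.5:
        players = {"X": player_b, "Y": player_a}
    transcript = [(q, players["X"](q), players["Y"](q)) for q in questions]
    # A real interrogator would reason over the transcript; this stub guesses.
    verdict = random.choice(["X is A and Y is B", "X is B and Y is A"])
    return transcript, verdict

transcript, verdict = imitation_game(
    ["Will X please tell me the length of his or her hair?"])
print(verdict)
```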
Vergauwen, Roger & González, Rodrigo (2005). On the verisimilitude of artificial intelligence. Logique et Analyse 190 (189):323-350.   (Google)
Ward, Andrew (1989). Radical interpretation and the Gunderson game. Dialectica 43 (3):271-280.   (Google)
Watt, S. (1996). Naive psychology and the inverted Turing test. Psycoloquy 7 (14).   (Cited by 19 | Google | More links)
Abstract: This target article argues that the Turing test implicitly rests on a "naive psychology," a naturally evolved psychological faculty which is used to predict and understand the behaviour of others in complex societies. This natural faculty is an important and implicit bias in the observer's tendency to ascribe mentality to the system in the test. The paper analyses the effects of this naive psychology on the Turing test, both from the side of the system and the side of the observer, and then proposes and justifies an inverted version of the test which allows the processes of ascription to be analysed more directly than in the standard version
Waterman, C. (1995). The Turing test and the argument from analogy for other minds. Southwest Philosophy Review 11 (1):15-22.   (Google)
Whitby, Blay (1996). The Turing test: AI's biggest blind alley? In Peter Millican & A. Clark (eds.), Machines and Thought: The Legacy of Alan Turing. Oxford University Press.   (Cited by 13 | Google)
Zdenek, Sean (2001). Passing Loebner's Turing test: A case of conflicting discourse functions. Minds and Machines 11 (1):53-76.   (Cited by 8 | Google | More links)
Abstract:   This paper argues that the Turing test is based on a fixed and de-contextualized view of communicative competence. According to this view, a machine that passes the test will be able to communicate effectively in a variety of other situations. But the de-contextualized view ignores the relationship between language and social context, or, to put it another way, the extent to which speakers respond dynamically to variations in discourse function, formality level, social distance/solidarity among participants, and participants' relative degrees of power and status (Holmes, 1992). In the case of the Loebner Contest, a present day version of the Turing test, the social context of interaction can be interpreted in conflicting ways. For example, Loebner discourse is defined 1) as a friendly, casual conversation between two strangers of equal power, and 2) as a one-way transaction in which judges control the conversational floor in an attempt to expose contestants that are not human. This conflict in discourse function is irrelevant so long as the goal of the contest is to ensure that only thinking, human entities pass the test. But if the function of Loebner discourse is to encourage the production of software that can pass for human on the level of conversational ability, then the contest designers need to resolve this ambiguity in discourse function, and thus also come to terms with the kind of competence they are trying to measure