Compiled by David Chalmers, Philosophy, Australian National University. Technical support by David Bourget, University of Toronto.
Turing, A. 1950. Computing machinery and intelligence. Mind 59:433-60. (Cited by 994 | Google)
Proposes the Imitation game (Turing test) as a test for intelligence: If a machine can't be told apart from a human in a conversation over a teletype, then that's good enough. With responses to various objections.
Alper, G. 1990. A psychoanalyst takes the Turing test. Psychoanalytic Review 77:59-68. (Cited by 2 | Google)
Barresi, J. 1987. Prospects for the Cyberiad: Certain limits on human self-knowledge in the cybernetic age. Journal for the Theory of Social Behavior 17:19-46. (Cited by 3 | Google)
Block, N. 1981. Psychologism and behaviorism. Philosophical Review 90:5-43. (Cited by 46 | Google)
A look-up table could pass the Turing test, and surely isn't intelligent. The TT errs in testing behavior and not mechanisms. A nice, thorough paper.
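For readers who want the bare mechanics of Block's thought experiment, here is a purely illustrative sketch (the table, prompts, and replies are invented for this note, not taken from the paper): an interlocutor whose every response is fetched from a precomputed table keyed on the conversation so far. Block's point concerns the in-principle table covering every possible conversation of bounded length; matching behavior this way involves nothing one would call intelligence.

```python
# Toy rendering of Block's "look-up table" interlocutor. The canned table below is a
# stand-in; the real argument imagines an astronomically large table covering every
# possible conversation of bounded length.
CANNED = {
    (): "Hello! Shall we talk about the weather?",
    ("Hello! Shall we talk about the weather?", "Sure, is it raining?"):
        "Not where I am. What about where you are?",
}

def lookup_table_interlocutor(history):
    """Return the canned reply for this exact conversation history, if the table has one."""
    return CANNED.get(tuple(history), "Hmm, tell me more.")

if __name__ == "__main__":
    history = []
    opening = lookup_table_interlocutor(history)   # opening line from the table
    history.append(opening)
    history.append("Sure, is it raining?")         # the judge's response
    print(lookup_table_interlocutor(history))      # canned follow-up, no understanding involved
```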
Bringsjord, S., Bello, P., & Ferrucci, D. 2001. Creativity, the Turing test, and the (better) Lovelace test. Minds and Machines 11:3-27. (Cited by 6 | Google)
Clark, T. 1992. The Turing test as a novel form of hermeneutics. International Studies in Philosophy 24:17-31. (Cited by 1 | Google)
Copeland, B. J. 2000. The Turing test. Minds and Machines 10:519-539. (Cited by 3 | Google)
Crawford, C. 1994. Notes on the Turing test. Communications of the Association for Computing Machinery 37:13-15. (Google)
Crockett, L. 1994. The Turing Test and the Frame Problem: AI's Mistaken Understanding of Intelligence. Ablex. (Cited by 14 | Google)
Davidson, D. 1990. Turing's test. In (K. Said, ed) Modelling the Mind. Oxford University Press. (Google)
Dennett, D. C. 1984. Can machines think? In (M. Shafto, ed) How We Know. Harper & Row. (Cited by 21 | Google)
Defending the Turing test as a good test for intelligence.
Drozdek, A. 2001. Descartes' Turing test. Epistemologia 24:5-29. (Google)
Erion, G. J. 2001. The Cartesian test for automatism. Minds and Machines 11:29-39. (Cited by 4 | Google)
French, R. M. 1990. Subcognition and the limits of the Turing test. Mind 99:53-66. (Cited by 48 | Google)
The Turing Test is too hard, as it requires not intelligence but human intelligence. Any machine could be unmasked through careful questioning, but this wouldn't mean that the machine was unintelligent.
French, R. M. 1995. Refocusing the debate on the Turing Test: A response. Behavior and Philosophy 23:59-60. (Cited by 4 | Google)
Response to Jacquette 1993.
Gunderson, K. 1964. The imitation game. Mind 73:234-45. (Cited by 7 | Google)
The Turing test is not broad enough: there's much more to thought than the ability to play the imitation game.
Harnad, S. 1991. Other bodies, other minds: A machine incarnation of an old philosophical problem. Minds and Machines 1:43-54. (Cited by 79 | Google)
On the Total Turing Test (full behavioral equivalence) as a test for mind.
Harnad, S. 1994. Levels of functional equivalence in reverse bioengineering: The Darwinian Turing test for artificial life. Artificial Life 1(3). (Cited by 22 | Google)
Harnad, S. 1999. Turing on reverse-engineering the mind. Journal of Logic, Language, and Information. (Cited by 3 | Google)
Hauser, L. 1993. Reaping the whirlwind: Reply to Harnad's "Other bodies, other minds". Minds and Machines 3:219-37. (Cited by 13 | Google)
Hauser, L. 2001. Look who's moving the goal posts now. Minds and Machines 11:41-51. (Google)
Hayes, P. & Ford, K. 1995. Turing test considered harmful. Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence 1:972-77. (Cited by 21 | Google)
Hofstadter, D. R. 1981. A coffee-house conversation on the Turing test. Scientific American. (Google)
A dialogue on the Turing test.
Jacquette, D. 1993. Who's afraid of the Turing test? Behavior and Philosophy 20:63-74. (Google)
Defending the Turing test against French 1990. Turing did not intend the test to provide a *necessary* condition for intelligence.
Jacquette, D. 1993. A Turing test conversation. Philosophy 68:231-33. (Cited by 1 | Google)
Karelis, C. 1986. Reflections on the Turing test. Journal for the Theory of Social Behavior 16:161-72. (Cited by 3 | Google)
Kugel, P. 2002. Computing machines can't be intelligent (...and Turing said so). Minds and Machines 12:563-579. (Google)
Lee, E. T. 1996. On the Turing test for artificial intelligence. Kybernetes 25:61. (Google)
Leiber, J. 1989. Shanon on the Turing test. Journal of Social Behavior. (Cited by 2 | Google)
Leiber, J. 1995. On Turing's Turing Test and why the matter matters. Synthese 104:59-69. (Google)
Turing's test is neutral about the structure of the machine that passes it, but it must be practical and reliable (thus excluding Searle's and Block's counterexamples).
Leiber, J. 2001. Turing and the fragility and insubstantiality of evolutionary explanations: A puzzle about the unity of Alan Turing's work with some larger implications. Philosophical Psychology 14:83-94. (Cited by 1 | Google)
Mays, W. 1952. Can machines think? Philosophy 27:148-62. (Cited by 2 | Google)
Michie, D. 1993. Turing's test and conscious thought. Artificial Intelligence 60:1-22. Reprinted in (P. Millican & A. Clark, eds) Machines and Thought. Oxford University Press. (Google)
Millar, P. 1973. On the point of the Imitation Game. Mind 82:595-97. (Cited by 3 | Google)
Moor, J. H. 1976. An analysis of Turing's test. Philosophical Studies 30:249-257. (Google)
The basis of the Turing test is not an operational definition of thinking, but rather an inference to the best explanation.
Moor, J. H. 1978. Explaining computer behavior. Philosophical Studies 34:325-7. (Cited by 5 | Google)
Reply to Stalker 1978: Mechanistic and mentalistic explanations are no more incompatible than program-based and physical explanations.
Moor, J. H. 2001. The status and future of the Turing test. Minds and Machines 11:77-93. (Cited by 4 | Google)
Oppy, G. & Dowe, D. 2003. The Turing test. Stanford Encyclopedia of Philosophy. (Cited by 2 | Google)
Piccinini, G. 2000. Turing's rules for the imitation game. Minds and Machines 10:573-582. (Google)
Purtill, R. 1971. Beating the imitation game. Mind 80:290-94. (Google)
Rankin, T. L. 1987. The Turing paradigm: A critical assessment. Dialogue 29:50-55. (Cited by 1 | Google)
Some obscure remarks on lying, imitation, and the Turing test.
Richardson, R. C. 1982. Turing tests for intelligence: Ned Block's defense of psychologism. Philosophical Studies 41:421-6. (Cited by 2 | Google)
A weak argument against Block: input/output function doesn't guarantee a capacity to respond sensibly.
Rosenberg, J. 1982. Conversation and intelligence. In (B. de Gelder, ed) Knowledge and Representation. Routledge & Kegan Paul. (Google)
Sampson, G. 1973. In defence of Turing. Mind 82:592-94. (Cited by 2 | Google)
Sato, Y. & Ikegami, T. 2004. Undecidability in the imitation game. Minds and Machines 14:133-43. (Cited by 5 | Google)
Saygin, A. P., Cicekli, I., & Akman, V. 2000. Turing test: 50 years later. Minds and Machines 10:463-518. (Cited by 22 | Google)
Schweizer, P. 1998. The truly total Turing Test. Minds and Machines 8:263-272. (Cited by 5 | Google)
Shieber, S. (ed) 2004. The Turing Test: Verbal Behavior as the Hallmark of Intelligence. MIT Press. (Cited by 2 | Google)
Shanon, B. 1989. A simple comment regarding the Turing test. Journal for the Theory of Social Behavior 19:249-56. (Cited by 3 | Google)
The Turing test presupposes a representational/computational framework for cognition. Not all phenomena can be captured in teletype communication.
Shieber, S. M. 1994. Lessons from a restricted Turing test. Communications of the Association for Computing Machinery 37:70-82. (Cited by 41 | Google)
Stalker, D. F. 1978. Why machines can't think: A reply to James Moor. Philosophical Studies 34:317-20. (Cited by 6 | Google)
Contra Moor 1976: The best explanation of computer behavior is mechanistic, not mentalistic.
Sterrett, S. G. 2000. Turing's two tests for intelligence. Minds and Machines 10:541-559. (Google)
Stevenson, J. G. 1976. On the imitation game. Philosophia 6:131-33. (Cited by 2 | Google)
Traiger, S. 2000. Making the right identification in the Turing test. Minds and Machines 10:561-572. (Cited by 4 | Google)
Waterman, C. 1995. The Turing test and the argument from analogy for other minds. Southwest Philosophy Review 11:15-22. (Google)
Watt, S. 1996. Naive psychology and the inverted Turing test. Psycoloquy 7(14). (Cited by 18 | Google)
Whitby, B. 1996. The Turing test: AI's biggest blind alley? In (P. Millican & A. Clark, eds) Machines and Thought. Oxford University Press. (Cited by 7 | Google)
Zdenek, S. 2001. Passing Loebner's Turing test: A case of conflicting discourse functions. Minds & Machines 11:53-76. (Google)
Benacerraf, P. 1967. God, the Devil, and Godel. Monist 51:9-32. (Cited by 9 | Google)
Discusses and sharpens Lucas's arguments. Argues that the real consequence is that if we are Turing machines, we can't know which.
Bowie, G. 1982. Lucas' number is finally up. Journal of Philosophical Logic 11:279-85. (Cited by 5 | Google)
Lucas's very Godelization procedure makes him inconsistent, unless he has an independent way to see if any TM is consistent, which he doesn't.
Boyer, D. 1983. J. R. Lucas, Kurt Godel, and Fred Astaire. Philosophical Quarterly 33:147-59.
Remarks on the various ways in which Lucas and a machine might be said to "prove" anything, and the ways in which a machine might simulate Lucas. The argument has all sorts of level confusions, and a bit of circularity.
Chari, C. 1963. Further comments on minds, machines and Godel. Philosophy 38:175-8. (Google)
Can't reduce the lawless creative process to computation.
Chalmers, D. J. 1996. Minds, machines, and mathematics. Psyche 2:11-20. (Cited by 14 | Google)
Chihara, C. 1972. On alleged refutations of mechanism using Godel's incompleteness results. Journal of Philosophy 69:507-26. (Google)
An analysis of the Lucas/Benacerraf argument. On various senses in which a machine might come to know its own program.
Coder, D. 1969. Godel's theorem and mechanism. Philosophy 44:234-7. (Google)
Only mathematicians understand Godel, so Lucas's argument isn't general; and Turing machines can go wrong. Weak.
Dennett, D. C. 1978. The abilities of men and machines. In Brainstorms. MIT Press. (Cited by 1 | Google)
There is no unique TM which we are -- there could be many.
Edis, T. 1998. How Godel's theorem supports the possibility of machine intelligence. Minds and Machines 8:251-262. (Google)
Feferman, S. 1996. Penrose's Godelian argument. Psyche 2:21-32. (Google)
Gaifman, H. 2000. What Godel's incompleteness result does and does not show. Journal of Philosophy 97:462-471. (Google)
George, F. 1962. Minds, machines and Godel: Another reply to Mr. Lucas. Philosophy 37:62-63. (Google)
Lucas's argument applies only to deductive machines, not inductive ones.
George, A. & Velleman, D. J. 2000. Leveling the playing field between mind and machine: A reply to McCall. Journal of Philosophy 97:456-452. (Google)
Good, I. J. 1967. Human and machine logic. British Journal for the Philosophy of Science 18:145-6. (Cited by 2 | Google)
Even humans can't Godelize forever. On ordinals and transfinite counting.
Good, I. J. 1969. Godel's theorem is a red herring. British Journal for the Philosophy of Science 19:357-8. (Google)
Rejoinder to Lucas 1967: the role of consistency; non-constructible ordinals.
Grush, R. & Churchland, P. 1995. Gaps in Penrose's toilings. In (T. Metzinger, ed) Conscious Experience. Ferdinand Schoningh. (Google)
Hanson, W. 1971. Mechanism and Godel's theorem. British Journal for the Philosophy of Science 22:9-16. (Google)
An analysis of Benacerraf 1967. Benacerraf's "paradox" is illusory; there are no strong consequences of Godel's theorem for mechanism.
Hofstadter, D. R. 1979. Godel, Escher, Bach: An Eternal Golden Braid. Basic Books. (Cited by 588 | Google)
Contra Lucas: we can't Godelize forever; and we're not formal on top level.
Hutton, A. 1976. This Godel is killing me. Philosophia 3:135-44. (Google)
Gives a statistical argument to the effect that we cannot know that we are consistent; so the Lucas argument cannot go through.
Irvine, A. D. 1983. Lucas, Lewis, and mechanism -- one more time. Analysis 43:94-98. (Google)
Contra Lewis 1979, Lucas can derive the consistency of M even without the premise that he is M. Hmm.
Hadley, R. F. 1987. Godel, Lucas, and mechanical models of mind. Computational Intelligence 3:57-63. (Google)
A nice analysis of Lucas's argument and the circumstances under which a machine might prove another's Godel sentences. There's no reason to believe that machines and humans are different here.
Jacquette, D. 1987. Metamathematical criteria for minds and machines. Erkenntnis 27:1-16. (Cited by 2 | Google)
A machine will fail a Turing test if it's asked about Godel sentences.
King, D. 1996. Is the human mind a Turing machine? Synthese 108:379-89. (Google)
Kirk, R. 1986. Mental machinery and Godel. Synthese. (Cited by 3 | Google)
Lucas's argument fails, as theorems by humans don't correspond to outputs of their formal systems.
Lewis, D. 1969. Lucas against mechanism. Philosophy 44:231-3. (Cited by 2 | Google)
Lucas needs a rule of inference from sentences to their consistency, yielding Lucas arithmetic. No machine can prove all of Lucas arithmetic, but there's no reason to suppose humans can either, as the rule is infinitary.
Lewis, D. 1979. Lucas against mechanism II. Canadian Journal of Philosophy 9:373-6. (Cited by 1 | Google)
Reply to Lucas 1970: the dialectical argument fails, as the human's output depends on the premise that it is the machine (to derive M's consistency). With a similar premise, the machine itself can do equally well.
Lucas, J. R. 1961. Minds, machines and Godel. Philosophy 36:112-127. (Cited by 34 | Google)
Humans can Godelize any given machine, so we're not a machine.
Lucas, J. R. 1967. Human and machine logic: a rejoinder. British Journal for the Philosophy of Science 19:155-6. (Google)
Reply to Good 1967: a human can trump any given machine, so the human is not the machine, whether or not the human is superior across the board.
Lucas, J. R. 1968. Satan stultified: A rejoinder to Paul Benacerraf. Monist 52:145-58. (Cited by 4 | Google)
Benacerraf 1967 is empty and omega-inconsistent. Reply to arguments based on difficulty of seeing consistency (e.g. Putnam). Fallacious but engaging.
Lucas, J. R. 1971. Metamathematics and the philosophy of mind: A rejoinder. Philosophy of Science 38:310-13. (Google)
Lucas, J. R. 1970. Mechanism: A rejoinder. Philosophy 45:149-51. (Google)
Response to Lewis 1969 and Coder 1969. Lewis misses the dialectical nature of the argument.
Lucas, J. R. 1970. The Freedom of the Will. Oxford University Press. (Cited by 2 | Google)
Lucas, J. R. 1976. This Godel is killing me: A rejoinder. Philosophia 6:145-8. (Google)
Contra Hutton, we know -- even if fallibly -- that we are consistent.
Lucas, J. R. 1984. Lucas against mechanism II: A rejoinder. Canadian Journal of Philosophy 14:189-91. (Google)
Reply to Lewis 1979.
Lucas, J. R. 1996. Mind, machines and Godel: A retrospect. In (P. Millican & A. Clark, eds) Machines and Thought. Oxford University Press. (Google)
Addresses all the counterarguments. Fun.
Lyngzeidetson, A. E. & Solomon, M. K. 1994. Abstract complexity theory and the mind-machine problem. British Journal for the Philosophy of Science 45:549-54. (Google)
Lyngzeidetson, A. 1990. Massively parallel distributed processing and a computationalist foundation for cognitive science. British Journal for the Philosophy of Science 41. (Google)
A Connection Machine might escape the Lucas argument. Bizarre.
Martin, J. & Engleman, K. 1990. The mind's I has two eyes. Philosophy 65:510-16. (Google)
Contra Hofstadter: Lucas can believe his Whitely sentence.
Maudlin, T. 1996. Between the motion and the act... Psyche 2:40-51. (Google)
McCall, S. 1999. Can a Turing machine know that the Godel sentence is true? Journal of Philosophy 96:525-32. (Google)
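As background for the Lucas-Penrose entries in this subsection, the incompleteness schema under dispute can be stated compactly. This is the textbook form of Godel's theorems, included only as orientation; it is not a rendering of any particular paper listed here.

```latex
% Background schema (standard first and second incompleteness theorems).
% For a consistent, recursively axiomatizable theory F extending elementary arithmetic,
% the diagonal lemma supplies a sentence G_F such that
\[
  F \vdash \bigl( G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner) \bigr),
  \qquad F \nvdash G_F,
  \qquad F \nvdash \mathrm{Con}(F).
\]
% Lucas claims a human can "see" that G_F is true for any given F; the standard reply
% (pressed in several entries above) is that seeing this requires knowing that F is
% consistent, which is just what Con(F) -- itself unprovable in F -- asserts.
```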
McCall, S. 2001. On "seeing" the truth of the Godel sentence. Facta Philosophica 3:25-30. (Google)
McCullough, D. 1996. Can humans escape Godel? Psyche 2:57-65. (Google)
McDermott, D. 1996. Penrose is wrong. Psyche 2:66-82. (Google)
Nelson, E. 2002. Mathematics and the mind. In (K. Yasue, M. Jibu, & T. Senta, eds) No Matter, Never Mind. John Benjamins. (Cited by 1 | Google)
Penrose, R. 1989. The Emperor's New Mind. Oxford University Press. (Google)
We are non-algorithmic as we can see Godel sentences of any algorithm.
Penrose, R. 1990. Precis of The Emperor's New Mind. Behavioral and Brain Sciences 13:643-705.
Much debate over the "non-algorithmic insight" in seeing Godel sentences.
Penrose, R. 1992. Setting the scene: The claim and the issues. In (D. Broadbent, ed) The Simulation of Human Intelligence. Blackwell. (Google)
An argument from the halting problem to the nonalgorithmicity of mathematical thought. Addresses objections: that the algorithm is unknowable, unsound, everchanging, environmental, or random. New physical laws may be involved.
Penrose, R. 1994. Shadows of the Mind. Oxford University Press. (Cited by 482 | Google)
Penrose, R. 1996. Beyond the doubting of a shadow. Psyche 2:89-129. (Cited by 10 | Google)
A reply to Chalmers, Feferman, Maudlin, McDermott, etc.
Piccinini, G. 2003. Alan Turing and the mathematical objection. Minds and Machines 13:23-48. (Cited by 7 | Google)
Priest, G. 1994. Godel's theorem and the mind... again. In (M. Michael & J. O'Leary-Hawthorne, eds) Philosophy in Mind: The Place of Philosophy in the Study of Mind. Kluwer. (Google)
Putnam, H. 1985. Reflexive reflections. Erkenntnis 22:143-153. (Cited by 5 | Google)
A generalized Godelian argument: if our prescriptive inductive competence is formalizable, then we could not know that such a formalization is correct.
Raatikainen, P. 2002. McCall's Godelian argument is invalid. Facta Philosophica 4:167-69. (Google)
Redhead, M. 2004. Mathematics and the mind. British Journal for the Philosophy of Science 55. (Cited by 1 | Google)
Robinson, W. S. 1992. Penrose and mathematical ability. Analysis 52:80-88. (Google)
Penrose's argument depends on our knowledge of the validity of the algorithm we use, and here he equivocates between conscious and unconscious algorithms.
Schurz, G. 2002. McCall and Raatikainen on mechanism and incompleteness. Facta Philosophica 4:171-74. (Google)
Slezak, P. 1982. Godel's theorem and the mind. British Journal for the Philosophy of Science 33:41-52. (Google)
General analysis; Lucas commits type/token error; self-ref paradoxes.
Slezak, P. 1983. Descartes's diagonal deduction. British Journal for the Philosophy of Science 34:13-36. (Google)
Cogito was a diagonal argument; connection to Godel, Lucas, Minsky, Nagel.
Smart, J. J. C. 1961. Godel's theorem, Church's theorem, and mechanism. Synthese 13:105-10. (Google)
A machine could escape the Godelian argument by inductively ascertaining its own syntax. With comments on the relevance of ingenuity.
Tymoczko, T. 1991. Why I am not a Turing Machine: Godel's theorem and the philosophy of mind. In (J. Garfield, ed) Foundations of Cognitive Science. Paragon House. (Google)
Weak defense of Lucas; response to Putnam, Bowie, Dennett.
Wang, H. 1974. From Mathematics to Philosophy. London. (Cited by 56 | Google)
Webb, J. 1968. Metamathematics and the philosophy of mind. Philosophy of Science 35:156-78. (Cited by 1 | Google)
Webb, J. 1980. Mechanism, Mentalism and Metamathematics. Kluwer. (Cited by 23 | Google)
Whitely, C. 1962. Minds, machines and Godel: A reply to Mr. Lucas. Philosophy 37:61-62. (Google)
Humans get trapped too: "Lucas cannot consistently assert this formula".
Yu, Q. 1992. Consistency, mechanicalness, and the logic of the mind. Synthese 90:145-79. (Cited by 3 | Google)
Searle, J. R. 1980. Minds, brains and programs. Behavioral and Brain Sciences 3:417-57. (Cited by 703 | Google)
Implementing a program is not sufficient for mentality, as someone could e.g. implement a "Chinese-speaking" program without understanding Chinese. So strong AI is false, and no program is sufficient for consciousness.
Searle, J. R. 1984. Minds, Brains and Science. Harvard University Press. (Cited by 199 | Google)
Axiomatizes the argument: Syntax isn't sufficient for semantics, programs are syntactic, minds are semantic, so no program is sufficient for mind.
Searle, J. R. 1987. Minds and brains without programs. In (C. Blakemore, ed) Mindwaves. Blackwell. (Cited by 18 | Google)
More on the arguments against AI, e.g. the Chinese room and considerations about syntax and semantics. Mind is a high-level physical property of brain.
Searle, J. R. 1990. Is the brain's mind a computer program? Scientific American 262(1):26-31. (Google)
On the status of the Chinese Room argument, ten years on.
Searle, J. R. 2002. Twenty-one years in the Chinese room. In (J. Preston & M. Bishop, eds) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Cited by 3 | Google)
Adam, A. 2003. Cyborgs in the Chinese room: Boundaries transgressed and boundaries blurred. In (J. Preston & M. Bishop, eds) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Google)
Aleksander, I. 2003. Neural depictions of "world" and "self": Bringing computational understanding into the Chinese room. In (J. Preston & M. Bishop, eds) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Google)
Anderson, D. 1987. Is the Chinese room the real thing? Philosophy 62:389-93. (Cited by 4 | Google)
Boden, M. 1988. Escaping from the Chinese Room. In Computer Models of Mind. Cambridge University Press. (Cited by 7 | Google)
A procedural account of how computers might have understanding and semantics.
Ben-Yami, H. 1993. A note on the Chinese room. Synthese 95:169-72. (Cited by 2 | Google)
A fully functional Chinese room is impossible, as it (for instance) could not say what the time is.
Block, N. 2003. Searle's arguments against cognitive science. In (J. Preston & M. Bishop, eds) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Google)
Bringsjord, S. & Noel, R. 2003. Real robots and the missing thought-experiment in the Chinese room dialectic. In (J. Preston & M. Bishop, eds) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Google)
Bynum, T. W. 1985. Artificial intelligence, biology, and intentional states. Metaphilosophy 16:355-77. (Cited by 7 | Google)
A chess-playing machine embodied as a robot could have intentional states. Reference requires input/output, computation, and context.
Cam, P. 1990. Searle on strong AI. Australasian Journal of Philosophy 68:103-8. (Cited by 2 | Google)
Criticizes Searle's "conclusion" that brains are needed for intentionality, notes that even a homunculus has intentional states. A misinterpretation.
Carleton, L. 1984. Programs, language understanding, and Searle. Synthese 59:219-30. (Cited by 8 | Google)
Arguing against Searle on a number of fronts, somewhat unconvincingly.
Chalmers, D. J. 1992. Subsymbolic computation and the Chinese Room. In (J. Dinsmore, ed) The Symbolic and Connectionist Paradigms: Closing the Gap. Lawrence Erlbaum. (Cited by 23 | Google)
Gives an account of symbolic vs. subsymbolic computation, and argues that the latter is less vulnerable to the Chinese-room intuition, as representations there are not computational tokens.
Churchland, P. M. & Churchland, P. S. 1990. Could a machine think? Scientific American 262(1):32-37. (Cited by 62 | Google)
Artificial mentality is possible, not through classical AI but through brain-like AI. Argues the syntax/semantics point using an analogy with electromagnetism and luminance.
Cohen, L. J. 1986. What sorts of machines can understand the symbols they use? Aristotelian Society Supplement 60:81-96. (Cited by 3 | Google)
Cole, D. J. 1984. Thought and thought experiments. Philosophical Studies 45:431-44. (Cited by 10 | Google)
Lots of thought experiments like Searle's, against Searle. Searle's argument is like Leibniz's "mill" argument, with similar level confusions. Nice but patchy.
Cole, D. J. 1991. Artificial intelligence and personal identity. Synthese 88:399-417. (Cited by 9 | Google)
In the Chinese room, neither the person nor the system understands: a virtual person does. This person isn't the system, just as a normal person isn't a body. Follows from the "Kornese" room, which has two distinct understanders.
Cole, D. J. 1991. Artificial minds: Cam on Searle. Australasian Journal of Philosophy 69:329-33. (Cited by 3 | Google)
Cole, D. J. 1994. The causal powers of CPUs. In (E. Dietrich, ed) Thinking Computers and Virtual Persons. Academic Press. (Cited by 1 | Google)
Copeland, B. J. 1993. The curious case of the Chinese gym. Synthese 95:173-86. (Cited by 7 | Google)
Advocates the systems reply, and criticizes Searle's "Chinese Gym" response to connectionism: Searle (like those he accuses) confuses a simulation with the thing being simulated. Nice.
Copeland, B. J. 2003. The Chinese room from a logical point of view. In (J. Preston & M. Bishop, eds) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Cited by 3 | Google)
Coulter, J. & Sharrock, S. 2003. The hinterland of the Chinese room. In (J. Preston & M. Bishop, eds) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Google)
Dennett, D. C. 1987. Fast thinking. In The Intentional Stance. MIT Press. (Cited by 11 | Google)
Argues with Searle on many points. A little weak.
Double, R. 1983. Searle, programs and functionalism. Nature and System 5:107-14. (Cited by 3 | Google)
The homunculus doesn't have access to the system's intentionality. The syntax/semantics relation is like the neurophysiology/mind relation.
Dyer, M. 1990. Intentionality and computationalism: minds, machines, Searle and Harnad. Journal of Experimental and Theoretical Artificial Intelligence 2:303-19. (Cited by 15 | Google)
Reply to Searle/Harnad: systems reply, level confusions, etc.
Dyer, M. 1990. Finding lost minds. Journal of Experimental and Theoretical Artificial Intelligence 2:329-39. (Google)
Reply to Harnad 1990: symbols, other minds, physically embodied algorithms.
Fields, C. 1984. Double on Searle's Chinese Room. Nature and System 6:51-54. (Google)
Double's argument implies that the brain isn't the basis of intentionality.
Fisher, J. 1988. The wrong stuff: Chinese rooms and the nature of understanding. Philosophical Investigations 11:279-99. (Cited by 2 | Google)
Fodor, J. A. 1991. Yin and Yang in the Chinese Room. In (D. Rosenthal, ed) The Nature of Mind. Oxford University Press. (Cited by 2 | Google)
The Chinese room isn't even implementing a Turing machine, because it doesn't use proximal causation. With a reply by Searle.
Globus, G. 1991. Deconstructing the Chinese room. Journal of Mind and Behavior 12:377-91. (Cited by 1 | Google)
Gozzano, S. 1995. Consciousness and understanding in the Chinese room. Informatica 19:653-56. (Google)
Hanna, P. 1985. Causal powers and cognition. Mind 94:53-63. (Cited by 2 | Google)
Argues that Searle is confused, and underestimates computers. Weak.
Harnad, S. 1989. Minds, machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence 1:5-25. (Cited by 76 | Google)
Non-symbolic function is necessary for mentality. Trying hard to work out a theory of why the Chinese Room shows what it does. Nice but wrong.
Harnad, S. 1990. Lost in the hermeneutical hall of mirrors. Journal of Experimental and Theoretical Artificial Intelligence 2:321-27. (Google)
Reply to Dyer 1990: on the differences between real and as-if intentionality.
Harnad, S. 2003. Minds, machines, and Searle 2: What's right and wrong about the Chinese room argument. In (J. Preston & M. Bishop, eds) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Cited by 3 | Google)
Hauser, L. 1997. Searle's Chinese box: Debunking the Chinese room argument. Minds and Machines 7:199-226. (Google)
Hauser, L. 2003. Nixin' goes to China. In (J. Preston & M. Bishop, eds) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Cited by 3 | Google)
Hayes, P., Harnad, S., Perlis, D., & Block, N. 1992. Virtual symposium on virtual mind. Minds and Machines 2. (Cited by 15 | Google)
A discussion about the Chinese room, symbol grounding, and so on.
Hofstadter, D. R. 1981. Reflections on Searle. In (D. Hofstadter & D. Dennett, eds) The Mind's I, pp. 373-382. Basic Books. (Cited by 1 | Google)
Searle is committing a level confusion, and understates the complexity of the case. We can move from the CR to a brain (with a demon) by twiddling knobs, and the systems reply should work equally well in both cases.
Jacquette, D. 1989. Searle's intentionality thesis. Synthese 80:267-75. (Google)
Searle's view implies that intentional causation is not efficient causation.
Jacquette, D. 1989. Adventures in the Chinese Room. Philosophy and Phenomenological Research 49:605-23. (Cited by 2 | Google)
If we had microfunctional correspondence, the CR argument would fail. With points about the status of abstract/biological intentionality. A bit weak.
Searle, J. R. 1989. Reply to Jacquette. Philosophy and Phenomenological Research 49:701-8. (Cited by 4 | Google)
Jacquette misses the point of the argument. Also, biological and abstract intentionality are quite compatible.
Jacquette, D. 1990. Fear and loathing (and other intentional states) in Searle's Chinese Room. Philosophical Psychology 3:287-304. (Google)
Reply to Searle on CR, central control, biological intentionality & dualism.
Jahren, N. 1990. Can semantics be syntactic? Synthese 82:309-28. (Cited by 1 | Google)
Against Rapaport's Korean Room argument -- syntax isn't enough.
King, D. 2001. Entering the Chinese room with Castaneda's principle (p). Philosophy Today 45:168-174.
Korb, K. 1991. Searle's AI program. Journal of Experimental and Theoretical Artificial Intelligence 3:283-96. (Google)
The Chinese room doesn't succeed as an argument about semantics. At best it might succeed as an argument about consciousness.
Maloney, J. C. 1987. The right stuff. Synthese 70:349-72. (Cited by 7 | Google)
Defends Searle against all kinds of objections.
Melnyk, A. 1996. Searle's abstract argument against strong AI. Synthese 108:391-419. (Google)
Moor, J. H. 1988. The pseudorealization fallacy and the Chinese Room argument. In (J. Fetzer, ed) Aspects of AI. D. Reidel. (Cited by 2 | Google)
Computational systems must also meet performance criteria.
Newton, N. 1989. Machine understanding and the Chinese Room. Philosophical Psychology 2:207-15. (Cited by 1 | Google)
A program can possess intentionality, even if not consciousness.
Obermeier, K. K. 1983. Wittgenstein on language and artificial intelligence: The Chinese-room thought-experiment revisited. Synthese 56:339-50. (Cited by 2 | Google)
Penrose, R. 2003. Consciousness, computation, and the Chinese room. In (J. Preston & M. Bishop, eds) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Cited by 1 | Google)
Pfeifer, K. 1992. Searle, strong AI, and two ways of sorting cucumbers. Journal of Philosophical Research 17:347-50. (Cited by 1 | Google)
Preston, J. & Bishop, M. 2002. Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Cited by 4 | Google)
Proudfoot, D. 2003. Wittgenstein's anticipation of the Chinese room. In (J. Preston & M. Bishop, eds) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Google)
Rapaport, W. 1986. Searle's experiments with thought. Philosophy of Science 53:271-9. (Google)
Comments on Cole, and some general material on syntax and semantics.
Rey, G. 1986. What's really going on in Searle's `Chinese Room'. Philosophical Studies 50:169-85. (Google)
Recommends the systems reply, and a causal account of semantics. Discusses the relevance of wide and narrow notions of content, and the tension between Searle's positive and negative proposals.
Rey, G. 2003. Searle's misunderstandings of functionalism and strong AI. In (J. Preston & M. Bishop, eds) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Google)
Roberts, L. 1990. Searle's extension of the Chinese Room to connectionist machines. Journal of Experimental and Theoretical Artificial Intelligence 2:185-7. (Google)
In arguing against the relevance of the serial/parallel distinction to mental states, Searle becomes a formalist. A nice point.
Russow, L-M. 1984. Unlocking the Chinese Room. Nature and System 6:221-8. (Cited by 1 | Google)
Searle's presence in the room destroys the integrity of the system, so that it is no longer a proper implementation of the program.
Seidel, A. 1988. Searle on the biological basis of cognition. Analysis 48:26-28. (Google)
Seidel, A. 1989. Chinese Rooms A, B and C. Pacific Philosophical Quarterly 20:167-73.
A person running the program, with interpretations at hand, would understand. Point-missing.
Sharvy, R. 1985. Searle on programs and intentionality. Canadian Journal of Philosophy Supplement 11:39-54. (Cited by 1 | Google)
Argues against Searle, but misses the point for the most part.
Simon, H. A. & Eisenstadt, S. A. 2003. A Chinese room that understands. In (J. Preston & M. Bishop, eds) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Cited by 2 | Google)
Sloman, A. 1986. Did Searle attack Strong Strong AI or Weak Strong AI? In (Cohn & Thomas, eds) Artificial Intelligence and its Applications. Chichester. (Cited by 3 | Google)
Suits, D. 1989. Out of the Chinese Room. Computing and Philosophy Newsletter 4:1-7. (Cited by 1 | Google)
Story about homunculi within homunculi. Fun.
Taylor, J. G. 2003. Do virtual actions avoid the Chinese room? In (J. Preston & M. Bishop, eds) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Cited by 2 | Google)
Teng, N. Y. 2000. A cognitive analysis of the Chinese room argument. Philosophical Psychology 13:313-24. (Google)
Thagard, P. 1986. The emergence of meaning: An escape from Searle's Chinese Room. Behaviorism 14:139-46. (Cited by 6 | Google)
Get semantics computationally via induction and functional roles.
Wakefield, J. 2003. The Chinese room argument reconsidered: Essentialism, indeterminacy, and strong AI. Minds and Machines 13:285-319. (Google)
Weiss, T. 1990. Closing the Chinese room. Ratio 3:165-81. (Cited by 4 | Google)
Searle-in-the-room isn't in a position to know about the system's first-person states. Intrinsic intentionality is an incoherent notion.
Wheeler, M. 2003. Changes in the rules: Computers, dynamic systems, and Searle. In (J. Preston & M. Bishop, eds) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Google)
Whitmer, J. M. 1983. Intentionality, artificial intelligence, and the causal powers of the brain. Auslegung 10:194-210. (Google)
Defending Searle's position, with remarks on the "causal powers" argument.
Adams, W. 2004. Machine consciousness: Plausible idea or semantic distortion? Journal of Consciousness Studies 11(9):46-56. (Google)
Aleksander, I. & Dunmall, B. 2003. Axioms and tests for the presence of minimal consciousness in agents I: Preamble. Journal of Consciousness Studies 10(4-5):7-18. (Cited by 6 | Google)
Angel, L. 1989. How to Build a Conscious Machine. Westview Press. (Cited by 2 | Google)
Angel, L. 1994. Am I a computer? In (E. Dietrich, ed) Thinking Computers and Virtual Persons. Academic Press. (Google)
Arrington, R. 1999. Machines, consciousness, and thought. Idealistic Studies 29:231-243. (Google)
Barnes, E. 1991. The causal history of computational activity: Maudlin and Olympia. Journal of Philosophy 88:304-16. (Cited by 2 | Google)
Response to Maudlin 1989. True computation needs active, not passive causation, so Maudlin's machine isn't really computing.
Birnbacher, D. 1995. Artificial consciousness. In (T. Metzinger, ed) Conscious Experience. Ferdinand Schoningh. (Google)
Bringsjord, S. 1992. What Robots Can and Can't Be. Kluwer. (Cited by 63 | Google)
Bringsjord, S. 1994. Could, how could we tell if, and should -- androids have inner lives? In (K. M. Ford, C. Glymour, & P. Hayes, eds) Android Epistemology. MIT Press. (Cited by 14 | Google)
Buttazzo, G. 2001. Artificial consciousness: Utopia or real possibility? Computer 34:24-30. (Cited by 11 | Google)
Caplain, G. 1995. Is consciousness a computational property? Informatica 19:615-19. (Google)
Coles, L. S. 1993. Engineering machine consciousness. AI Expert 8:34-41. (Google)
Cotterill, R. 2003. Cyberchild: A Simulation test-bed for consciousness studies. Journal of Consciousness Studies 10(4-5):31-45. (Cited by 2 | Google)
D'Aquili, E. G. & Newberg, A. B. 1996. Consciousness and the machine. Zygon 31:235-52. (Google)
Danto, A. 1960. On consciousness in machines. In (S. Hook, ed) Dimensions of Mind. New York University Press. (Google)
Dennett, D. C. 1994. The practical requirements for making a conscious robot. Philosophical Transactions of the Royal Society A 349:133-46. (Cited by 20 | Google)
Dennett, D. C. 1995. Cog: Steps toward consciousness in robots. In (T. Metzinger, ed) Conscious Experience. Ferdinand Schoningh. (Google)
Franklin, S. 2003. A conscious artifact? Journal of Consciousness Studies 10(4-5):47-66. (Google)
Glennan, S. S. 1995. Computationalism and the problem of other minds. Philosophical Psychology 8:375-88. (Google)
Gunderson, K. 1968. Robots, consciousness and programmed behaviour. British Journal for the Philosophy of Science 19:109-22. (Google)
Gunderson, K. 1969. Cybernetics and mind-body problems. Inquiry 12:406-19. (Google)
Gunderson, K. 1971. Mentality and Machines. Doubleday. (Cited by 13 | Google)
Harnad, S. 2003. Can a machine be conscious? How? Journal of Consciousness Studies 10(4-5):67-75. (Cited by 2 | Google)
Hillis, D. 1998. Can a machine be conscious? In (S. Hameroff, A. Kaszniak, & A. Scott, eds) Toward a Science of Consciousness II. MIT Press. (Google)
Holland, O. & Goodman, R. 2003. Robots with internal models: A route to machine consciousness? Journal of Consciousness Studies 10(4-5):77-109. (Cited by 3 | Google)
Kirk, R. 1986. Sentience, causation and some robots. Australasian Journal of Philosophy 64:308-21. (Google)
One could model brain states with monadic states and appropriate connections. But surely that's not intelligent -- the causation has the wrong form. Nice.
Kitamura, T., Tahara, T., & Asami, K. 2000. How can a robot have consciousness? Advanced Robotics 14:263-275. (Cited by 2 | Google)
Kitamura, T. 2002. What is the self of a robot? On a consciousness architecture for a mobile robot as a model of human consciousness. In (K. Yasue, M. Jibu, & T. Senta, eds) No Matter, Never Mind. John Benjamins. (Google)
Lucas, J. R. 1994. A view of one's own (conscious machines). Philosophical Transactions of the Royal Society, Series A 349:147-52.
Lycan, W. G. 1998. Qualitative experience in machines. In (T. Bynum & J. Moor, eds) How Computers are Changing Philosophy. Blackwell. (Google)
Maudlin, T. 1989. Computation and consciousness. Journal of Philosophy 86:407-32. (Cited by 16 | Google)
Computational state is not sufficient for consciousness, as it can be instantiated by a mostly inert object. A nice thought-experiment, raising questions about the relevance of counterfactuals to consciousness.
McCarthy, J. 1996. Making robots conscious of their mental states. In (S. Muggleton, ed) Machine Intelligence 15. Oxford University Press. (Cited by 45 | Google)
McGinn, C. 1987. Could a machine be conscious? In (C. Blakemore & S. Greenfield, eds) Mindwaves. Blackwell. Reprinted in The Problem of Consciousness (Blackwell, 1991). (Cited by 1 | Google)
Of course, as we are machines. But what sort of machines are conscious, and in virtue of what properties? Remarks on artefacts, life, functionalism, and computationalism. So far, we don't know what makes the brain conscious.
Prinz, J. J. 2003. Level-headed mysterianism and artificial experience. Journal of Consciousness Studies 10(4-5):111-132. (Cited by 2 | Google)
Puccetti, R. 1967. On thinking machines and feeling machines. British Journal for the Philosophy of Science 18:39-51. (Google)
Machines can think but can't feel, so aren't persons.
Putnam, H. 1964. Robots: machines or artificially created life? Journal of Philosophy 61:668-91. Reprinted in Mind, Language, and Reality (Cambridge University Press, 1975). (Cited by 5 | Google)
Various arguments and counter-arguments re machine consciousness and civil liberties. Problems of machine consciousness are analogous to problems of human consciousness. The structural basis of the two may well be the same.
Putnam, H. 1967. The mental life of some machines. In (H. Castaneda, ed) Intentionality, Minds and Perception. Wayne State University Press. Reprinted in Mind, Language, and Reality (Cambridge University Press, 1975). (Cited by 17 | Google)
More on TMs: explaining their psychology via preference functions.
Schlagel, R. 1999. Why not artificial consciousness or thought? Minds and Machines 9:3-28. (Cited by 4 | Google)
Scriven, M. 1953. The mechanical concept of mind. Mind. (Cited by 6 | Google)
To speak of a conscious machine is to commit a semantic mistake. Consciousness presupposes life and non-mechanism. Later retracted.
Sloman, A. & Chrisley, R. 2003. Virtual machines and consciousness. Journal of Consciousness Studies 10(4-5):133-172. (Cited by 11 | Google)
Stubenberg, L. 1992. What is it like to be Oscar? Synthese 90:1-26. (Google)
Argues that AI systems like Pollock's Oscar needn't be conscious. Blindsight tells us that complex perceptual processing can go on unconsciously.
Thompson, D. 1965. Can a machine be conscious? British Journal for the Philosophy of Science 16:36. (Google)
Accepting machine consciousness would have few philosophical consequences, whereas rejecting it would tend to commit one to epiphenomenalism.
van de Vete, D. 1971. The problem of robot consciousness. Philosophy and Phenomenological Research 32:149-65. (Google)
Ziff, P. 1959. The feelings of robots. Analysis. (Cited by 4 | Google)
Of course robots can't think: they're not alive, so this gives us good reason not to rely on behavior. With replies by J.J.C. Smart, N. Smart.
Bringsjord, S. 1998. Cognition is not computation: The argument from irreversibility. Synthese. (Cited by 8 | Google)
Burks, A. W. 1973. Logic, computers, and men. Proceedings and Addresses of the American Philosophical Association 46:39-57. (Cited by 3 | Google)
Arguing that a finite deterministic automaton can perform all natural human functions. With remarks on the logical organization of computers.
Cohen, L. J. 1955. Can there be artificial minds? Analysis 16:36-41. (Google)
Subservience to known or knowable rules is incompatible with mentality.
Copeland, B. J. 2000. Narrow versus wide mechanism: Including a re-examination of Turing's views on the mind-machine issue. Journal of Philosophy 97:5-33. (Cited by 23 | Google)
Dennett, D. C. 1985. Can machines think? In (M. Shafto, ed) How We Know. Harper & Row.
Defends the Turing Test, among other things.
Dretske, F. 1985. Machines and the mental. Proceedings and Addresses of the American Philosophical Association 59:23-33. (Cited by 16 | Google)
Dretske, F. 1993. Can intelligence be artificial? Philosophical Studies 71:201-16. (Cited by 1 | Google)
Intelligence requires not just action or thought, but the governance of action by thought, which requires a history. "Wired-up" systems lack the explanatory connection between thought and action, so are not intelligent.
Dreyfus, H. L. 1972. What Computers Can't Do. Harper and Row. (Cited by 193 | Google)
Computers follow rules, people don't.
Hauser, L. 1993. Why isn't my pocket calculator a thinking thing? Minds and Machines 3:3-10. (Google)
Henley, T. B. 1990. Natural problems and artificial intelligence. Behavior and Philosophy 18:43-55. (Cited by 2 | Google)
On the philosophical importance of criteria for intelligence. With remarks on Searle, the Turing test, attitudes to AI, and ethical considerations.
Kearns, J. T. 1997. Thinking machines: Some fundamental confusions. Minds and Machines 7:269-87. (Cited by 5 | Google)
Lanier, J. 1998. Three objections to the idea of artificial intelligence. In (S. Hameroff, A. Kaszniak, & A. Scott, eds) Toward a Science of Consciousness II. MIT Press. (Google)
Mackay, D. M. 1951. Mind-life behavior in artifacts. British Journal for the Philosophy of Science 2:105-21. (Google)
Mackay, D. M. 1952. Mentality in machines. Aristotelian Society Supplement 26:61-86. (Cited by 1 | Google)
Manning, R. C. 1987. Why Sherlock Holmes can't be replaced by an expert system. Philosophical Studies 51:19-28. (Cited by 3 | Google)
An expert system would lack Holmes' ability to raise the right questions, sort out relevant data, and determine what data are in need of explanation.
Mays, W. 1952. Can machines think? Philosophy 27:148-62. (Cited by 2 | Google)
McCarthy, J. 1979. Ascribing mental qualities to machines. In (M. Ringle, ed) Philosophical Perspectives in Artificial Intelligence. Humanities Press. (Cited by 117 | Google)
Negley, G. 1951. Cybernetics and theories of mind. Journal of Philosophy 48:574-82. (Google)
Preston, B. 1995. The ontological argument against the mind-machine hypothesis. Philosophical Studies 80:131-57. (Google)
Lucas, Searle, and Penrose all fall prey to "dual-description" fallacies.
Proudfoot, D. 2004. The implications of an externalist theory of rule-following behavior for robot cognition. Minds and Machines 14:283-308. (Google)
Puccetti, R. 1966. Can humans think? Analysis. (Google)
Rapaport, W. 1993. Because mere calculating isn't thinking: Comments on Hauser's "Why isn't my pocket calculator a thinking thing?". Minds and Machines 3:11-20. (Cited by 2 | Google)
Ronald, E. & Sipper, M. 2001. Intelligence is not enough: On the socialization of talking machines. Minds and Machines 11:567-576. (Cited by 2 | Google)
Scriven, M. 1960. The compleat robot: A prolegomena to androidology. In (S. Hook, ed) Dimensions of Mind. New York University Press. (Google)
A machine could possess every characteristic of human thought: e.g. freedom, creativity, learning, understanding, perceiving, feeling.
Spilsbury, R. J. 1952. Mentality in machines. Aristotelian Society Supplement 26:27-60. (Google)
Cummins, R. 1996. Why there is no symbol grounding problem? In Representations, Targets, and Attitudes. MIT Press. (Google)
Harnad, S. 1990. The symbol grounding problem. Physica D 42:335-346. (Cited by 668 | Google)
AI symbols are empty and meaningless. They need to be "grounded" in something, e.g. sensory projection. Maybe connectionism can do the trick?
Harnad, S. 1992. Connecting object to symbol in modeling cognition. In (A. Clark & R. Lutz, eds) Connectionism in Context. Springer-Verlag. (Cited by 54 | Google)
On the limitations of symbol systems, and the potential for grounding symbols in sensory icons and categorical perception, e.g. with neural networks.
Kosslyn, S. M. & Hatfield, G. 1984. Representation without symbol systems. Social Research 51:1019-1045. (Cited by 13 | Google)
Harnad, S. 2002. Symbol grounding and the origin of language. In (M. Scheutz, ed) Computationalism: New Directions. MIT Press. (Cited by 7 | Google)
MacDorman, K. F. 1997. How to ground symbols adaptively. In (S. O'Nuallain, P. McKevitt, & E. MacAogain, eds) Two Sciences of Mind. John Benjamins. (Cited by 1 | Google)
Newell, A. 1980. Physical symbol systems. Cognitive Science 4:135-83. (Cited by 250 | Google)
Newell, A. & Simon, H. A. 1976. Computer science as empirical inquiry: Symbols and search. Communications of the Association for Computing Machinery 19:113-26. Reprinted in (J. Haugeland, ed) Mind Design. MIT Press. (Cited by 387 | Google)
On computer science, AI, & the Physical Symbol System Hypothesis.
Robinson, W. S. 1995. Brain symbols and computationalist explanation. Minds and Machines 5:25-44. (Cited by 3 | Google)
Sun, R. 2000. Symbol grounding: a new look at an old idea. Philosophical Psychology 13:149-172. (Cited by 29 | Google)
Fodor, J. A. 1978. Tom Swift and his procedural grandmother. Cognition 6:229-47. Reprinted in RePresentations (MIT Press, 1980). (Cited by 17 | Google)
Against procedural semantics; it's a rerun of verificationism.
Hadley, R. F. 1990. Truth conditions and procedural semantics. In (P. Hanson, ed) Information, Language and Cognition. University of British Columbia Press. (Google)
Johnson-Laird, P. 1977. Procedural semantics. Cognition 5:189-214. (Cited by 20 | Google)
Johnson-Laird, P. 1978. What's wrong with Grandma's guide to procedural semantics: A reply to Jerry Fodor. Cognition 6:249-61. (Google)
McDermott, D. 1978. Tarskian semantics, or no notation without denotation. Cognitive Science 2:277-82. (Cited by 20 | Google)
On the virtues of denotational semantics for AI. Notation without denotation, as found in many AI systems, leads to castles in the air.
Perlis, D. 1991. Putting one's foot in one's head -- Part 1: Why. Nous 25:435-55. (Google)
Perlis, D. 1994. Putting one's foot in one's head -- Part 2: How. In (E. Dietrich, ed) Thinking Computers and Virtual Persons. Academic Press. (Google)
Rapaport, W. J. 1988. Syntactic semantics: Foundations of computational natural language understanding. In (J. Fetzer, ed) Aspects of AI. Kluwer. (Cited by 31 | Google)
Rapaport, W. J. 1995. Understanding understanding: Syntactic semantics and computational cognition. Philosophical Perspectives 9:49-88. (Cited by 15 | Google)
Smith, B. 1988. On the semantics of clocks. In (J. Fetzer, ed) Aspects of AI. Kluwer. (Google)
Smith, B. 1987. The correspondence continuum. CSLI-87-71. (Cited by 27 | Google)
Wilks, Y. 1982. Some thoughts on procedural semantics. In (W. Lehnert, ed) Strategies for Natural Language Processing. Lawrence Erlbaum. (Cited by 5 | Google)
Wilks, Y. 1990. Form and content in semantics. Synthese 82:329-51. (Cited by 5 | Google)
Criticism of McDermott's views on semantics, logic and natural language.
Winograd, T. 1985. Moving the semantic fulcrum. Linguistics and Philosophy 8:91-104. (Cited by 6 | Google)
Woods, W. 1981. Procedural semantics as a theory of meaning. In (A. Joshi, B. Webber, & I. Sag, eds) Elements of Discourse Understanding. Cambridge University Press. (Cited by 26 | Google)
Woods, W. 1986. Problems in procedural semantics. In (Z. Pylyshyn & W. Demopolous, eds) Meaning and Cognitive Structure. Ablex. (Google)
With commentaries by Haugeland and J. D. Fodor.
Clark, A. 1991. In defense of explicit rules. In (W. Ramsey, S. Stich, & D. Rumelhart, eds) Philosophy and Connectionist Theory. Lawrence Erlbaum. (Cited by 7 | Google)
Argues that we need explicit rules for flexibility, adaptability, and representational redescription. With remarks on eliminativism.
Cummins, R. 1986. Inexplicit information. In (M. Brand & R. Harnish, eds) The Representation of Knowledge and Belief. University of Arizona Press. (Cited by 12 | Google)
On various kinds of representation of knowledge or belief without explicit tokens: control-implicit, domain-implicit, and procedural information. The key distinction is representation vs. execution of a rule.
Davies, M. 1995. Two notions of implicit rules. Philosophical Perspectives 9:153-83. (Cited by 13 | Google)
Hadley, R. F. 1990. Connectionism, rule-following, and symbolic manipulation. Proc AAAI. (Google)
Some rules are learnt so quickly that representation must be explicit.
Hadley, R. F. 1993. Connectionism, explicit rules, and symbolic manipulation. Minds and Machines 3. (Cited by 11 | Google)
Hadley, R. F. 1995. The `explicit-implicit' distinction. Minds and Machines 5:219-42. (Cited by 20 | Google)
Kirsh, D. 1990. When is information explicitly represented? In (P. Hanson, ed) Information, Language and Cognition. University of British Columbia Press. (Cited by 38 | Google)
Skokowski, P. G. 1994. Can computers carry content "inexplicitly"? Minds and Machines 4:333-44. (Cited by 2 | Google)
Cummins' account of inexplicit information fails, as even "executed" rules must be represented in the system. With remarks on the Chinese room.
Bechtel, W. 1996. Yet another revolution: Defusing the dynamical system theorists' attack on mental representations. Manuscript. (Cited by 2 | Google)
Brooks, R. 1991. Intelligence without representation. Artificial Intelligence 47:139-159. (Cited by 1415 | Google)
We don't need explicit representation; the world can do the job instead. Use embodied, complete systems, starting simple and working incrementally.
Clark, A. & Toribio, J. 1994. Doing without representing. Synthese 101:401-31. (Cited by 50 | Google)
A discussion of anti-representationalism in situated robotics and the dynamic systems movement (Brooks, Beer, van Gelder). These arguments appeal to overly simple domains, and a modest notion of representation survives.
Keijzer, F. A. 1998. Doing without representations which specify what to do. Philosophical Psychology 11:269-302. (Cited by 11 | Google)
Kirsh, D. 1991. Today the earwig, tomorrow man? Artificial Intelligence 47:161-184. (Cited by 73 | Google)
van Gelder, T. 1995. What might cognition be if not computation? Journal of Philosophy 92:345-81. (Cited by 142 | Google)
Argues for a dynamic-systems conception of the mind that is non-computational and non-representational. Uses an analogy with the Watt steam governor to argue for a new kind of dynamic explanation.
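For concreteness, here is a minimal numerical sketch of the flyball-arm dynamics behind van Gelder's Watt-governor analogy. The equation is the standard one for the spinning arm (centrifugal, gravitational, and frictional terms); the parameter values are arbitrary and the engine-throttle feedback loop of the full governor is omitted, so this illustrates the dynamical style of description rather than reconstructing van Gelder's own model.

```python
# Illustrative only: flyball-arm dynamics of a Watt governor at a fixed engine speed.
# Parameters are arbitrary assumptions; the coupling back to the engine is omitted.
import math

def arm_acceleration(theta, theta_dot, omega, n=1.0, g=9.8, l=0.3, b=0.5):
    """Angular acceleration of the arm (angle theta from the vertical spindle):
    centrifugal term minus gravitational restoring term minus friction."""
    return (n * omega) ** 2 * math.sin(theta) * math.cos(theta) \
           - (g / l) * math.sin(theta) - b * theta_dot

def simulate(omega, theta0=0.1, steps=20000, dt=0.001):
    """Integrate with a simple Euler scheme and return the settled arm angle."""
    theta, theta_dot = theta0, 0.0
    for _ in range(steps):
        theta_dot += arm_acceleration(theta, theta_dot, omega) * dt
        theta += theta_dot * dt
    return theta

if __name__ == "__main__":
    # Faster spindle speeds settle the arms at higher angles -- the continuously varying
    # quantity that, in the full device, modulates the throttle without any rule-following.
    for omega in (6.0, 8.0, 10.0):
        print(f"engine speed {omega:4.1f} rad/s -> arm angle {simulate(omega):.3f} rad")
```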
Chrisley, R. L. 1994. Taking embodiment seriously: Nonconceptual content and robotics. In (K. M. Ford, C. Glymour, & P. Hayes, eds) Android Epistemology. MIT Press. (Cited by 5 | Google)
Dietrich, E. 1988. Computers, intentionality, and the new dualism. Manuscript. (Google)
Dreyfus, H. L. 1979. A framework for misrepresenting knowledge. In (M. Ringle, ed) Philosophical Perspectives in Artificial Intelligence. Humanities Press. (Cited by 4 | Google)
On the problems with context-free symbolic representation.
Fields, C. 1994. Real machines and virtual intentionality: An experimentalist takes on the problem of representational content. In (E. Dietrich, ed) Thinking Computers and Virtual Persons. Academic Press. (Google)
Haugeland, J. 1981. Semantic engines: An introduction to mind design. In (J. Haugeland, ed) Mind Design. MIT Press. (Cited by 39 | Google)
Robinson, W. S. 1995. Direct representation. Philosophical Studies 80:305-22. (Cited by 1 | Google)
On Searle's critique of computational explanation, contrasted with Gallistel's use thereof. The real issue is computation on indirect vs. direct representations; direct computationalism is an attractive view.
Fodor, J. A. & Pylyshyn, Z. W. 1988. Connectionism and cognitive architecture. Cognition 28:3-71. (Cited by 769 | Google)
Connectionist models can't explain cognitive systematicity and productivity, as their representations lack compositional structure. The allures of connectionism are illusory; it's best used as an implementation strategy.
Aizawa, K. 1997. Explaining systematicity. Mind and Language 12:115-36. (Cited by 10 | Google)
Aizawa, K. 1997. The role of the systematicity argument in classicism and connectionism. In (S. O'Nuallain, ed) Two Sciences of Mind. John Benjamins. (Cited by 1 | Google)
Aizawa, K. 1997. Exhibiting versus explaining systematicity: A reply to Hadley and Hayward. Minds and Machines 7:39-55. (Google)
Antony, M. V. 1991. Fodor and Pylyshyn on connectionism. Minds and Machines 1:321-41. (Cited by 1 | Google)
Fodor and Pylyshyn's argument is an invalid instance of inference to the best explanation, as there is much more to explain than systematicity. Connectionism and classicism may be compatible even without implementation, in any case.
Aydede, M. 1997. Language of thought: The connectionist contribution. Minds and Machines 7:57-101. (Cited by 15 | Google)
Butler, K. 1991. Towards a connectionist cognitive architecture. Mind and Language 6:252-72. (Cited by 9 | Google)
Connectionism can make do with unstructured representations, as long as they have the right causal relations between them.
Butler, K. 1993. Connectionism, classical cognitivism, and the relation between cognitive and implementational levels of analysis. Philosophical Psychology 6:321-33. (Cited by 3 | Google)
Contra Chalmers 1993, F&P's argument doesn't apply at the implementational level. Contra Chater and Oaksford 1990, connectionism can't be purely implementational, but some implementational details can be relevant.
Butler, K. 1993. On Clark on systematicity and connectionism. British Journal for the Philosophy of Science 44:37-44. (Google)
Argues against Clark on holism and the conceptual truth of systematicity.
Butler, K. 1995. Compositionality in cognitive models: The real issue. Philosophical Studies 78:153-62. (Cited by 1 | Google)
Chalmers, D. J. 1990. Syntactic transformations on distributed representations. Connection Science 2:53-62. (Cited by 134 | Google)
An experimental demonstration that connectionist models can handle structure-sensitive operations in a non-classical way, transforming structured representations of active sentences to passive sentences.Chalmers, D. J. 1993. Connectionism and compositionality: Why Fodor and Pylyshyn were wrong. Philosophical Psychology 6:305-319. (Cited by 15 | Google)
Points out a structural flaw in F&P's argument, and traces the problem to a lack of appreciation of distributed representation. With some empirical results on structure-sensitive processing, and some remarks on explanation.Chater, N. & Oaksford, M. 1990. Autonomy, implementation and cognitive architecture: A reply to Fodor and Pylyshyn. Cognition 34:93-107. (Cited by 21 | Google)
Implementation can make a difference at the algorithmic level.Christiansen, M. H. & Chater, N. 1994. Generalization and connectionist language learning. Mind and Language 9:273-87. (Cited by 29 | Google)
Cummins, R. 1996. Systematicity. Journal of Philosophy 93:591-614. (Cited by 10 | Google)
Fetzer, J. H. 1992. Connectionism and cognition: Why Fodor and Pylyshyn are wrong. In (A. Clark & R. Lutz, eds) Connectionism in Context. Springer-Verlag. (Cited by 9 | Google)
Fodor, J. A. & McLaughlin, B. P. 1990. Connectionism and the problem of systematicity: Why Smolensky's solution doesn't work. Cognition 35:183-205. (Cited by 118 | Google)
Smolensky's weak compositionality is useless, and the tensor product architecture can't support systematicity, as nonexistent tokens can't play a causal role.Fodor, J. A. 1997. Connectionism and the problem of systematicity (continued): Why Smolensky's solution still doesn't work. Cognition 62:109-19. (Cited by 15 | Google)
Garcia-Carpintero, M. 1996. Two spurious varieties of compositionality. Minds and Machines 6:159-72. (Google)
Garfield, J. 1997. Mentalese not spoken here: Computation, cognition, and causation. Philosophical Psychology 10:413-35. (Cited by 4 | Google)
Guarini, M. 1996. Tensor products and split-level architecture: Foundational issues in the classicism-connectionism debate. Philosophy of Science 63:S239-47. (Google)
Hadley, R. F. 1997. Cognition, systematicity, and nomic necessity. Mind and Language 12:137-53. (Cited by 8 | Google)
Hadley, R. F. 1994. Systematicity in connectionist language learning. Mind and Language 9:247-72. (Cited by 52 | Google)
Argues that existing connectionist models do not achieve adequate systematicity in learning; they fail to generalize to handle structures with novel constituents.Hadley, R. F. 1994. Systematicity revisited. Mind and Language 9:431-44. (Cited by 17 | Google)
Hadley, R. F. & Hayward, M. B. 1997. Strong semantic systematicity from Hebbian connectionist learning. Minds and Machines 7:1-55. (Google)
Hadley, R. F. 1997. Explaining systematicity: A reply to Kenneth Aizawa. Minds and Machines 12:571-79. (Cited by 1 | Google)
Hawthorne, J. 1989. On the compatibility of connectionist and classical models. Philosophical Psychology 2:5-16. (Cited by 6 | Google)
Localist connectionist models may not be able to handle structured representations, but appropriate distributed models can.Horgan, T. & Tienson, J. 1991. Structured representations in connectionist systems? In (Davis, ed) Connectionism: Theory and Practice. (Cited by 7 | Google)
A discussion of how connectionism might achieve "effective syntax" without implementing a classical system.Matthews, R. J. 1994. Three-concept monte: Explanation, implementation, and systematicity. Synthese 101:347-63. (Cited by 11 | Google)
F&P deal a sucker bet: on their terms, connectionism could never give a non-implementational explanation of systematicity, as the notions are construed in a manner specific to classical architectures.Matthews, R. J. 1997. Can connectionists explain systematicity? Mind and Language 12:154-77. (Cited by 4 | Google)
McLaughlin, B. P. 1992. Systematicity, conceptual truth, and evolution. In Philosophy and the Cognitive Sciences. (Cited by 11 | Google)
Against responses to Fodor and Pylyshyn claiming that cognitive theories needn't explain systematicity. Contra Clark, the conceptual truth of systematicity won't help. Contra others, nor will evolution.McLaughlin, B. P. 1993. The connectionism/classicism battle to win souls. Philosophical Studies 71. (Cited by 15 | Google)
Argues that no connectionist model so far has come close to explaining systematicity. Considers the models of Elman, Chalmers, and Smolensky.Niklasson, L. F. & van Gelder, T. 1994. On being systematically connectionist. Mind and Language 9:288-302. (Cited by 31 | Google)
Pollack, J. B. 1990. Recursive distributed representations. Artificial Intelligence 46:77-105. (Cited by 352 | Google)
Develops a connectionist architecture -- recursive auto-associative memory -- that can recursively represent compositional structures in distributed form.Rowlands, M. 1994. Connectionism and the language of thought. British Journal for the Philosophy of Science 45:485-503. (Google)
F&P's argument confuses constituent structure with logical/sentential structure. Connectionism is a psychotechtonic project, whereas propositional description is a psychosemantic project.Schroder, J. 1998. Knowledge of rules, causal systematicity, and the language of thought. Synthese 117:313-330. (Google)
Smolensky, P. 1987. The constituent structure of connectionist mental states. Southern Journal of Philosophy Supplement 26:137-60. (Cited by 44 | Google)
F&P ignore distributed representation and interaction effects.Smolensky, P. 1990. Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial Intelligence 46:159-216. (Cited by 170 | Google)
Develops a connectionist architecture that represents compositional structures as tensor products of distributed representations.Smolensky, P. 1991. Connectionism, constituency and the language of thought. In (B. Loewer & G. Rey, eds) Meaning in Mind: Fodor and his Critics. Blackwell. (Cited by 46 | Google)
Connectionism can do compositionality its own way, including both weak compositionality (with context effects) and strong compositionality (via tensor products).Smolensky, P. 1995. Constituent structure and explanation in an integrated connectionist/symbolic cognitive architecture. In (C. Macdonald, ed) Connectionism: Debates on Psychological Explanation. Blackwell. (Cited by 20 | Google)
van Gelder, T. 1990. Compositionality: A connectionist variation on a classical theme. Cognitive Science 14:355-84. (Cited by 123 | Google)
Connectionism can do compositionality functionally. All one needs is the right functional relation between representations; physical concatenation is not necessary.van Gelder, T. 1991. Classical questions, radical answers. In (T. Horgan & J. Tienson, eds) Connectionism and the Philosophy of Mind. Kluwer. (Cited by 16 | Google)
On connectionism as a Kuhnian paradigm shift in cognitive science, with emphasis on the implications of functional compositionality and distributed representations.
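Several of the entries above (Smolensky 1990, 1991; Fodor & McLaughlin 1990; Guarini 1996) turn on tensor product variable binding. As a rough illustration only, not code from any of the cited papers, the following Python sketch binds arbitrary role and filler vectors by outer product, superposes the bindings, and unbinds a filler by contracting with its role; the dimensions and vectors are invented for the example.

import numpy as np

rng = np.random.default_rng(0)

def random_unit(n):
    # A random unit vector, standing in for a distributed representation.
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)

dim = 64
role_agent = np.eye(dim)[0]    # orthonormal role vectors (arbitrary choice)
role_patient = np.eye(dim)[1]
john = random_unit(dim)        # arbitrary distributed fillers
mary = random_unit(dim)

# Bind each filler to its role by an outer product; superpose the bindings.
structure = np.outer(role_agent, john) + np.outer(role_patient, mary)

# Unbind by contracting with a role vector; with orthonormal roles the filler
# is recovered exactly, though no separate token of it sits anywhere in the sum.
print(np.allclose(role_agent @ structure, john))      # True
print(np.allclose(role_patient @ structure, mary))    # True

The last point is what Fodor and McLaughlin press above: the bound constituents are recoverable but not tokened, so, they argue, they cannot play the causal role that classical explanations of systematicity require.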
Butler, K. 1995. Representation and computation in a deflationary assessment of connectionist cognitive science. Synthese 104:71-97. (Google)
Clark, A. 1989. Connectionism, nonconceptual content, and representational redescription. Manuscript. (Google)
On some troubles connectionism has with higher-order knowledge. Contrasts Cussins, Karmiloff-Smith on development. Subsymbols without symbols are blind.Clark, A. 1993. Associative Engines: Connectionism, Concepts, and Representational Change. MIT Press. (Cited by 102 | Google)
Clark, A. & Karmiloff-Smith, A. 1994. The cognizer's innards: A psychological and philosophical perspective on the development of thought. Mind and Language 8:487-519. (Google)
On the importance of representational redescription, and on the limits of connectionist networks in cross-domain knowledge transfer. What does a true believer need, above behavior: conceptual combination, real-world fluency?Cummins, R. 1991. The role of representation in connectionist explanation of cognitive capacities. In (W. Ramsey, S. Stich, & D. Rumelhart, eds) Philosophy and Connectionist Theory. Lawrence Erlbaum. (Google)
Connectionism isn't really radical. There's no new concept of representation or of learning, and cognition can still be the manipulation of semantically structured representations.Cussins, A. 1990. The connectionist construction of concepts. In (M. Boden, ed) The Philosophy of AI. Oxford University Press. (Cited by 51 | Google)
Connectionism builds up concepts from the nonconceptual level. From nonconceptual content (e.g. perceptual experiences) to the emergence of objectivity.Garzon, F. 2000. A connectionist defence of the inscrutability thesis. Mind and Language 15:465-480. (Cited by 1 | Google)
Garzon, F. 2000. State space semantics and conceptual similarity: reply to Churchland. Philosophical Psychology 13:77-96. (Cited by 3 | Google)
Goschke, T. & Koppelberg, D. 1990. Connectionism and the semantic content of internal representation. Review of International Philosophy 44:87-103. (Google)
Goschke, T. & Koppelberg, D. 1991. The concept of representation and the representation of concepts in connectionist models. In (W. Ramsey, S. Stich, & D. Rumelhart, eds) Philosophy and Connectionist Theory. Lawrence Erlbaum. (Cited by 14 | Google)
On correlational semantics and context-dependent representations.Hatfield, G. 1991. Representation and rule-instantiation in connectionist systems. In (T. Horgan & J. Tienson, eds) Connectionism and the Philosophy of Mind. Kluwer. (Cited by 7 | Google)
Some remarks on psychology & physiology. Even connectionism uses psychological concepts.Hatfield, G. 1991. Representation in perception and cognition: Connectionist affordances. In (W. Ramsey, S. Stich, & D. Rumelhart, eds) Philosophy and Connectionist Theory. Lawrence Erlbaum. (Cited by 9 | Google)
Haybron, D. M. 2000. The causal and explanatory role of information stored in connectionist networks. Minds and Machines 10:361-380. (Cited by 2 | Google)
Laakso, A. & Cottrell, G. 2000. Content and cluster analysis: assessing representational similarity in neural systems. Philosophical Psychology 13:47-76. (Cited by 8 | Google)
O'Brien, G. & Opie, J. 2004. Notes toward a structuralist theory of mental representation. In (H. Clapin, ed) Representation in Mind. Elsevier. (Cited by 5 | Google)
Place, U. T. 1989. Toward a connectionist version of the causal theory of reference. Acta Analytica 4:71-97. (Google)
Ramsey, W. 1995. Rethinking distributed representation. Acta Analytica 10:9-25. (Google)
Ramsey, W. 1997. Do connectionist representations earn their explanatory keep? Mind and Language 12:34-66. (Cited by 13 | Google)
Argues that talk of representations has no explanatory role in connectionist theory, and can be discarded. It can't be understood along the lines of the teleo-informational or classical frameworks.Schopman, J. & Shawky, A. 1996. Remarks on the impact of connectionism on our thinking about concepts. In (P. Millican & A. Clark, eds) Machines and Thought. Oxford University Press. (Google)
Tye, M. 1987. Representation in pictorialism and connectionism. Southern Journal of Philosophy Supplement 26:163-184. (Google)
Pictorialism isn't compatible with language of thought, but connectionism might be.van Gelder, T. 1991. What is the D in PDP? In (W. Ramsey, S. Stich, & D. Rumelhart, eds) Philosophy and Connectionist Theory. Lawrence Erlbaum. (Google)
Argues that distributed representation is best analyzed in terms of superposition of representation, not in terms of extendedness.
Ramsey, W. , Stich, S. P. & Garon, J. 1991. Connectionism, eliminativism and the future of folk psychology. In (W. Ramsey, S. Stich, & D. Rumelhart, eds) Philosophy and Connectionist Theory. Lawrence Erlbaum. (Cited by 51 | Google)
Connectionism implies eliminativism, as connectionist systems do not have functionally discrete contentful states, and folk psychology is committed to functional discreteness of propositional attitudes.Bickle, J. 1993. Connectionism, eliminativism, and the semantic view of theories. Erkenntnis. (Cited by 3 | Google)
Outlines the semantic view of scientific theories, and applies it to the connectionism/eliminativism debate. There's no reason why folk psychology shouldn't be reducible, in a homogeneous or heterogeneous way.Botterill, G. 1994. Beliefs, functionally discrete states, and connectionist networks. British Journal for the Philosophy of Science 45:899-906. (Google)
Distinguishes active from dispositional beliefs: the former are realized discretely in activation patterns, the latter nondiscretely in weights, which is all that folk psychology needs.Clapin, H. 1991. Connectionism isn't magic. Minds and Machines 1:167-84. (Google)
Commentary on Ramsey/Stich/Garon. Connectionism has symbols that interact, and has propositional modularity in processing if not in storage.Clark, A. 1989. Beyond eliminativism. Mind and Language 4:251-79. (Cited by 4 | Google)
Connectionism needn't imply eliminativism, as higher levels may have a causal role, if not causal completeness. Also, it may not tell the whole story.Clark, A. 1990. Connectionist minds. Proceedings of the Aristotelian Society 90:83-102. (Cited by 10 | Google)
Responding to eliminativist challenge via cluster analysis and recurrence.Davies, M. 1989. Connectionism, modularity, and tacit knowledge. British Journal for the Philosophy of Science 40:541-55. (Cited by 9 | Google)
Davies, M. 1991. Concepts, connectionism, and the language of thought. (W. Ramsey, S. Stich, & D. Rumelhart, eds) Philosophy and Connectionist Theory. Lawrence Erlbaum. (Cited by 33 | Google)
Argues that our conception of thought requires causal systematicity, which requires a language of thought. Connectionist systems are not causally systematic, so connectionism leads to eliminativism.Egan, F. 1995. Folk psychology and cognitive architecture. Philosophy of Science 62:179-96. (Cited by 5 | Google)
Forster, M. & Saidel, E. 1994. Connectionism and the fate of folk psychology. Philosophical Psychology 7:437-52. (Google)
Contra Ramsey, Stich, and Garon, connectionist representations can be seen to be functionally discrete on an appropriate analysis of causal relevance.Horgan, T. , and Tienson, J. 1995. Connectionism and the commitments of folk psychology. Philosophical Perspectives 9:127-52. (Cited by 1 | Google)
O'Brien, G. 1991. Is connectionism commonsense? Philosophical Psychology 4:165-78. (Cited by 2 | Google)
O'Leary-Hawthorne, J. 1994. On the threat of eliminativism. Philosophical Studies 74:325-46. (Google)
A dispositional construal of beliefs and desires can distinguish the relevant active states (via counterfactuals) and is compatible with FP, so internals can't threaten FP. With remarks on Davidson, overdetermination, etc.Place, U. T. 1992. Eliminative connectionism: Its implications for a return to an empiricist/behaviorist linguistics. Behavior and Philosophy 20:21-35. (Google)
Ramsey, W. 1994. Distributed representation and causal modularity: A rejoinder to Forster and Saidel. Philosophical Psychology 7:453-61. (Google)
Upon examination, the model of Forster and Saidel 1994 does not exhibit features that are both distributed and causally discrete.Smolensky, P. 1995. On the projectable predicates of connectionist psychology: A case for belief. In (C. Macdonald, ed) Connectionism: Debates on Psychological Explanation. Blackwell. (Cited by 2 | Google)
Stich, S. & Warfield, T. 1995. Reply to Clark and Smolensky: Do connectionist minds have beliefs? In (C. Macdonald, ed) Connectionism: Debates on Psychological Explanation. Blackwell. (Cited by 3 | Google)
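The exchange above turns on networks that store many contents superposed in a single set of weights, so that no unit or weight is dedicated to any one belief (compare van Gelder's superposition analysis of distributed representation above). A minimal Python sketch, assuming a plain linear Hebbian associator rather than any model from the cited papers, with invented cue/response pairs standing in for stored "propositions":

import numpy as np

rng = np.random.default_rng(1)

def unit(n):
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)

dim = 128
pairs = [(unit(dim), unit(dim)) for _ in range(3)]

# Hebbian (outer-product) storage: every pair is written into the same matrix.
W = sum(np.outer(response, cue) for cue, response in pairs)

# Each cue retrieves roughly its own response, with a little crosstalk, since
# random cues are only approximately orthogonal.
for i, (cue, response) in enumerate(pairs):
    recalled = W @ cue
    print(i, round(float(recalled @ response / np.linalg.norm(recalled)), 3))

# No weight or sub-block of W realizes exactly one pair: removing a single
# stored content would mean adjusting weights shared by all of them, which is
# the sense in which such states are not functionally discrete.

Whether this undermines the propositional attitudes, or merely functional discreteness as Ramsey, Stich, and Garon construe it, is what the replies above dispute.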
Adams, F. , Aizawa, K. & Fuller, G. 1992. Rules in programming languages and networks. In (J. Dinsmore, ed) The Symbolic and Connectionist Paradigms: Closing the Gap. Lawrence Erlbaum. (Cited by 1 | Google)
The distinction between programming languages and networks is neutral on rule-following, etc., so there's nothing really new about connectionism.Aizawa, K. 1994. Representations without rules, connectionism, and the syntactic argument. Synthese 101:465-92. (Google)
Bringsjord, S. 1991. Is the connectionist-logicist debate one of AI's wonderful red herrings? Journal of Theoretical and Experimental Artificial Intelligence 3:319-49. (Google)
A detailed analysis purporting to show that connectionism and "logicism" are compatible, as Turing machines can do everything a neural network can. Entertaining, but misunderstands subsymbolic processing.Broadbent, D. 1985. A question of levels: Comment on McClelland and Rumelhart. Journal of Experimental Psychology: General 114:189-92. (Cited by 19 | Google)
Distributed models are at the implementational, not computational, level.Chandrasekaran, B. , Goel, A. & Allemang, D. 1988. Connectionism and information-processing abstractions. AI Magazine 24-34. (Google)
Connectionism won't affect AI too much, as AI is concerned with the information-processing (task) level. With greater modularity, connectionism will look more like traditional AI.Corbi, J. E. 1993. Classical and connectionist models: Levels of description. Synthese 95:141-68. (Google)
Dawson, M. R. W. , Medler, D. A. , & Berkeley, I. S. N. 1997. PDP networks can provide models that are not mere implementations of classical theories. Philosophical Psychology 10:25-40. (Cited by 14 | Google)
Dennett, D. C. 1986. The logical geography of computational approaches: A view from the east pole. In (M. Brand & R. Harnish, eds) The Representation of Knowledge and Belief. University of Arizona Press. (Cited by 14 | Google)
Drawing the battle-lines: High Church Computationalism at the "East Pole", New Connectionism, Zen Holism, etc., at various locations on the "West Coast". With remarks on connectionism, and on AI as thought-experimentation.Dennett, D. C. 1991. Mother Nature versus the walking encyclopedia. In (W. Ramsey, S. Stich, & D. Rumelhart, eds) Philosophy and Connectionist Theory. Lawrence Erlbaum. (Cited by 16 | Google)
Reiterating the value of connectionism, especially biological plausibility.Dinsmore, J. (ed) 1992. The Symbolic and Connectionist Paradigms: Closing the Gap. Lawrence Erlbaum. (Cited by 11 | Google)
Dyer, M. 1991. Connectionism versus symbolism in high-level cognition. In (T. Horgan & J. Tienson, eds) Connectionism and the Philosophy of Mind. Kluwer. (Cited by 4 | Google)
Garson, J. W. 1991. What connectionists cannot do: The threat to Classical AI. In (T. Horgan & J. Tienson, eds) Connectionism and the Philosophy of Mind. Kluwer. (Cited by 1 | Google)
Connectionism and classicism aren't necessarily incompatible on symbolic discreteness, causal role, functional discreteness, constituency, representation of rules.Garson, J. W. 1994. No representations without rules: The prospects for a compromise between paradigms in cognitive science. Mind and Language 9:25-37. (Cited by 2 | Google)
Garson, J. W. 1994. Cognition without classical architecture. Synthese 100:291-306. (Cited by 5 | Google)
Guarini, M. 2001. A defence of connectionism against the "syntactic" argument. Synthese 128:287-317. (Google)
Horgan, T. & Tienson, J. 1987. Settling into a new paradigm. Southern Journal of Philosophy Supplement 26:97-113. (Cited by 7 | Google)
On connectionism, basketball, and representation without rules. Responses to the "syntactic" and "semantic" arguments against connectionism. Nice.Horgan, T. & Tienson, J. 1989. Representation without rules. Philosophical Topics 17:147-74. (Google)
Cognition uses structured representations without high-level rules, and connectionism is better at accounting for this. With remarks on exceptions to psychological laws, and the crisis in traditional AI.Horgan, T. & Tienson, J. 1994. Representations don't need rules: Reply to James Garson. Mind and Language 9:1-24. (Cited by 2 | Google)
McClelland, J. L. & Rumelhart, D. E. 1985. Levels indeed! A response to Broadbent. Journal of Experimental Psychology: General 114:193-7. (Google)
Response to Broadbent 1985: Distributed models are at the algorithmic level. Elucidating the low-level/high-level relation via various analogies.McLaughlin, B. P. & Warfield, F. 1994. The allure of connectionism reexamined. Synthese 101:365-400. (Google)
Argues that symbolic systems such as decision trees are as good at learning and pattern recognition as connectionist networks, and it is just as plausible that they are implemented in the brain.Rey, G. 1991. An explanatory budget for connectionism and eliminativism. In (T. Horgan & J. Tienson, eds) Connectionism and the Philosophy of Mind. Kluwer. (Cited by 8 | Google)
Challenges connectionism to explain things that the classical approach seems to handle better: the structure, systematicity, causal role, and grain of propositional attitudes, their rational relations, and conceptual stability.
Smolensky, P. 1988. On the proper treatment of connectionism. Behavioral and Brain Sciences 11:1-23. (Google)
Connectionism offers a complete account at the subsymbolic level, rather than an approximate account at the symbolic level.Berkeley, I. 2000. What the #$%! is a subsymbol? Minds and Machines 10:1-14.
Chalmers, D. J. 1992. Subsymbolic computation and the Chinese Room. In (J. Dinsmore, ed) The Symbolic and Connectionist Paradigms: Closing the Gap. Lawrence Erlbaum. (Cited by 23 | Google)
Explicates the distinction between symbolic and subsymbolic computation, and argues that connectionism can better handle the emergence of semantics from syntax, due to the non-atomic nature of its representations.Clark, A. 1993. Superpositional connectionism: A reply to Marinov. Minds and Machines 3:271-81. (Cited by 2 | Google)
Hofstadter, D. R. 1983. Artificial intelligence: Subcognition as computation. In (F. Machlup, ed) The Study of Information: Interdisciplinary Messages. Wiley. Reprinted as "Waking up from the Boolean dream" in Metamagical Themas. Basic Books. (Cited by 6 | Google)
AI needs statistical emergence. For real semantics, symbols must be decomposable, complex, autonomous -- i.e. active.Marinov, M. 1993. On the spuriousness of the symbolic/subsymbolic distinction. Minds and Machines 3:253-70. (Cited by 2 | Google)
Argues against Smolensky: symbolic systems such as decision trees have all the positive features of neural networks (flexibility, lack of brittleness), and can represent concepts as sets of subconcepts. With a reply by Clark.Rosenberg, J. 1990. Treating connectionism properly: Reflections on Smolensky. Psychological Research 52:163. (Cited by 3 | Google)
Rejects Smolensky's PTC, as the proper interaction of the microscopic and macroscopic levels would take a "miracle".Smolensky, P. 1987. Connectionist AI, symbolic AI, and the brain. AI Review 1:95-109. (Cited by 12 | Google)
On connectionist networks as subsymbolic dynamic systems.
Bechtel, W. 1985. Are the new PDP models of cognition cognitivist or associationist? Behaviorism 13:53-61. (Google)
Bechtel, W. 1986. What happens to accounts of mind-brain relations if we forgo an architecture of rules and representations? Philosophy of Science Association 1986, 159-71. (Google)
On the relationship between connectionism, symbol processing, psychology and neuroscience.Bechtel, W. 1987. Connectionism and the philosophy of mind. Southern Journal of Philosophy Supplement 26:17-41. Reprinted in (W. Lycan, ed) Mind and Cognition (Blackwell, 1990). (Cited by 8 | Google)
Lots of questions about connectionism.Bechtel, W. 1988. Connectionism and rules and representation systems: Are they compatible? Philosophical Psychology 1:5-16. (Cited by 5 | Google)
There's room for both styles within a single mind. The rule-based level needn't be autonomous; the connectionist level plays a role in pattern recognition, concepts, and so on.Bechtel, W. & Abrahamson, A. 1990. Beyond the exclusively propositional era. Synthese 82:223-53. (Cited by 5 | Google)
An account of the shift from propositions to pattern recognition in the study of cognition: knowing-how, imagery, categorization, connectionism.Bechtel, W. & Abrahamsen, A. A. 1992. Connectionism and the future of folk psychology. In (R. Burton, ed) Minds: Natural and Artificial. SUNY Press. (Cited by 2 | Google)
Bechtel, W. 1993. The case for connectionism. Philosophical Studies 71:119-54. (Cited by 3 | Google)
Bickle, J. 1995. Connectionism, reduction, and multiple realizability. Behavior and Philosophy 23:29-39. (Cited by 2 | Google)
Bradshaw, D. E. 1991. Connectionism and the specter of representationalism. In (T. Horgan & J. Tienson, eds) Connectionism and the Philosophy of Mind. Kluwer. (Cited by 4 | Google)
Argues that connectionism allows for a more plausible epistemology of perception, compatible with direct realism rather than representationalism. With remarks on Fodor and Pylyshyn's argument against Gibson.Churchland, P. M. 1989. On the nature of theories: A neurocomputational perspective. Minnesota Studies in the Philosophy of Science 14. Reprinted in A Neurocomputational Perspective (MIT Press, 1989). (Cited by 16 | Google)
Connectionism will revolutionize our view of scientific theories: from the deductive-nomological view to descent in weight-space. Some cute analogies.Churchland, P. M. 1989. On the nature of explanation: A PDP approach. In A Neurocomputational Perspective. MIT Press. (Cited by 4 | Google)
We achieve explanatory understanding not through the manipulation of propositions but through the activation of prototypes.Churchland, P. S. & Sejnowski, T. 1989. Neural representation and neural computation. In (L. Nadel, ed) Neural Connections, Mental Computations. MIT Press. (Cited by 25 | Google)
Implications of connectionism and neuroscience for our concept of mind.Clark, A. 1989. Microcognition. MIT Press. (Cited by 90 | Google)
All kinds of stuff on connectionism and philosophy.Clark, A. 1990. Connectionism, competence and explanation. British Journal for the Philosophy of Science 41:195-222. (Cited by 21 | Google)
Connectionism separates processing from competence. Instead of hopping down Marr's levels (theory->process), connectionism goes (1) task (2) low-level performance (3) extract theory from process. Cute.Cummins, R. & Schwarz, G. 1987. Radical connectionism. Southern Journal of Philosophy Supplement 26:43-61. (Cited by 7 | Google)
On computation and representation in AI and connectionism, and on problems for radical connectionism in reconciling these without denying representation or embracing mystery.Cummins, R. & Schwarz, G. 1991. Connectionism, computation, and cognition. In (T. Horgan & J. Tienson, eds) Connectionism and the Philosophy of Mind. Kluwer. (Cited by 21 | Google)
Explicates computationalism, and discusses ways in which connectionism might end up non-computational: if causal states cross-classify representational states, or if transitions between representations aren't computable.Cummins, R. 1995. Connectionism and the rationale constraint on cognitive explanations. Philosophical Perspectives 9:105-25. (Google)
Davies, M. 1989. Connectionism, modularity and tacit knowledge. British Journal for the Philosophy of Science 40:541-55. (Cited by 9 | Google)
Argues that connectionist networks don't have tacit knowledge of modular theories (as representations lack the appropriate structure, etc.).Globus, G. G. 1992. Derrida and connectionism: Differance in neural nets. Philosophical Psychology 5:183-97. (Google)
Hatfield, G. 1990. Gibsonian representations and connectionist symbol-processing: prospects for unification. Psychological Research 52:243-52. (Google)
Gibson is compatible with connectionism. In both, we can have rule-instantiation without rule-following.Horgan, T. & Tienson, J. (eds) 1991. Connectionism and the Philosophy of Mind. Kluwer. (Cited by 18 | Google)
Horgan, T. & Tienson, J. 1996. Connectionism and the Philosophy of Psychology. MIT Press. (Cited by 90 | Google)
Horgan, T. 1997. Connectionism and the philosophical foundations of cognitive science. Metaphilosophy 28:1-30. (Cited by 6 | Google)
Humphreys, G. W. 1986. Information-processing systems which embody computational rules: The connectionist approach. Mind and Language 1:201-12. (Cited by 2 | Google)
Legg, C. R. 1988. Connectionism and physiological psychology: A marriage made in heaven? Philosophical Psychology 1:263-78. (Google)
Litch, M. 1997. Computation, connectionism and modelling the mind. Philosophical Psychology 10:357-364. (Google)
Lloyd, D. 1989. Parallel distributed processing and cognition: Only connect? In Simple Minds. MIT Press. (Google)
An overview: local/distributed/featural representations; explanation in connectionism (how to avoid big mush); relation to neuroscience; explicit representations of rules vs weight matrices.Lycan, W. G. 1991. Homuncular functionalism meets PDP. In (W. Ramsey, S. Stich, & D. Rumelhart, eds) Philosophy and Connectionist Theory. Lawrence Erlbaum. (Cited by 2 | Google)
On various ways in which connectionism relates to representational homuncular functionalism, e.g. on implementation, eliminativism, and explanation.Macdonald, C. 1995. Connectionism: Debates on Psychological Explanation. Blackwell. (Cited by 21 | Google)
Plunkett, K. 2001. Connectionism today. Synthese 129:185-194. (Cited by 1 | Google)
Ramsey, W. & Stich, S. P. 1990. Connectionism and three levels of nativism. Synthese 82:177-205. (Cited by 7 | Google)
How connectionism bears on the nativism debate. Conclusion: not too much.Ramsey, W. , Stich, S. P. & Rumelhart, D. M. (eds) 1991. Philosophy and Connectionist Theory. Lawrence Erlbaum. (Cited by 27 | Google)
Rosenberg, J. 1989. Connectionism and cognition. Bielefeld Report. (Cited by 5 | Google)
Criticism of Churchland's connectionist epistemology.Sehon, S. 1998. Connectionism and the causal theory of action explanation. Philosophical Psychology 11:511-532. (Google)
Shanon, B. 1992. Are connectionist models cognitive? Philosophical Psychology. (Google)
In some senses of "cognitive", yes; in other senses, no. Phenomenological, theoretical, and sociological perspectives. Toward meaning-laden models.Sterelny, K. 1990. Connectionism. In The Representational Theory of Mind. Blackwell. (Google)
Waskan, J. & Bechtel, W. 1997. Directions in connectionist research: Tractable computations without syntactically structured representations. Metaphilosophy 28:31-62. (Google)
Clark, A. 1994. Representational trajectories in connectionist learning. Minds and Machines 4:317-32. (Cited by 3 | Google)
On how to get connectionist networks to learn about structured task domains. Concentrates on incremental learning, and other developmental/scaffolding strategies. With remarks on systematicity.Clark, A. & Thornton, S. 1997. Trading spaces: Computation, representation, and the limits of uninformed learning. Behavioral and Brain Sciences 20:57-66. (Cited by 93 | Google)
Cliff, D. 1990. Computational neuroethology: A provisional manifesto. Manuscript. (Cited by 82 | Google)
Criticizes connectionism for not being sufficiently rooted in neuroscience, and for not being grounded in the world.Dawson, M. R. W. & Schopflocher, D. P. 1992. Autonomous processing in parallel distributed processing networks. Philosophical Psychology 5:199-219. (Google)
Hanson, S. & Burr, D. 1990. What connectionist models learn. Behavioral and Brain Sciences. (Cited by 62 | Google)
What's new to connectionism is not learning or representation but the way that learning and representation interact.Kaplan, S. , Weaver, M. & French, R. M. 1990. Active symbols and internal models: Towards a cognitive connectionism. AI and Society. (Cited by 13 | Google)
Addresses behaviorist/associationist charges. Connectionism needs recurrent circuits to support active symbols.Kirsh, D. 1987. Putting a price on cognition. Southern Journal of Philosophy Supplement 26:119-35. (Cited by 4 | Google)
On the importance of variable binding, and why it's hard with connectionism.Lachter, J. & Bever, T. 1988. The relation between linguistic structure and associative theories of language learning. Cognition 28:195-247. (Cited by 41 | Google)
Criticism of connectionist language models. They build in too much.Mills, S. 1989. Connectionism, the classical theory of cognition, and the hundred step constraint. Acta Analytica 4:5-38. (Google)
Nelson, R. 1989. Philosophical issues in Edelman's neural darwinism. Journal of Experimental and Theoretical Artificial Intelligence 1:195-208. (Google)
On the relationship between ND, PDP and AI. All are computational.Oaksford, M. , Chater, N. & Stenning, K. 1990. Connectionism, classical cognitive science and experimental psychology. AI and Society. (Cited by 6 | Google)
Connectionism is better at explaining empirical findings about mind.Pinker, S. & Prince, A. 1988. On language and connectionism. Cognition 28:73-193. (Cited by 349 | Google)
Extremely thorough criticism of the R&M past-tense-learning model, with arguments on why connectionism can't handle linguistic rules.
Bechtel, W. 1996. Yet another revolution? Defusing the dynamical system theorists' attack on mental representations. Manuscript. (Cited by 2 | Google)
Clark, A. 1998. Time and mind. Journal of Philosophy 95:354-76. (Google)
Eliasmith, C. 1996. The third contender: A critical examination of the dynamicist theory of cognition. Philosophical Psychology 9:441-63. (Cited by 24 | Google)
Eliasmith, C. 1997. Computation and dynamical models of mind. Minds and Machines 7:531-41. (Cited by 5 | Google)
Eliasmith, C. 2003. Moving beyond metaphors: Understanding the mind for what it is. Journal of Philosophy 100:493-520. (Cited by 5 | Google)
Foss, J. E. 1992. Introduction to the epistemology of the brain: Indeterminacy, micro-specificity, chaos, and openness. Topoi 11:45-57. (Cited by 4 | Google)
On the brain as a vector-processing system, and the problems raised by indeterminacy, chaos, and so on. With morals for cognitive science.Freeman, W. 1997. Nonlinear neurodynamics of intentionality. Journal of Mind and Behavior 18:291-304. (Google)
Garson, J. W. 1995. Chaos and free will. Philosophical Psychology 8:365-74. (Cited by 6 | Google)
Garson, J. W. 1996. Cognition poised at the edge of chaos: A complex alternative to a symbolic mind. Philosophical Psychology 9:301-22. (Cited by 13 | Google)
Garson, J. W. 1997. Syntax in a dynamic brain. Synthese 110:343-355. (Cited by 6 | Google)
Garson, J. W. 1998. Chaotic emergence and the language of thought. Philosophical Psychology 11:303-315. (Cited by 4 | Google)
Giunti, M. 1995. Dynamic models of cognition. In (T. van Gelder & R. Port, eds) Mind as Motion. MIT Press. (Google)
Giunti, M. 1996. Computers, Dynamical Systems, and the Mind. Oxford University Press. (Google)
Globus, G. 1992. Toward a noncomputational cognitive science. Journal of Cognitive Neuroscience 4:299-310. (Google)
Hooker, C. A. & Christensen, W. D. 1998. Towards a new science of the mind: Wide content and the metaphysics of organizational properties in nonlinear dynamic models. Mind and Language 13:98-109. (Cited by 4 | Google)
Horgan, T. & Tienson, J. 1992. Cognitive systems as dynamic systems. Topoi 11:27-43. (Google)
Horgan, T. & Tienson, J. 1994. A nonclassical framework for cognitive science. Synthese 101:305-45. (Google)
Keijzer, F. A. & Bem, S. 1996. Behavioral systems interpreted as autonomous agents and as coupled dynamical systems: A criticism. Philosophical Psychology 9:323-46. (Google)
Rockwell, T. 2005. Attractor spaces as modules: A semi-eliminative reduction of symbolic AI to dynamic systems theory. Minds and Machines 15:23-55. (Google)
Schonbein, W. 2005. Cognition and the power of continuous dynamical systems. Minds and Machines 15:57-71. (Google)
Sloman, A. 1993. The mind as a control system. In (C. Hookway & D. Peterson, eds) Philosophy and Cognitive Science. Cambridge University Press. (Cited by 59 | Google)
van Gelder, T. & Port, R. 1995. Mind as Motion: Explorations in the Dynamics of Cognition. MIT Press. (Cited by 208 | Google)
van Gelder, T. 1995. What might cognition be if not computation? Journal of Philosophy 92:345-81. (Cited by 142 | Google)
Argues for a dynamic-systems conception of the mind that is non-computational and non-representational. Uses an analogy with the Watt steam governor to argue for a new kind of dynamic explanation.van Gelder, T. 1997. Connectionism, dynamics, and the philosophy of mind. In (M. Carrier & P. Machamer, eds) Mindscapes: Philosophy, Science, and the Mind. Pittsburgh University Press. (Cited by 3 | Google)
van Gelder, T. 1998. The dynamical hypothesis in cognitive science. Behavioral and Brain Sciences 21:615-28. (Cited by 176 | Google)
Weiskopf, D. 2004. The place of time in cognition. The British Journal for the Philosophy of Science 55:87-105. (Cited by 1 | Google)
Buchanan, B. 1988. AI as an experimental science. In (J. Fetzer, ed) Aspects of AI. D. Reidel. (Google)
Bundy, A. 1990. What kind of field is AI? In (D. Partridge & Y. Wilks, eds) The Foundations of Artificial Intelligence: A Sourcebook. Cambridge University Press. (Cited by 3 | Google)
Dennett, D. C. 1978. AI as philosophy and as psychology. In (M. Ringle, ed) Philosophical Perspectives on Artificial Intelligence. Humanities Press. Reprinted in Brainstorms (MIT Press, 1978). (Google)
AI as detailed armchair psychology and as thought-experimental epistemology. Implications for mind: e.g. a solution to the problem of homuncular regress.Glymour, C. 1988. AI is philosophy. In (J. Fetzer, ed) Aspects of AI. D. Reidel. (Cited by 1 | Google)
Kukla, A. 1989. Is AI an empirical science? Analysis 49:56-60. (Cited by 6 | Google)
No, AI is an a priori science that uses empirical methods; its status is similar to that of mathematics.Kukla, A. 1994. Medium AI and experimental science. Philosophical Psychology 7:493-5012. (Cited by 4 | Google)
On the status of "medium AI", the study of intelligence in computational systems (not just humans). Contra to many, this is not an empirical science, but a combination of (experimental) mathematics and engineering.Nakashima, H. 1999. AI as complex information processing. Minds and Machines 9:57-80. (Google)
Bechtel, W. 1994. Levels of description and explanation in cognitive science. Minds and Machines 4:1-25. (Cited by 7 | Google)
Cleeremans, A. & French, R. M. 1996. From chicken squawking to cognition: Levels of description and the computational approach in psychology. Psychologica Belgica 36:5-29. (Cited by 5 | Google)
Foster, C. 1990. Algorithms, Abstraction and Implementation. Academic Press. (Cited by 7 | Google)
Outlines a theory of the equivalence of algorithms.Horgan, T. & Tienson, J. 1992. Levels of description in nonclassical cognitive science. Philosophy 34, Supplement. (Cited by 2 | Google)
Generalizes Marr's levels to: cognitive state-transitions, mathematical state-transitions, implementation. Discusses these with respect to connectionism, dynamical systems, and computation below the cognitive level.Houng, Y. 1990. Classicism, connectionism and the concept of level. Dissertation, Indiana University. (Google)
On levels of organization vs. levels of analysis.Marr, D. 1982. Vision. Freeman. (Cited by 2026 | Google)
Defines computational, algorithmic and implementational levels.McClamrock, R. 1990. Marr's three levels: a re-evaluation. Minds and Machines 1:185-196. (Google)
On different kinds of level-shifts: organizational and contextual changes. There are more than three levels available.Newell, A. 1982. The knowledge level. Artificial Intelligence 18:81-132. (Cited by 910 | Google)
Newell, A. 1986. The symbol level and The knowledge level. In (Z. Pylyshyn & W. Demopolous, eds) Meaning and Cognitive Structure. Ablex.
With commentaries by Smith, Dennett.Peacocke, C. 1986. Explanation in computational psychology: Language, perception and level 1.5. Mind and Language 1:101-23.
Psychological explanation is typically somewhere between the computational and algorithmic levels.Sticklen, J. 1989. Problem-solving architectures at the knowledge level. Journal of Experimental and Theoretical Artificial Intelligence 1:233-247. (Google)
Dennett, D. C. 1984. Cognitive wheels: The frame problem of AI. In (C. Hookway, ed) Minds, Machines and Evolution. Cambridge University Press. (Cited by 88 | Google)
General overview.Dreyfus, H. L. & Dreyfus, S. 1987. How to stop worrying about the frame problem even though it's computationally insoluble. In (Z. Pylyshyn, ed) The Robot's Dilemma. Ablex. (Google)
FP is an artifact of computational explicitness. Contrast human commonsense know-how, with relevance built in. Comparison to expert/novice distinction.Fetzer, J. H. 1990. The frame problem: Artificial intelligence meets David Hume. International Journal of Expert Systems 3:219-232. (Cited by 13 | Google)
Fodor, J. A. 1987. Modules, frames, fridgeons, sleeping dogs, and the music of the spheres. In (Z. Pylyshyn, ed) The Robot's Dilemma. Ablex. (Cited by 38 | Google)
FP is Hamlet's problem: when to stop thinking. It's equivalent to the general problem of non-demonstrative inference.Haugeland, J. 1987. An overview of the frame problem. In (Z. Pylyshyn, ed) The Robot's Dilemma. Ablex. (Cited by 14 | Google)
The FP may be a consequence of the explicit/implicit rep distinction. Use "complicit" reps instead, and changes will be carried along intrinsically.Hayes, P. 1987. What the frame problem is and isn't. In (Z. Pylyshyn, ed) The Robot's Dilemma. Ablex. (Cited by 21 | Google)
FP is a relatively narrow problem. Some, e.g. Fodor, wrongly equate FP with the "Generalized AI Problem".Janlert, L. 1987. Modeling change: The frame problem. In (Z. Pylyshyn, ed) The Robot's Dilemma. Ablex. (Google)
Korb, K. 1998. The frame problem: An AI fairy tale. Minds and Machines 8:317-351. (Cited by 1 | Google)
Lormand, E. 1994. The holorobophobe's dilemma. In (K. Ford & Z. Pylyshyn, eds) The Robot's Dilemma Revisited. Ablex. (Google)
Lormand, E. 1990. Framing the frame problem. Synthese 82:353-74. (Cited by 4 | Google)
Criticizes Dennett's, Haugeland's and Fodor's construals of the FP.Maloney, J. C. 1988. In praise of narrow minds. In (J. Fetzer, ed) Aspects of AI. D. Reidel. (Google)
McCarthy, J. & Hayes, P. 1969. Some philosophical problems from the standpoint of artificial intelligence. In (Meltzer & Michie, eds) Machine Intelligence 4. Edinburgh University Press. (Cited by 1213 | Google)
McDermott, D. 1987. We've been framed: Or, Why AI is innocent of the frame problem. In (Z. Pylyshyn, ed) The Robot's Dilemma. Ablex. (Google)
Solve the frame problem by using the sleeping-dog strategy -- keeping things fixed unless there's a reason to suppose otherwise.Murphy, D. 2001. Folk psychology meets the frame problem. Studies in History and Philosophy of Modern Physics 32C:565-573. (Google)
Pollock, JL. 1997. Reasoning about change and persistence: A solution to the frame problem. Nous 31:143-169. (Cited by 3 | Google)
Pylyshyn, Z. W. (ed) 1987. The Robot's Dilemma. Ablex. (Google)
Lots of papers on the frame problem.
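McDermott's sleeping-dog strategy above is, in effect, the STRIPS convention: an action description lists only what it adds and deletes, and everything else is assumed to persist. A schematic Python sketch, with action and fact names invented for the example rather than taken from any cited paper:

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    preconditions: frozenset
    adds: frozenset
    deletes: frozenset

def apply(state: frozenset, action: Action) -> frozenset:
    # Facts not mentioned by the action persist by default; only the listed
    # additions and deletions change. The sleeping dogs are left to lie.
    if not action.preconditions <= state:
        raise ValueError(f"{action.name}: preconditions not satisfied")
    return (state - action.deletes) | action.adds

state = frozenset({"robot_in(room1)", "box_in(room1)", "lamp_on"})

move_box = Action(
    name="move_box(room1, room2)",
    preconditions=frozenset({"robot_in(room1)", "box_in(room1)"}),
    adds=frozenset({"robot_in(room2)", "box_in(room2)"}),
    deletes=frozenset({"robot_in(room1)", "box_in(room1)"}),
)

print(apply(state, move_box))  # "lamp_on" persists without being mentioned

The philosophical versions of the problem canvassed above (Fodor's, Haugeland's, Dennett's) concern when such a default is epistemically safe; the code simply stipulates it.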
Birnbaum, L. 1991. Rigor mortis: A response to Nilsson's `Logic and artificial intelligence'. Artificial Intelligence 47:57-78. (Cited by 71 | Google)
Chalmers, D. J. , French, R. M. & Hofstadter, D. R. 1992. High-level perception, representation, and analogy: A critique of AI methodology. Journal of Experimental and Theoretical Artificial Intelligence. (Cited by 47 | Google)
AI must integrate perception and cognition in the interest of flexible representation. Current models ignore perception and the development of representation, but this cannot be separated from later cognitive processes.Clark, A. 1986. A biological metaphor. Mind and Language 1:45-64. (Cited by 4 | Google)
AI should look at biology.Clark, A. 1987. The kludge in the machine. Mind and Language 2:277-300. (Cited by 7 | Google)
Dascal, M. 1992. Why does language matter to artificial intelligence? Minds and Machines 2:145-174. (Cited by 4 | Google)
Dreyfus, H. L. 1981. From micro-worlds to knowledge: AI at an impasse. In (J. Haugeland, ed) Mind Design. MIT Press. (Google)
Micro-worlds don't test true understanding, and frames and scripts are doomed to leave out too much. Explicit representation can't capture intelligence.Dreyfus, H. L. & Dreyfus, S. E. 1988. Making a mind versus modeling the brain: AI at a crossroads. Daedalus. (Cited by 65 | Google)
History of AI (boo) and connectionism (qualified hooray). And Husserl/Heidegger/Wittgenstein. Quite nice.Hadley, R. F. 1991. The many uses of `belief' in AI. Minds and Machines 1:55-74. (Google)
Various AI approaches to belief: syntactic, propositional/meaning-based, information, tractability, discoverability, and degree of confidence.Haugeland, J. 1979. Understanding natural language. Journal of Philosophy 76:619-32. Reprinted in (W. Lycan, ed) Mind and Cognition (Blackwell, 1990). (Cited by 7 | Google)
AI will need holism: interpretational, common-sense, situational, existential.Kirsh, D. 1991. Foundations of AI: The big issues. Artificial Intelligence 47:3-30. (Cited by 24 | Google)
Identifying the dividing lines: pre-eminence of knowledge, embodiment, language-like kinematics, role of learning, uniformity of architecture.Marr, D. 1977. Artificial intelligence: A personal view. Artificial Intelligence 9:37-48. (Cited by 67 | Google)
AI usually comes up with Type 2 (algorithmic) theories, when Type 1 (info processing) theories might be more useful -- at least if they exist.McDermott, D. 1981. Artificial intelligence meets natural stupidity. In (J. Haugeland, ed) Mind Design. MIT Press. (Cited by 73 | Google)
Problems in AI methodology: wishful mnemonics, oversimplifying natural language concepts, and never implementing programs. Entertaining.McDermott, D. 1987. A critique of pure reason. Computational Intelligence 3:151-60. (Cited by 102 | Google)
Criticism of logicism (i.e. reliance on deduction) in AI.Nilsson, N. 1991. Logic and artificial intelligence. Artificial Intelligence 47:31-56. (Google)
Partridge, D. & Wilks, Y. (eds) 1990. The Foundations of Artificial Intelligence: A Sourcebook. Cambridge University Press. (Cited by 7 | Google)
Lots of papers on various aspects of AI methodology. Quite thorough.Preston, B. 1993. Heidegger and artificial intelligence. Philosophy and Phenomenological Research 53:43-69. (Google)
On the non-represented background to everyday activity, and environmental interaction in cognition. Criticizes cognitivism, connectionism, looks at Agre/Chapman/Brooks, ethology, anthropology for support.Pylyshyn, Z. W. 1979. Complexity and the study of artificial and human intelligence. In (M. Ringle, ed) Philosophical Perspectives in Artificial Intelligence. Humanities Press. (Cited by 7 | Google)
Ringle, M. (ed) 1979. Philosophical Perspectives in Artificial Intelligence. Humanities Press. (Cited by 9 | Google)
10 papers on philosophy of AI, psychology and knowledge representation.Robinson, W. S. 1991. Rationalism, expertise, and the Dreyfuses' critique of AI research. Southern Journal of Philosophy 29:271-90. (Google)
Defending limited rationalism: i.e. a theory of intelligence below the conceptual level but above the neuronal level.
Agre, P. 2002. The practical logic of computer work. In (M. Scheutz, ed) Computationalism: New Directions. MIT Press. (Google)
Antony, L. 1997. Feeling fine about the mind. Philosophy and Phenomenological Research 57:381-87. (Cited by 1 | Google)
Bickhard, M. 1996. Troubles with computationalism. In (W. O'Donahue & R. Kitchener, eds) The Philosophy of Psychology. Sage Publications. (Cited by 12 | Google)
Block, N. 1990. The computer model of mind. In (D. Osherson & E. Smith, eds) An Invitation to Cognitive Science, Vol. 3. MIT Press. (Cited by 18 | Google)
Overview of computationalism. Relationship to intentionality, LOT, etc.Boden, M. 1984. What is computational psychology? Proceedings of the Aristotelian Society 58:17-35. (Google)
Bringsjord, S. 1994. Computation, among other things, is beneath us. Minds and Machines 4:469-88. (Cited by 11 | Google)
Bringsjord, S. & Zenzen, M. 1997. Cognition is not computation: The argument from irreversibility. Synthese 113:285-320.
Buller, D. J. 1993. Confirmation and the computational paradigm, or, why do you think they call it artificial intelligence? Minds and Machines 3:155-81. (Cited by 1 | Google)
Chalmers, D. J. 1994. A computational foundation for the study of cognition. Manuscript. (Cited by 26 | Google)
Argues for theses of computational sufficiency and computational explanation, resting on the fact that computation provides an abstract specification of causal organization. With replies to many anti-computationalist worries.Clarke, J. 1972. Turing machines and the mind-body problem. British Journal for the Philosophy of Science 23:1-12. (Google)
Copeland, J. 2002. Narrow versus wide mechanism. In (M. Scheutz, ed) Computationalism: New Directions. MIT Press. (Cited by 23 | Google)
Cummins, R. 1977. Programs in the explanation of behavior. Philosophy of Science 44:269-87. (Cited by 9 | Google)
Demopoulos, W. 1987. On some fundamental distinctions of computationalism. Synthese 70:79-96. (Cited by 6 | Google)
On analog/digital, representational/nonrepresentational, direct/indirect.Dietrich, E. 1990. Computationalism. Social Epistemology. (Cited by 49 | Google)
What computationalism is, as opposed to computerism & cognitivism. Implies: intentionality isn't special, and we don't make decisions. With commentary.Dietrich, E. 1989. Semantics and the computational paradigm in computational psychology. Synthese 79:119-41. (Cited by 10 | Google)
Argues that computational explanation requires the attribution of semantic content. Addresses Stich's arguments against content, and argues that computers are not formal symbol manipulators.Double, R. 1987. The computational model of the mind and philosophical functionalism. Behaviorism 15:131-39. (Google)
Dretske, F. 1985. Machines and the mental. Proceedings and Addresses of the American Philosophical Association 59:23-33. (Cited by 16 | Google)
Machines can't even add, let alone think, as the symbols they use aren't meaningful to them. They would need real information based on perceptual embodiment, and conceptual capacities, for meaning to play a real role.Fetzer, J. H. 1994. Mental algorithms: Are minds computational systems? Pragmatics and Cognition 21:1-29. (Cited by 18 | Google)
Fodor, J. A. 1978. Computation and reduction. Minnesota Studies in the Philosophy of Science 9. Reprinted in RePresentations (MIT Press, 1980). (Cited by 5 | Google)
Fodor, J. 2000. The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology. MIT Press. (Cited by 210 | Google)
Garson, J. W. 1993. Mice in mirrored mazes and the mind. Philosophical Psychology 6:123-34. (Google)
Computationalism is false, as it can't distinguish the ability to solve a maze from the ability to solve its mirror image. An embodied computational taxonomy is needed, rather than software alone.Harnad, S. 1994. Computation is just interpretable symbol manipulation; Cognition isn't. Minds and Machines 4:379-90. (Cited by 19 | Google)
Haugeland, J. 2002. Authentic intentionality. In (M. Scheutz, ed) Computationalism: New Directions. MIT Press. (Google)
Horst, S. 1996. Symbols, Computation, and Intentionality: A Critique of the Computational Theory of Mind. University of California Press. (Cited by 12 | Google)
Horst, S. 1999. Symbols and computation: A critique of the computational theory of mind. Minds and Machines 9:347-381 (Cited by 1 | Google)
Mellor, D. H. 1984. What is computational psychology? II. Proceedings of the Aristotelian Society 58:37-53. (Google)
Mellor, D. H. 1989. How much of the mind is a computer. In (P. Slezak, ed) Computers, Brains and Minds. Kluwer. (Cited by 2 | Google)
Only belief is computational: the rest of the mind is not.Nelson, R. 1987. Machine models for cognitive science. Philosophy of Science. (Google)
Argues contra Pylyshyn 1984 that finite state automata are good models for cognitive science: they are semantically interpretable and process symbols. Piccinini, G. 2004. Functionalism, computationalism, and mental contents. Canadian Journal of Philosophy. (Cited by 6 | Google)
Piccinini, G. 2004. Functionalism, computationalism, and mental states. Studies in History and Philosophy of Science 35. (Cited by 5 | Google)
Pollock, J. 1989. How to Build a Person: A Prolegomenon. MIT Press. (Cited by 28 | Google)
Pylyshyn, Z. W. 1980. Computation and cognition: Issues in the foundation of cognitive science. Behavioral and Brain Sciences 3:111-32. (Cited by 522 | Google)
Pylyshyn, Z. W. 1984. Computation and Cognition. MIT Press. (Cited by 522 | Google)
A thorough account of the symbolic/computational view of cognition.Pylyshyn, Z. W. 1978. Computational models and empirical constraints. Behavioral and Brain Sciences 1:98-128. (Cited by 11 | Google)
Pylyshyn, Z. W. 1989. Computing and cognitive science. In (M. Posner, ed) Foundations of Cognitive Science. MIT Press. (Cited by 27 | Google)
An overview of the computational view of mind. On symbols, levels, control structures, levels of correspondence for computational models, and empirical methods for determining degrees of equivalence.Scheutz, M. (ed) 2002. Computationalism: New Directions. MIT Press. Scheutz, M. 2002. Computationalism: The next generation. In (M. Scheutz, ed) Computationalism: New Directions. MIT Press. (Cited by 2 | Google)
Shapiro, S. C. 1995. Computationalism. Minds and Machines 5:467-87. (Cited by 9 | Google)
Smith, B. C. 2002. The foundations of computing. In (M. Scheutz, ed) Computationalism: New Directions. MIT Press. (Cited by 7 | Google)
Sterelny, K. 1989. Computational functional psychology: problems and prospects. In (P. Slezak, ed) Computers, Brains and Minds. Kluwer. (Google)
Various points on pros and cons of computational psychology.Tibbetts, P. 1996. Residual dualism in computational theories of mind. Dialectica 50:37-52. (Google)
Bishop, M. 2002. Counterfactuals cannot count: A rejoinder to David Chalmers. Consciousness and Cognition 11:642-52. (Google)
Bishop, M. 2003. Dancing with pixies: Strong artificial intelligence and panpsychism. In (J. Preston & M. Bishop, eds) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Google)
Boyle, C. F. 1994. Computation as an intrinsic property. Minds and Machines 4:451-67. (Cited by 1 | Google)
Chalmers, D. J. 1994. On implementing a computation. Minds and Machines 4:391-402. (Cited by 15 | Google)
Gives an account of what it is for a physical system to implement a computation: the causal structure of the system must mirror the formal structure of the computation. Answers objections by Searle and others.Chalmers, D. J. 1996. Does a rock implement every finite-state automaton? Synthese 108:309-33. (Cited by 22 | Google)
Argues that Putnam's "proof" that every ordinary open system implements every finite automaton is fallacious. It can be patched up, but an appropriate account of implementation resists these difficulties.Chrisley, R. L. 1994. Why everything doesn't realize every computation. Minds and Machines 4:403-20. (Cited by 10 | Google)
Cleland, C. 1993. Is the Church-Turing thesis true? Minds and Machines 3:283-312. (Cited by 24 | Google)
Many physically realized functions can't be computed by Turing machines: e.g. "mundane procedures" and continuous functions. So the C-T thesis is false of these, and maybe even of number-theoretic functions.Cleland, C. E. 1995. Effective procedures and computable functions. Minds and Machines 5:9-23. (Cited by 16 | Google)
Copeland, B. J. 1996. What is computation? Synthese 108:335-59. (Cited by 24 | Google)
Endicott, R. P. 1996. Searle, syntax, and observer-relativity. Canadian Journal of Philosophy 26:101-22. (Google)
Goel, V. 1991. Notationality and the information processing mind. Minds and Machines 1:129-166. (Cited by 7 | Google)
Adapts Goodman's notational systems to explicate computational information processing. What is/isn't a physical notational system (e.g. LOT, symbol systems, connectionism) and why. How to reconcile notational/mental content?Hardcastle, V. G. 1995. Computationalism. Synthese 105:303-17. (Cited by 5 | Google)
Pragmatic factors are vital in connecting the theory of computation with empirical theory, and particularly in determining whether a given system counts as performing a given computation.Haugeland, J. 2003. Syntax, semantics, physics. In (J. Preston & M. Bishop, eds) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press. (Cited by 4 | Google)
Horsten, L. 1995. The Church-Turing thesis and effective mundane procedures. Minds and Machines 5:1-8. (Cited by 4 | Google)
Ingarden, R. 2002. Open systems and consciousness: A philosophical discussion. Open Systems & Information Dynamics 9:125-151. (Google)
MacLennan, B. 1994. "Words lie in our way". Minds and Machines 4:421-37. (Google)
MacLennan, B. 2003. Transcending Turing computability. Minds and Machines 13:3-22. (Cited by 3 | Google)
Miscevic, N. 1996. Computationalism and the Kripke-Wittgenstein paradox. Proceedings of the Aristotelian Society 96:215-29. (Cited by 1 | Google)
Scheutz, M. 1999. When physical systems realize functions. Minds and Machines 9:161-196. (Cited by 12 | Google)
Searle, J. R. 1990. Is the brain a digital computer? Proceedings and Addresses of the American Philosophical Association 64:21-37. (Cited by 34 | Google)
Syntax isn't intrinsic to physics, so computational ascriptions are assigned by an observer. Syntax has no causal powers. The brain doesn't process information.Shagrir, O. 1997. Two dogmas of computationalism. Minds and Machines 7:321-44. (Cited by 9 | Google)
Stabler, E. 1987. Kripke on functionalism and automata. Synthese 70:1-22. (Cited by 1 | Google)
Disputes Kripke's argument that there is no objective way of determining when a system computes a given function, due to infinite domains and unreliability. Stipulating normal background conditions is sufficient.Suber, P. 1988. What is software? Journal of Speculative Philosophy 2:89-119. (Cited by 1 | Google)
Welch, P. D. 2004. On the possibility, or otherwise, of hypercomputation. British Journal for the Philosophy of Science 55. (Cited by 2 | Google)
Bergadano, F. 1993. Machine learning and the foundations of inductive inference. Minds and Machines 3:31-51. (Google)
Beavers, A. F. 2002. Phenomenology and artificial intelligence. Metaphilosophy 33:70-82. (Cited by 2 | Google)
Button, G. , Coulter, J. , Lee, J. R. E. & Sharrock, W. 1995. Computers, Minds, and Conduct. Polity Press. (Cited by 28 | Google)
Clark, A. 2002. Artificial intelligence. In (S. Stich & T. Warfield, eds) Blackwell Guide to Philosophy of Mind. Blackwell. (Google)
Fetzer, J. H. 1990. Artificial Intelligence: Its Scope and Limits. Kluwer. (Cited by 22 | Google)
Gips, J. 1994. Toward the ethical robot. In (K. M. Ford, C. Glymour, & P. Hayes, eds) Android Epistemology. MIT Press. (Google)
Haugeland, J. (ed) 1981. Mind Design. MIT Press. (Cited by 71 | Google)
12 papers on the foundations of AI and cognitive science.Hayes, P. J. , Ford, K. M. , & Adams-Webber, J. R. 1994. Human reasoning about artificial intelligence. Journal of Experimental and Theoretical Artificial Intelligence 4:247-63. Reprinted in (E. Dietrich, ed) Thinking Computers and Virtual Persons. Academic Press. (Cited by 4 | Google)
Krellenstein, M. 1987. A reply to 'Parallel computation and the mind-body problem'. Cognitive Science 11:155-7. (Cited by 1 | Google)
Thagard 1986 is wrong: speed and the like make no fundamental difference. With Thagard's reply: it makes a difference in practice, if not in principle.Lacey, N. & Lee, M. 2003. The epistemological foundations of artificial agents. Minds and Machines 13:339-365. Lee, M. & Lacey, N. 2003. The influence of epistemology on the design of artificial agents. Minds and Machines 13:367-395. (Cited by 1 | Google)
Moody, T. C. 1993. Philosophy and Artificial Intelligence. Prentice-Hall. (Cited by 4 | Google)
Preston, B. 1991. AI, anthropocentrism, and the evolution of "intelligence." Minds and Machines 1:259-277. (Google)
Robinson, W. S. 1992. Computers, Minds, and Robots. Temple University Press. (Cited by 6 | Google)
Russell, S. 1991. Inductive learning by machines. Philosophical Studies 64:37-64. (Cited by 5 | Google)
A nice paper on the relationship between techniques of theory formation from machine learning and philosophical problems of induction and knowledge.Rychlak, J. F. 1991. Artificial Intelligence and Human Reason: A Teleological Critique. Columbia University Press. (Cited by 3 | Google)
Schiaffonati, V. 2003. A framework for the foundation of the philosophy of artificial intelligence. Minds and Machines 13:537-552. (Google)
Sloman, A. 1978. The Computer Revolution in Philosophy. Harvester. (Cited by 38 | Google)
All about how the computer should change the way we think about the mind.Sloman, A. 2002. The irrelevance of Turing machines to artificial intelligence. In (M. Scheutz, ed) Computationalism: New Directions. MIT Press. (Cited by 4 | Google)
Thagard, P. 1986. Parallel computation and the mind-body problem. Cognitive Science 10:301-18. (Google)
Parallelism does make a difference, with some mildly anti-functionalist points.Thagard, P. 1990. Philosophy and machine learning. Canadian Journal of Philosophy 20:261-76. (Cited by 2 | Google)
Thagard, P. 1991. Philosophical and computational models of explanation. Philosophical Studies 64:87-104. (Cited by 3 | Google)
A comparison of philosophical and AI approaches to explanation: deductive, statistical, schematic, analogical, causal, and linguistic.Winograd, T. & Flores, F. 1987. Understanding Computers and Cognition. Addison-Wesley. (Cited by 727 | Google)