Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.

6.1e. Machine Mentality, Misc

Albritton, Rogers (1964). Comments on Hilary Putnam's "Robots: Machines or artificially created life?" Journal of Philosophy 61 (November):691-694.   (Google)
Ashby, W. R. (1947). The nervous system as physical machine: With special reference to the origin of adaptive behaviour. Mind 56 (January):44-59.   (Cited by 8 | Google | More links)
Beisecker, David (2006). Dennett's overlooked originality. Minds and Machines 16 (1):43-55.   (Google | More links)
Abstract: No philosopher has worked harder than Dan Dennett to set the possibility of machine mentality on firm philosophical footing. Dennett’s defense of this possibility has both a positive and a negative thrust. On the positive side, he has developed an account of mental activity that is tailor-made for the attribution of intentional states to purely mechanical contrivances, while on the negative side, he pillories as mystery mongering and skyhook grasping any attempts to erect barriers to the conception of machine mentality by excavating gulfs to keep us “bona fide” thinkers apart from the rest of creation. While I think he’s “won” the rhetorical tilts with his philosophical adversaries, I worry that Dennett’s negative side sometimes gets the better of him, and that this obscures advances that can be made on the positive side of his program. In this paper, I show that Dennett is much too dismissive of original intentionality in particular, and that this notion can be put to good theoretical use after all. Though deployed to distinguish different grades of mentality, it can (and should) be incorporated into a philosophical account of the mind that is recognizably Dennettian in spirit
Beloff, John (2002). Minds or machines. Truth Journal.   (Cited by 2 | Google)
Boden, Margaret A. (1995). Could a robot be creative--and would we know? In Android Epistemology. Cambridge: MIT Press.   (Cited by 6 | Google | More links)
Boden, Margaret A. (1969). Machine perception. Philosophical Quarterly 19 (January):33-45.   (Cited by 2 | Google | More links)
Bostrom, Nick (2003). Taking intelligent machines seriously: Reply to critics. Futures 35 (8):901-906.   (Google | More links)
Abstract: In an earlier paper in this journal[1], I sought to defend the claims that (1) substantial probability should be assigned to the hypothesis that machines will outsmart humans within 50 years, (2) such an event would have immense ramifications for many important areas of human concern, and that consequently (3) serious attention should be given to this scenario. Here, I will address a number of points made by several commentators
Brey, Philip (2001). Hubert Dreyfus: Humans versus computers. In American Philosophy of Technology: The Empirical Turn. Bloomington: Indiana University Press.   (Cited by 2 | Google)
Bringsjord, Selmer (1998). Cognition is not computation: The argument from irreversibility. Synthese 113 (2):285-320.   (Cited by 11 | Google | More links)
Abstract:   The dominant scientific and philosophical view of the mind – according to which, put starkly, cognition is computation – is refuted herein, via specification and defense of the following new argument: Computation is reversible; cognition isn't; ergo, cognition isn't computation. After presenting a sustained dialectic arising from this defense, we conclude with a brief preview of the view we would put in place of the cognition-is-computation doctrine
Bringsjord, Selmer (1994). Precis of What Robots Can and Can't Be. Psycoloquy 5 (59).   (Cited by 22 | Google)
Bunge, Mario (1956). Do computers think? (I). British Journal for the Philosophy of Science 7 (26):139-148.   (Cited by 1 | Google | More links)
Bunge, Mario (1956). Do computers think? (II). British Journal for the Philosophy of Science 7 (27):212-219.   (Google | More links)
Burks, Arthur W. (1973). Logic, computers, and men. Proceedings and Addresses of the American Philosophical Association 46:39-57.   (Cited by 4 | Annotation | Google)
Campbell, Richmond M. & Rosenberg, Alexander (1973). Action, purpose, and consciousness among the computers. Philosophy of Science 40 (December):547-557.   (Google | More links)
Casey, Gerard (1992). Minds and machines. American Catholic Philosophical Quarterly 66 (1):57-80.   (Cited by 3 | Google)
Abstract: The emergence of electronic computers in the last thirty years has given rise to many interesting questions. Many of these questions are technical, relating to a machine’s ability to perform complex operations in a variety of circumstances. While some of these questions are not without philosophical interest, the one question which above all others has stimulated philosophical interest is explicitly non-technical and it can be expressed crudely as follows: Can a machine be said to think and, if so, in what sense? The issue has received much attention in the scholarly journals with articles and arguments appearing in great profusion, some resolutely answering this question in the affirmative, some, equally resolutely, answering this question in the negative, and others manifesting modified rapture. While the ramifications of the question are enormous I believe that the issue at the heart of the matter has gradually emerged from the forest of complications
Cherry, Christopher (1991). Machines as persons? - I. In Human Beings. New York: Cambridge University Press.   (Google)
Cohen, L. Jonathan (1955). Can there be artificial minds? Analysis 16 (December):36-41.   (Cited by 3 | Annotation | Google)
Collins, Harry M. (2008). Response to Selinger on Dreyfus. Phenomenology and the Cognitive Sciences 7 (2).   (Google | More links)
Abstract: My claim is clear and unambiguous: no machine will pass a well-designed Turing Test unless we find some means of embedding it in lived social life. We have no idea how to do this but my argument, and all our evidence, suggests that it will not be a necessary condition that the machine have more than a minimal body. Exactly how minimal is still being worked out
Copeland, B. Jack (2000). Narrow versus wide mechanism: Including a re-examination of Turing's views on the mind-machine issue. Journal of Philosophy 97 (1):5-33.   (Cited by 42 | Google | More links)
Dayre, Kenneth M. (1968). Intelligence, bodies, and digital computers. Review of Metaphysics 21 (June):714-723.   (Google)
Dembski, William A. (1999). Are we spiritual machines? First Things 96:25-31.   (Google)
Abstract: For two hundred years materialist philosophers have argued that man is some sort of machine. The claim began with French materialists of the Enlightenment such as Pierre Cabanis, Julien La Mettrie, and Baron d’Holbach (La Mettrie even wrote a book titled Man the Machine). Likewise contemporary materialists like Marvin Minsky, Daniel Dennett, and Patricia Churchland claim that the motions and modifications of matter are sufficient to account for all human experiences, even our interior and cognitive ones. Whereas the Enlightenment philosophes might have thought of humans in terms of gear mechanisms and fluid flows, contemporary materialists think of humans in terms of neurological systems and computational devices. The idiom has been updated, but the underlying impulse to reduce mind to matter remains unchanged
Dennett, Daniel C. (1984). Can machines think? In M. G. Shafto (ed.), How We Know. Harper & Row.   (Cited by 24 | Annotation | Google)
Dennett, Daniel C. (1997). Did Hal commit murder? In D. Stork (ed.), Hal's Legacy: 2001's Computer As Dream and Reality. MIT Press.   (Google)
Abstract: The first robot homicide was committed in 1981, according to my files. I have a yellowed clipping dated 12/9/81 from the Philadelphia Inquirer--not the National Enquirer--with the headline: "Robot killed repairman, Japan reports." The story was an anti-climax: at the Kawasaki Heavy Industries plant in Akashi, a malfunctioning robotic arm pushed a repairman against a gearwheel-milling machine, crushing him to death. The repairman had failed to follow proper instructions for shutting down the arm before entering the workspace. Why, indeed, had this industrial accident in Japan been reported in a Philadelphia newspaper? Every day somewhere in the world a human worker is killed by one machine or another. The difference, of course, was that in the public imagination at least, this was no ordinary machine; this was a robot, a machine that might have a mind, might have evil intentions, might be capable not just of homicide but of murder
Dretske, Fred (1993). Can intelligence be artificial? Philosophical Studies 71 (2):201-16.   (Cited by 3 | Annotation | Google | More links)
Dretske, Fred (1985). Machines and the mental. Proceedings and Addresses of the American Philosophical Association 59 (1):23-33.   (Cited by 27 | Annotation | Google)
Drexler, Eric (1986). Thinking machines. In Engines of Creation. Fourth Estate.   (Cited by 1 | Google)
Dreyfus, Hubert L. (1972). What Computers Can't Do. Harper and Row.   (Cited by 847 | Annotation | Google)
Dreyfus, Hubert L. (1967). Why computers must have bodies in order to be intelligent. Review of Metaphysics 21 (September):13-32.   (Cited by 13 | Google)
Drozdek, Adam (1993). Computers and the mind-body problem: On ontological and epistemological dualism. Idealistic Studies 23 (1):39-48.   (Google)
Endicott, Ronald P. (1996). Searle, syntax, and observer-relativity. Canadian Journal of Philosophy 26 (1):101-22.   (Cited by 3 | Google)
Abstract: I critically examine some provocative arguments that John Searle presents in his book The Rediscovery of Mind to support the claim that the syntactic states of a classical computational system are "observer relative" or "mind dependent" or otherwise less than fully and objectively real. I begin by explaining how this claim differs from Searle's earlier and more well-known claim that the physical states of a machine, including the syntactic states, are insufficient to determine its semantics. In contrast, his more recent claim concerns the syntax, in particular, whether a machine actually has symbols to underlie its semantics. I then present and respond to a number of arguments that Searle offers to support this claim, including whether machine symbols are observer relative because the assignment of syntax is arbitrary, or linked to universal realizability, or linked to the sub-personal interpretive acts of a homunculus, or linked to a person's consciousness. I conclude that a realist about the computational model need not be troubled by such arguments. Their key premises need further support.
Fisher, Mark (1983). A note on free will and artificial intelligence. Philosophia 13 (September):75-80.   (Google | More links)
Fozzy, P. J. (1963). Professor MacKay on machines. British Journal for the Philosophy of Science 14 (August):154-156.   (Google | More links)
Friedland, Julian (2005). Wittgenstein and the aesthetic robot's handicap. Philosophical Investigations 28 (2):177-192.   (Google | More links)
Fulton, James S. (1957). Computing machines and minds. Personalist 38:62-72.   (Google)
Gaglio, Salvatore (2007). Intelligent artificial systems. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
Gams, Matjaz (ed.) (1997). Mind Versus Computer: Were Dreyfus and Winograd Right? Amsterdam: IOS Press.   (Cited by 7 | Google | More links)
Gauld, Alan (1966). Could a machine perceive? British Journal for the Philosophy of Science 17 (May):44-58.   (Cited by 3 | Google | More links)
Gogol, Daniel (1970). Determinism and the predicting machine. Philosophy and Phenomenological Research 30 (March):455-456.   (Google | More links)
Goldkind, Stuart (1982). Machines and mistakes. Ratio 24 (December):173-184.   (Cited by 1 | Google)
Goldberg, Sanford C. (1997). The very idea of computer self-knowledge and self-deception. Minds and Machines 7 (4):515-529.   (Cited by 5 | Google | More links)
Abstract:   Do computers have beliefs? I argue that anyone who answers in the affirmative holds a view that is incompatible with what I shall call the commonsense approach to the propositional attitudes. My claims shall be two. First, the commonsense view places important constraints on what can be acknowledged as a case of having a belief. Second, computers – at least those for which having a belief would be conceived as having a sentence in a belief box – fail to satisfy some of these constraints. This second claim can best be brought out in the context of an examination of the idea of computer self-knowledge and self-deception, but the conclusion is perfectly general: the idea that computers are believers, like the idea that computers could have self-knowledge or be self-deceived, is incompatible with the commonsense view. The significance of the argument lies in the choice it forces on us: whether to revise our notion of belief so as to accommodate the claim that computers are believers, or to give up on that claim so as to preserve our pretheoretic notion of the attitudes. We cannot have it both ways
Gomila, Antoni (1995). From cognitive systems to persons. In Android Epistemology. Cambridge: MIT Press.   (Cited by 2 | Google)
Gunderson, Keith (1963). Interview with a robot. Analysis 23 (June):136-142.   (Cited by 2 | Google)
Gunderson, Keith (1985). Mentality and Machines, Second Edition. Minneapolis: University of Minnesota Press.   (Google)
Hauser, Larry (1993). The sense of thinking. Minds and Machines 3 (1):21-29.   (Cited by 3 | Google | More links)
Abstract:   It will be found that the great majority, given the premiss that thought is not distinct from corporeal motion, take a much more rational line and maintain that thought is the same in the brutes as in us, since they observe all sorts of corporeal motions in them, just as in us. And they will add that the difference, which is merely one of degree, does not imply any essential difference; from this they will be quite justified in concluding that, although there may be a smaller degree of reason in the beasts than there is in us, the beasts possess minds which are of exactly the same type as ours. (Descartes 1642: 288–289.)
Hauser, Larry (1993). Why isn't my pocket calculator a thinking thing? Minds and Machines 3 (1):3-10.   (Cited by 11 | Google | More links)
Abstract: My pocket calculator (Cal) has certain arithmetical abilities: it seems Cal calculates. That calculating is thinking seems equally untendentious. Yet these two claims together provide premises for a seemingly valid syllogism whose conclusion -- Cal thinks -- most would deny. I consider several ways to avoid this conclusion, and find them mostly wanting. Either we ourselves can't be said to think or calculate if our calculation-like performances are judged by the standards proposed to rule out Cal; or the standards -- e.g., autonomy and self-consciousness -- make it impossible to verify whether anything or anyone (save myself) meets them. While appeals to the intentionality of thought or the unity of minds provide more credible lines of resistance, available accounts of intentionality and mental unity are insufficiently clear and warranted to provide very substantial arguments against Cal's title to be called a thinking thing. Indeed, considerations favoring granting that title are more formidable than generally appreciated
Heffernan, James D. (1978). Some doubts about Turing machine arguments. Philosophy of Science 45 (December):638-647.   (Google | More links)
Henley, Tracy B. (1990). Natural problems and artificial intelligence. Behavior and Philosophy 18:43-55.   (Cited by 4 | Annotation | Google)
Joske, W. D. (1972). Deliberating machines. Philosophical Papers 1 (October):57-66.   (Google)
Kary, Michael & Mahner, Martin (2002). How would you know if you synthesized a thinking thing? Minds and Machines 12 (1):61-86.   (Cited by 1 | Google | More links)
Abstract:   We confront the following popular views: that mind or life are algorithms; that thinking, or more generally any process other than computation, is computation; that anything other than a working brain can have thoughts; that anything other than a biological organism can be alive; that form and function are independent of matter; that sufficiently accurate simulations are just as genuine as the real things they imitate; and that the Turing test is either a necessary or sufficient or scientific procedure for evaluating whether or not an entity is intelligent. Drawing on the distinction between activities and tasks, and the fundamental scientific principles of ontological lawfulness, epistemological realism, and methodological skepticism, we argue for traditional scientific materialism of the emergentist kind in opposition to the functionalism, behaviourism, tacit idealism, and merely decorative materialism of the artificial intelligence and artificial life communities
Kearns, John T. (1997). Thinking machines: Some fundamental confusions. Minds and Machines 7 (2):269-87.   (Cited by 8 | Google | More links)
Abstract:   This paper explores Church's Thesis and related claims made by Turing. Church's Thesis concerns computable numerical functions, while Turing's claims concern both procedures for manipulating uninterpreted marks and machines that generate the results that these procedures would yield. It is argued that Turing's claims are true, and that they support (the truth of) Church's Thesis. It is further argued that the truth of Turing's and Church's Theses has no interesting consequences for human cognition or cognitive abilities. The Theses don't even mean that computers can do as much as people can when it comes to carrying out effective procedures. For carrying out a procedure is a purposive, intentional activity. No actual machine does, or can do, as much
Krishna, Daya (1961). "Lying" and the compleat robot. British Journal for the Philosophy of Science 12 (August):146-149.   (Cited by 1 | Google | More links)
Kugel, Peter (2002). Computing machines can't be intelligent (...And Turing said so). Minds and Machines 12 (4):563-579.   (Cited by 4 | Google | More links)
Abstract:   According to the conventional wisdom, Turing (1950) said that computing machines can be intelligent. I don't believe it. I think that what Turing really said was that computing machines – computers limited to computing – can only fake intelligence. If we want computers to become genuinely intelligent, we will have to give them enough initiative (Turing, 1948, p. 21) to do more than compute. In this paper, I want to try to develop this idea. I want to explain how giving computers more "initiative" can allow them to do more than compute. And I want to say why I believe (and believe that Turing believed) that they will have to go beyond computation before they can become genuinely intelligent
Lanier, Jaron (ms). Mindless thought experiments (a critique of machine intelligence).   (Google)
Abstract: Since there isn't a computer that seems conscious at this time, the idea of machine consciousness is supported by thought experiments. Here's one old chestnut: "What if you replaced your neurons one by one with neuron sized and shaped substitutes made of silicon chips that perfectly mimicked the chemical and electric functions of the originals? If you just replaced one single neuron, surely you'd feel the same. As you proceed, as more and more neurons are replaced, you'd stay conscious. Why wouldn't you still be conscious at the end of the process, when you'd reside in a brain shaped glob of silicon? And why couldn't the resulting replacement brain have been manufactured by some other means?"
Lanier, Jaron (1998). Three objections to the idea of artificial intelligence. In Stuart R. Hameroff, Alfred W. Kaszniak & A. C. Scott (eds.), Toward a Science of Consciousness II. MIT Press.   (Google)
Laymon, Ronald E. (1988). Some computers can add (even if the IBM 1620 couldn't): Defending ENIAC's accumulators against Dretske. Behaviorism 16:1-16.   (Google)
Lind, Richard W. (1986). The priority of attention: Intentionality for automata. The Monist 69 (October):609-619.   (Cited by 1 | Google)
Long, Douglas C. (1994). Why Machines Can Neither Think nor Feel. In Dale W. Jamieson (ed.), Language, Mind and Art. Kluwer.   (Cited by 1 | Google)
Abstract: Over three decades ago, in a brief but provocative essay, Paul Ziff argued for the thesis that robots cannot have feelings because they are "mechanisms, not organisms, not living creatures. There could be a broken-down robot but not a dead one. Only living creatures can literally have feelings." Since machines are not living things they cannot have feelings
Mackay, Donald M. (1951). Mindlike behaviour in artefacts. British Journal for the Philosophy of Science 2 (August):105-21.   (Google | More links)
Mackay, Donald M. (1952). Mentality in machines, part III. Proceedings of the Aristotelian Society, Supplementary Volume 26:61-86.   (Cited by 11 | Google)
Mackay, Donald M. (1962). The use of behavioural language to refer to mechanical processes. British Journal for the Philosophy of Science 13 (August):89-103.   (Cited by 9 | Google | More links)
Manning, Rita C. (1987). Why Sherlock Holmes can't be replaced by an expert system. Philosophical Studies 51 (January):19-28.   (Cited by 3 | Annotation | Google | More links)
Mays, W. (1952). Can machines think? Philosophy 27 (April):148-62.   (Cited by 7 | Google)
McCarthy, John (1979). Ascribing mental qualities to machines. In Martin Ringle (ed.), Philosophical Perspectives in Artificial Intelligence. Humanities Press.   (Cited by 168 | Google | More links)
Abstract: Ascribing mental qualities like beliefs, intentions and wants to a machine is sometimes correct if done conservatively and is sometimes necessary to express what is known about its state. We propose some new definitional tools for this: definitions relative to an approximate theory and second order structural definitions
McNamara, Paul (1993). Comments on "Can intelligence be artificial?" Philosophical Studies 71 (2):217-222.   (Google | More links)
Minsky, Marvin L. (1968). Matter, minds, models. In Marvin L. Minsky (ed.), Semantic Information Processing. MIT Press.   (Cited by 18 | Google)
Minsky, Marvin L. (1982). Why people think computers can't. AI Magazine Fall 1982.   (Cited by 32 | Google | More links)
Abstract: Most people think computers will never be able to think. That is, really think. Not now or ever. To be sure, most people also agree that computers can do many things that a person would have to be thinking to do. Then how could a machine seem to think but not actually think? Well, setting aside the question of what thinking actually is, I think that most of us would answer that by saying that in these cases, what the computer is doing is merely a superficial imitation of human intelligence. It has been designed to obey certain simple commands, and then it has been provided with programs composed of those commands. Because of this, the computer has to obey those commands, but without any idea of what's happening
Nanay, Bence (2006). Symmetry between the intentionality of minds and machines? The biological plausibility of Dennett's position. Minds and Machines 16 (1):57-71.   (Google | More links)
Abstract: One of the most influential arguments against the claim that computers can think is that while our intentionality is intrinsic, that of computers is derived: it is parasitic on the intentionality of the programmer who designed the computer-program. Daniel Dennett chose a surprising strategy for arguing against this asymmetry: instead of denying that the intentionality of computers is derived, he endeavours to argue that human intentionality is derived too. I intend to examine the biological plausibility of Dennett's suggestion and show that Dennett's argument for the claim that human intentionality is derived because it was designed by natural selection is based on a misunderstanding of how natural selection works
Negley, Glenn (1951). Cybernetics and theories of mind. Journal of Philosophy 48 (September):574-82.   (Cited by 2 | Google | More links)
Pinsky, Leonard (1951). Do machines think about machines thinking? Mind 60 (July):397-398.   (Google | More links)
Preston, Beth (1995). The ontological argument against the mind-machine hypothesis. Philosophical Studies 80 (2):131-57.   (Annotation | Google | More links)
Proudfoot, Diane (2004). The implications of an externalist theory of rule-following behavior for robot cognition. Minds and Machines 14 (3):283-308.   (Google | More links)
Abstract:   Given (1) Wittgenstein's externalist analysis of the distinction between following a rule and behaving in accordance with a rule, (2) prima facie connections between rule-following and psychological capacities, and (3) pragmatic issues about training, it follows that most, even all, future artificially intelligent computers and robots will not use language, possess concepts, or reason. This argument suggests that AI's traditional aim of building machines with minds, exemplified in current work on cognitive robotics, is in need of substantial revision
Puccetti, Roland (1966). Can humans think? Analysis 26 (June):198-202.   (Google)
Putnam, Hilary (1967). The mental life of some machines. In Hector-Neri Castaneda (ed.), Intentionality, Minds and Perception. Wayne State University Press.   (Cited by 37 | Annotation | Google)
Pylyshyn, Zenon W. (1975). Minds, machines and phenomenology: Some reflections on Dreyfus' What Computers Can't Do. Cognition 3:57-77.   (Cited by 7 | Google)
Rapaport, William J. (1993). Because mere calculating isn't thinking: Comments on Hauser's Why Isn't My Pocket Calculator a Thinking Thing?. Minds and Machines 3 (1):11-20.   (Cited by 5 | Google | More links)
Rapaport, William J. (online). Computer processes and virtual persons: Comments on Cole's "artificial intelligence and personal identity".   (Cited by 7 | Google | More links)
Abstract: This is a draft of the written version of comments on a paper by David Cole, presented orally at the American Philosophical Association Central Division meeting in New Orleans, 27 April 1990. Following the written comments are 2 appendices: One contains a letter to Cole updating these comments. The other is the handout from the oral presentation
Ritchie, Graeme (2007). Some empirical criteria for attributing creativity to a computer program. Minds and Machines 17 (1).   (Google | More links)
Abstract: Over recent decades there has been a growing interest in the question of whether computer programs are capable of genuinely creative activity. Although this notion can be explored as a purely philosophical debate, an alternative perspective is to consider what aspects of the behaviour of a program might be noted or measured in order to arrive at an empirically supported judgement that creativity has occurred. We sketch out, in general abstract terms, what goes on when a potentially creative program is constructed and run, and list some of the relationships (for example, between input and output) which might contribute to a decision about creativity. Specifically, we list a number of criteria which might indicate interesting properties of a program’s behaviour, from the perspective of possible creativity. We go on to review some ways in which these criteria have been applied to actual implementations, and some possible improvements to this way of assessing creativity
Ronald, E. & Sipper, Moshe (2001). Intelligence is not enough: On the socialization of talking machines. Minds and Machines 11 (4):567-576.   (Cited by 3 | Google | More links)
Abstract:   Since the introduction of the imitation game by Turing in 1950 there has been much debate as to its validity in ascertaining machine intelligence. We wish herein to consider a different issue altogether: granted that a computing machine passes the Turing Test, thereby earning the label of "Turing Chatterbox", would it then be of any use (to us humans)? From the examination of scenarios, we conclude that when machines begin to participate in social transactions, unresolved issues of trust and responsibility may well overshadow any raw reasoning ability they possess
Baker, Lynne Rudder (1981). Why computers can't act. American Philosophical Quarterly 18 (April):157-163.   (Cited by 6 | Google)
Schmidt, C. T. A. (2005). Of robots and believing. Minds and Machines 15 (2):195-205.   (Cited by 6 | Google | More links)
Abstract: Discussion about the application of scientific knowledge in robotics in order to build people helpers is widespread. The issue herein addressed is philosophically poignant, that of robots that are “people”. It is currently popular to speak about robots and the image of Man. Behind this lurks the dialogical mind and the questions about the significance of an artificial version of it. Without intending to defend or refute the discourse in favour of ‘recreating’ Man, a lesser familiar question is brought forth: “and what if we were capable of creating a very convincible replica of man (constructing a robot-person), what would the consequences of this be and would we be satisfied with such technology?” Thorny topic; it questions the entire knowledge foundation upon which strong AI/Robotics is positioned. The author argues for improved monitoring of technological progress and thus favours implementing weaker techniques
Scriven, Michael (1960). The compleat robot: A prolegomena to androidology. In Sidney Hook (ed.), Dimensions of Mind. New York University Press.   (Cited by 6 | Annotation | Google)
Scriven, Michael (1963). The supercomputer as liar. British Journal for the Philosophy of Science 13 (February):313-314.   (Google | More links)
Selinger, Evan (2008). Collins's incorrect depiction of Dreyfus's critique of artificial intelligence. Phenomenology and the Cognitive Sciences 7 (2).   (Google)
Abstract: Harry Collins interprets Hubert Dreyfus’s philosophy of embodiment as a criticism of all possible forms of artificial intelligence. I argue that this characterization is inaccurate and predicated upon a misunderstanding of the relevance of phenomenology for empirical scientific research
Sloman, Aaron (1986). What sorts of machines can understand the symbols they use? Proceedings of the Aristotelian Society, Supplementary Volume 60:61-80.   (Cited by 4 | Google)
Spilsbury, R. J. (1952). Mentality in machines, part II. Proceedings of the Aristotelian Society, Supplementary Volume 26:27-60.   (Cited by 2 | Google)
Srzednicki, Jan (1962). Could machines talk? Analysis 22 (April):113-117.   (Google)
Stahl, Bernd Carsten (2006). Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Ethics and Information Technology 8 (4):205-213.   (Google | More links)
Abstract: There has been much debate whether computers can be responsible. This question is usually discussed in terms of personhood and personal characteristics, which a computer may or may not possess. If a computer fulfils the conditions required for agency or personhood, then it can be responsible; otherwise not. This paper suggests a different approach. An analysis of the concept of responsibility shows that it is a social construct of ascription which is only viable in certain social contexts and which serves particular social aims. If this is the main aspect of responsibility then the question whether computers can be responsible no longer hinges on the difficult problem of agency but on the possibly simpler question whether responsibility ascriptions to computers can fulfil social goals. The suggested solution to the question whether computers can be subjects of responsibility is the introduction of a new concept, called “quasi-responsibility” which will emphasise the social aim of responsibility ascription and which can be applied to computers
Tallis, Raymond C. (2004). Why the Mind Is Not a Computer: A Pocket Lexicon of Neuromythology. Thorverton UK: Imprint Academic.   (Cited by 1 | Google | More links)
Abstract: Taking a series of key words such as calculation, language, information and memory, Professor Tallis shows how their misuse has lured a whole generation into...
Taube, M. (1961). Computers and Common Sense: The Myth of Thinking Machines. New York: Columbia University Press.   (Cited by 12 | Google)
Velleman, J. David (online). Artificial agency.   (Google | More links)
Abstract: I argue that participants in a virtual world such as "Second Life" exercise genuine agency via their avatars. Indeed, their avatars are fictional bodies with which they act in the virtual world, just as they act in the real world with their physical bodies. Hence their physical bodies can be regarded as their default avatars. I also discuss recent research into "believable" software agents, which are designed on principles borrowed from the character-based arts, especially cinematic animation as practiced by the artists at Disney and Warner Brothers Studios. I claim that these agents exemplify a kind of autonomy that should be of greater interest to philosophers than that exemplified by the generic agent modeled in current philosophical theory. The latter agent is autonomous by virtue of being governed by itself; but a believable agent appears to be governed by a self, which is the anima by which it appears to be animated. Putting these two discussions together, I suggest that philosophers of action should focus their attention on how we animate our bodies
Wait, Eldon C. (2006). What computers could never do. In Analecta Husserliana: The Yearbook of Phenomenological Research, Volume XD. Dordrecht: Springer.   (Google)
Waldrop, Mitchell (1990). Can computers think? In R. Kurzweil (ed.), The Age of Intelligent Machines. MIT Press.   (Cited by 2 | Google)
Wallace, Rodrick (ms). New mathematical foundations for AI and alife: Are the necessary conditions for animal consciousness sufficient for the design of intelligent machines?   (Google | More links)
Abstract: Rodney Brooks' call for 'new mathematics' to revitalize the disciplines of artificial intelligence and artificial life can be answered by adaptation of what Adams has called 'the informational turn in philosophy', aided by the novel perspectives that program gives regarding empirical studies of animal cognition and consciousness. Going backward from the necessary conditions communication theory imposes on animal cognition and consciousness to sufficient conditions for machine design is, however, an extraordinarily difficult engineering task. The most likely use of the first generations of conscious machines will be to model the various forms of psychopathology, since we have little or no understanding of how consciousness is stabilized in humans or other animals
Weiss, Paul A. (1990). On the impossibility of artificial intelligence. Review of Metaphysics (December):335-341.   (Google)
Whiteley, C. H. (1956). Note on the concept of mind. Analysis 16 (January):68-70.   (Google)
Whobrey, Darren (2001). Machine mentality and the nature of the ground relation. Minds and Machines 11 (3):307-346.   (Cited by 7 | Google | More links)
Abstract:   John Searle distinguished between weak and strong artificial intelligence (AI). This essay discusses a third alternative, mild AI, according to which a machine may be capable of possessing a species of mentality. Using James Fetzer's conception of minds as semiotic systems, the possibility of what might be called "mild AI" receives consideration. Fetzer argues against strong AI by contending that digital machines lack the ground relationship required of semiotic systems. In this essay, the implementational nature of semiotic processes posited by Charles S. Peirce's triadic sign relation is re-examined in terms of the underlying dispositional processes and the ontological levels they would span in an inanimate machine. This suggests that, if non-human mentality can be replicated rather than merely simulated in a digital machine, the direction to pursue appears to be that of mild AI
Wilks, Yorick (1976). Dreyfus's disproofs. British Journal for the Philosophy of Science 27 (2).   (Cited by 1 | Google | More links)
Wisdom, John O. (1952). Mentality in machines, part I. Proceedings of the Aristotelian Society, Supplementary Volume 26:1-26.   (Google)