Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.
 
   

6.1d. Machine Consciousness (Machine Consciousness on PhilPapers)

See also:
Tson, M. E. (ms). A Brief Explanation of Consciousness.   (Google)
Abstract: This short paper (4 pages) demonstrates how subjective experience, language, and consciousness can be explained in terms of abilities we share with the simplest of creatures, specifically the ability to detect, react to, and associate various aspects of the world.
Adams, William Y. (online). Intersubjective transparency and artificial consciousness.   (Google)
Adams, William Y. (2004). Machine consciousness: Plausible idea or semantic distortion? Journal of Consciousness Studies 11 (9):46-56.   (Cited by 1 | Google)
Aleksander, Igor L. & Dunmall, B. (2003). Axioms and tests for the presence of minimal consciousness in agents I: Preamble. Journal of Consciousness Studies 10 (4):7-18.   (Cited by 13 | Google)
Aleksander, Igor L. (2007). Machine consciousness. In Max Velmans & Susan Schneider (eds.), The Blackwell Companion to Consciousness. Blackwell.   (Cited by 1 | Google | More links)
Aleksander, Igor L. (2006). Machine consciousness. In Steven Laureys (ed.), Boundaries of Consciousness. Elsevier.   (Cited by 1 | Google | More links)
Amoroso, Richard L. (1997). The theoretical foundations for engineering a conscious quantum computer. In M. Gams, M. Paprzycki & X. Wu (eds.), Mind Versus Computer: Were Dreyfus and Winograd Right? Amsterdam: IOS Press.   (Cited by 5 | Google | More links)
Angel, Leonard (1994). Am I a computer? In Eric Dietrich (ed.), Thinking Computers and Virtual Persons. Academic Press.   (Google)
Angel, Leonard (1989). How to Build a Conscious Machine. Westview Press.   (Cited by 3 | Google)
Arrabales, R. & Sanchis, A. (forthcoming). Applying machine consciousness models in autonomous situated agents. Pattern Recognition Letters.   (Google)
Arrabales, R.; Ledezma, A. & Sanchis, A. (online). Modelling consciousness for autonomous robot exploration. Lecture Notes in Computer Science.   (Google)
Arrington, Robert L. (1999). Machines, consciousness, and thought. Idealistic Studies 29 (3):231-243.   (Google)
Aydede, Murat & Guzeldere, Guven (2000). Consciousness, intentionality, and intelligence: Some foundational issues for artificial intelligence. Journal Of Experimental and Theoretical Artificial Intelligence 12 (3):263-277.   (Cited by 6 | Google | More links)
Bair, Puran K. (1981). Computer metaphors for consciousness. In The Metaphors Of Consciousness. New York: Plenum Press.   (Google)
Barnes, E. (1991). The causal history of computational activity: Maudlin and Olympia. Journal of Philosophy 88 (6):304-16.   (Cited by 5 | Annotation | Google | More links)
Bell, John L. (online). Algorithmicity and consciousness.   (Google)
Abstract: Why should one believe that conscious awareness is solely the result of organizational complexity? What is the connection between consciousness and combinatorics: the transformation of quantity into quality? The claim that the former is reducible to the latter seems unconvincing—as unlike as chalk and cheese! In his book [1] Penrose is at least attempting to compare like with like: the enigma of consciousness with the progress of physics.
Birnbacher, Dieter (1995). Artificial consciousness. In Thomas Metzinger (ed.), Conscious Experience. Ferdinand Schoningh.   (Google)
Bonzon, Pierre (2003). Conscious behavior through reflexive dialogs. In A. Günter, R. Kruse & B. Neumann (eds.), Lecture Notes in Artificial Intelligence. Springer.   (Google)
Abstract: We consider the problem of executing conscious behavior, i.e., of driving an agent's actions while allowing it, at the same time, to run concurrent processes that reflect on these actions. Toward this end, we express a single agent's plans as reflexive dialogs in a multi-agent system defined by a virtual machine. We extend this machine's planning language by introducing two specific operators for reflexive dialogs, i.e., conscious and caught, for monitoring beliefs and actions, respectively. The possibility of using the same language both to drive a machine and to establish reflexive communication within the machine itself stands as a key feature of our model.
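By way of illustration only (nothing below comes from Bonzon's paper, and every name is invented), a minimal Python sketch of the general idea, an agent whose plan steps are simultaneously observed by a concurrent reflective monitor, might look like this:

    import queue
    import threading

    def agent(plan, log):
        # drive the agent's actions, announcing each one to the monitor
        for step in plan:
            log.put(("did", step))
        log.put(("done", None))

    def monitor(log, beliefs):
        # concurrent reflective process: comments on the actions as they happen
        while True:
            tag, step = log.get()
            if tag == "done":
                break
            beliefs.append("I just performed: " + step)

    log, beliefs = queue.Queue(), []
    watcher = threading.Thread(target=monitor, args=(log, beliefs))
    watcher.start()
    agent(["sense", "move", "grasp"], log)
    watcher.join()
    print(beliefs)   # the agent's reflexive record of its own behavior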
Bringsjord, Selmer (1994). Could, how could we tell if, and should - androids have inner lives? In Kenneth M. Ford, C. Glymour & Patrick Hayes (eds.), Android Epistemology. MIT Press.   (Cited by 16 | Google)
Bringsjord, Selmer (2004). On building robot persons: Response to Zlatev. Minds and Machines 14 (3):381-385.   (Google | More links)
Abstract:   Zlatev offers surprisingly weak reasoning in support of his view that robots with the right kind of developmental histories can have meaning. We ought nonetheless to praise Zlatev for an impressionistic account of how attending to the psychology of human development can help us build robots that appear to have intentionality
Bringsjord, Selmer (2007). Offer: One billion dollars for a conscious robot; if you're honest, you must decline. Journal of Consciousness Studies 14 (7):28-43.   (Cited by 1 | Google | More links)
Abstract: You are offered one billion dollars to 'simply' produce a proof-of-concept robot that has phenomenal consciousness -- in fact, you can receive a deliciously large portion of the money up front, by simply starting a three-year work plan in good faith. Should you take the money and commence? No. I explain why this refusal is in order, now and into the foreseeable future
Bringsjord, Selmer (1992). What Robots Can and Can't Be. Kluwer.   (Cited by 85 | Google | More links)
Brockmeier, Scott (1997). Computational architecture and the creation of consciousness. The Dualist 4.   (Cited by 2 | Google)
Brown, Geoffrey (1989). Minds, Brains And Machines. St Martin's Press.   (Cited by 1 | Google)
Buttazzo, G. (2001). Artificial consciousness: Utopia or real possibility? Computer 34:24-30.   (Cited by 17 | Google | More links)
Caplain, G. (1995). Is consciousness a computational property? Informatica 19:615-19.   (Cited by 2 | Google | More links)
Caws, Peter (1988). Subjectivity in the machine. Journal for the Theory of Social Behaviour 18 (September):291-308.   (Google | More links)
Chandler, Keith A. (2002). Artificial intelligence and artificial consciousness. Philosophia 31 (1):32-46.   (Google)
Chella, Antonio & Manzotti, Riccardo (2007). Artificial Consciousness. Imprint Academic.   (Cited by 1 | Google)
Cherry, Christopher (1989). Reply--the possibility of computers becoming persons: A response to Dolby. Social Epistemology 3 (4):337-348.   (Google)
Clack, Robert J. (1968). The myth of the conscious robot. Personalist 49:351-369.   (Google)
Coles, L. S. (1993). Engineering machine consciousness. AI Expert 8:34-41.   (Google)
Cotterill, Rodney M. J. (2003). Cyberchild: A simulation test-bed for consciousness studies. Journal of Consciousness Studies 10 (4):31-45.   (Cited by 5 | Google)
Danto, Arthur C. (1960). On consciousness in machines. In Sidney Hook (ed.), Dimensions of Mind. New York University Press.   (Cited by 5 | Google)
D'Aquili, Eugene G. & Newberg, Andrew B. (1996). Consciousness and the machine. Zygon 31 (2):235-52.   (Cited by 4 | Google)
Dennett, Daniel C. (1997). Consciousness in Human and Robot Minds. In M. Ito, Y. Miyashita & Edmund T. Rolls (eds.), Cognition, Computation and Consciousness. Oxford University Press.   (Cited by 12 | Google | More links)
Abstract: The best reason for believing that robots might some day become conscious is that we human beings are conscious, and we are a sort of robot ourselves. That is, we are extraordinarily complex self-controlling, self-sustaining physical mechanisms, designed over the eons by natural selection, and operating according to the same well-understood principles that govern all the other physical processes in living things: digestive and metabolic processes, self-repair and reproductive processes, for instance. It may be wildly over-ambitious to suppose that human artificers can repeat Nature's triumph, with variations in material, form, and design process, but this is not a deep objection. It is not as if a conscious machine contradicted any fundamental laws of nature, the way a perpetual motion machine does. Still, many skeptics believe--or in any event want to believe--that it will never be done. I wouldn't wager against them, but my reasons for skepticism are mundane, economic reasons, not theoretical reasons
Dennett, Daniel C. (1994). The practical requirements for making a conscious robot. Philosophical Transactions of the Royal Society 349:133-46.   (Cited by 25 | Google | More links)
Abstract: Arguments about whether a robot could ever be conscious have been conducted up to now in the factually impoverished arena of what is possible "in principle." A team at MIT of which I am a part is now embarking on a longterm project to design and build a humanoid robot, Cog, whose cognitive talents will include speech, eye-coordinated manipulation of objects, and a host of self-protective, self-regulatory and self-exploring activities. The aim of the project is not to make a conscious robot, but to make a robot that can interact with human beings in a robust and versatile manner in real time, take care of itself, and tell its designers things about itself that would otherwise be extremely difficult if not impossible to determine by examination. Many of the details of Cog's "neural" organization will parallel what is known (or presumed known) about their counterparts in the human brain, but the intended realism of Cog as a model is relatively coarse-grained, varying opportunistically as a function of what we think we know, what we think we can build, and what we think doesn't matter. Much of what we think will of course prove to be mistaken; that is one advantage of real experiments over thought experiments
Duch, Włodzisław (2005). Brain-inspired conscious computing architecture. Journal of Mind and Behavior 26 (1-2):1-21.   (Cited by 8 | Google | More links)
Abstract: What type of artificial systems will claim to be conscious and will claim to experience qualia? The ability to comment upon the physical states of a brain-like dynamical system coupled with its environment seems to be sufficient to make such claims. The flow of internal states in such a system, guided and limited by associative memory, is similar to the stream of consciousness. Minimal requirements for an artificial system that will claim to be conscious are given in the form of a specific architecture, named the articon. Nonverbal discrimination of the working memory states of the articon gives it the ability to experience different qualities of internal states. Analysis of the inner state flows of such a system during a typical behavioral process shows that qualia are inseparable from perception and action. The role of consciousness in the learning of skills, when conscious information processing is replaced by subconscious processing, is elucidated. Arguments confirming that phenomenal experience is a result of cognitive processes are presented. Possible philosophical objections based on the Chinese room and other arguments are discussed, but they are insufficient to refute the articon’s claims. Conditions for genuine understanding that go beyond the Turing test are presented. Articons may fulfill such conditions, and in principle the structure of their experiences may be arbitrarily close to the human.
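As a toy illustration of the minimal ability the abstract centers on, a system commenting on the flow of its own internal states as constrained by associative memory, here is a hedged Python sketch; the associations and state names are invented, and this is not Duch's articon:

    import random

    # invented associative memory: each state constrains which states may follow
    ASSOCIATIONS = {"red": ["warm", "alert"], "warm": ["calm"], "alert": ["scan"]}

    def run(state, steps=5):
        commentary = []
        for _ in range(steps):
            nxt = random.choice(ASSOCIATIONS.get(state, ["idle"]))
            # second-order comment on the system's own state transition
            commentary.append("my state changed from '%s' to '%s'" % (state, nxt))
            state = nxt
        return commentary

    for line in run("red"):
        print(line)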
Ettinger, R. C. W. (2004). To be or not to be: The zombie in the computer. In Nick Bostrom, R.C.W. Ettinger & Charles Tandy (eds.), Death and Anti-Death, Volume 2: Two Hundred Years After Kant, Fifty Years After Turing. Palo Alto: Ria University Press.   (Google)
Farleigh, Peter (2007). The ensemble and the single mind. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
Farrell, B. A. (1970). On the design of a conscious device. Mind 79 (July):321-346.   (Cited by 1 | Google | More links)
Franklin, Stan (2003). Ida: A conscious artifact? Journal of Consciousness Studies 10 (4):47-66.   (Cited by 40 | Google)
Gunderson, Keith (1969). Cybernetics and mind-body problems. Inquiry 12 (1-4):406-19.   (Google)
Gunderson, Keith (1971). Mentality and Machines. Doubleday.   (Cited by 29 | Google)
Gunderson, Keith (1968). Robots, consciousness and programmed behaviour. British Journal for the Philosophy of Science 19 (August):109-22.   (Google | More links)
Haikonen, Pentti O. A. (2007). Essential issues of conscious machines. Journal of Consciousness Studies 14 (7):72-84.   (Google | More links)
Abstract: The development of conscious machines faces a number of difficult issues, such as the apparent immateriality of mind, qualia and self-awareness. Consciousness-related cognitive processes such as perception, imagination, motivation and inner speech are also a technical challenge. It is foreseen that the development of machine consciousness would call for a system approach; the developer of conscious machines should consider complete systems that integrate the cognitive processes seamlessly and process information in a transparent way, with representational and non-representational information-processing modes. An overview of the main issues is given and some possible solutions are outlined.
Haikonen, Pentti O. A. (2007). Robot Brains: Circuits and Systems for Conscious Machines. Wiley-Interscience.   (Google | More links)
Haikonen, Pentti O. A. (2003). The Cognitive Approach to Conscious Machines. Thorverton UK: Imprint Academic.   (Cited by 20 | Google | More links)
Harnad, Stevan (2003). Can a machine be conscious? How? Journal of Consciousness Studies 10 (4):67-75.   (Cited by 16 | Google | More links)
Abstract: A "machine" is any causal physical system, hence we are machines, hence machines can be conscious. The question is: which kinds of machines can be conscious? Chances are that robots that can pass the Turing Test -- completely indistinguishable from us in their behavioral capacities -- can be conscious (i.e. feel), but we can never be sure (because of the "other-minds" problem). And we can never know HOW they have minds, because of the "mind/body" problem. We can only know how they pass the Turing Test, but not how, why or whether that makes them feel
Henley, Tracy B. (1991). Consciousness and AI: A reconsideration of Shanon. Journal of Mind and Behavior 12 (3):367-370.   (Google)
Hillis, D. (1998). Can a machine be conscious? In Stuart R. Hameroff, Alfred W. Kaszniak & A. C. Scott (eds.), Toward a Science of Consciousness II. MIT Press.   (Google)
Holland, Owen (2007). A strongly embodied approach to machine consciousness. Journal of Consciousness Studies 14 (7):97-110.   (Cited by 4 | Google | More links)
Abstract: Over sixty years ago, Kenneth Craik noted that, if an organism (or an artificial agent) carried 'a small-scale model of external reality and of its own possible actions within its head', it could use the model to behave intelligently. This paper argues that the possible actions might best be represented by interactions between a model of reality and a model of the agent, and that, in such an arrangement, the internal model of the agent might be a transparent model of the sort recently discussed by Metzinger, and so might offer a useful analogue of a conscious entity. The CRONOS project has built a robot functionally similar to a human that has been provided with an internal model of itself and of the world to be used in the way suggested by Craik; when the system is completed, it will be possible to study its operation from the perspective not only of artificial intelligence, but also of machine consciousness
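A minimal Python sketch of the Craikian arrangement described here (a candidate action tried out on an internal world model that includes the agent itself) might look as follows; all details are invented for illustration and are not taken from the CRONOS project:

    # internal model of the world, including a model of the agent itself
    WORLD_MODEL = {"agent_pos": 0, "goal": 3, "pit": 2}

    def imagine(model, action):
        # try the action on the model, not on the world
        imagined = dict(model)
        imagined["agent_pos"] += {"left": -1, "right": 1}[action]
        return imagined

    def choose(model):
        best_action, best_score = None, float("-inf")
        for action in ("left", "right"):
            outcome = imagine(model, action)
            if outcome["agent_pos"] == outcome["pit"]:
                score = -100   # imagined harm costs nothing real
            else:
                score = -abs(outcome["goal"] - outcome["agent_pos"])
            if score > best_score:
                best_action, best_score = action, score
        return best_action

    print(choose(WORLD_MODEL))   # prints "right", the move the model favors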
Holland, Owen (ed.) (2003). Machine Consciousness. Imprint Academic.   (Cited by 19 | Google | More links)
Abstract: In this collection of essays we hear from an international array of computer and brain scientists who are actively working from both the machine and human ends...
Holland, Owen & Goodman, Russell B. (2003). Robots with internal models: A route to machine consciousness? Journal of Consciousness Studies 10 (4):77-109.   (Cited by 20 | Google | More links)
Holland, Owen; Knight, Rob & Newcombe, Richard (2007). The role of the self process in embodied machine consciousness. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
Joy, Glenn C. (1989). Gunderson and Searle: A common error about artificial intelligence. Southwest Philosophical Studies 28:28-34.   (Google)
Kirk, Robert E. (1986). Sentience, causation and some robots. Australasian Journal of Philosophy 64 (September):308-21.   (Cited by 1 | Annotation | Google | More links)
Kiverstein, Julian (2007). Could a robot have a subjective point of view? Journal of Consciousness Studies 14 (7):127-139.   (Cited by 2 | Google | More links)
Abstract: Scepticism about the possibility of machine consciousness comes in at least two forms. Some argue that our neurobiology is special, and only something sharing our neurobiology could be a subject of experience. Others argue that a machine couldn't be anything else but a zombie: there could never be something it is like to be a machine. I advance a dynamic sensorimotor account of consciousness which argues against both these varieties of scepticism
Levy, Donald (2003). How to psychoanalyze a robot: Unconscious cognition and the evolution of intentionality. Minds and Machines 13 (2):203-212.   (Google | More links)
Abstract: According to a common philosophical distinction, the 'original' intentionality, or 'aboutness', possessed by our thoughts, beliefs and desires is categorically different from the 'derived' intentionality manifested in some of our artifacts -- our words, books and pictures, for example. Those making the distinction claim that the intentionality of our artifacts is 'parasitic' on the 'genuine' intentionality to be found in members of the former class of things. In Kinds of Minds: Toward an Understanding of Consciousness, Daniel Dennett criticizes that claim and the distinction it rests on, and seeks to show that "metaphysically original intentionality" is illusory by working out the implications he sees in the practical possibility of a certain type of robot, i.e., one that generates 'utterances' which are 'inscrutable to the robot's designers', so that we, and they, must consult the robot to discover the meaning of its utterances. I argue that the implications Dennett finds are erroneous, regardless of whether such a robot is possible, and therefore that the real existence of metaphysically original intentionality has not been undermined by the possibility of the robot Dennett describes.
Lucas, John R. (1994). A view of one's own (conscious machines). Philosophical Transactions of the Royal Society, Series A 349:147-52.   (Google)
Lycan, William G. (1998). Qualitative experience in machines. In Terrell Ward Bynum & James H. Moor (eds.), How Computers Are Changing Philosophy. Blackwell.   (Google)
Mackay, Donald M. (1963). Consciousness and mechanism: A reply to Miss Fozzy. British Journal for the Philosophy of Science 14 (August):157-159.   (Google | More links)
Mackay, Donald M. (1985). Machines, brains, and persons. Zygon 20 (December):401-412.   (Google)
Manzotti, Riccardo (2007). From artificial intelligence to artificial consciousness. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
Margolis, Joseph (1974). Ascribing actions to machines. Behaviorism 2:85-93.   (Google)
Marras, Ausonio (1993). Pollock on how to build a person. Dialogue 32 (3):595-605.   (Cited by 1 | Google)
Maudlin, Tim (1989). Computation and consciousness. Journal of Philosophy 86 (August):407-32.   (Cited by 24 | Annotation | Google | More links)
Mayberry, Thomas C. (1970). Consciousness and robots. Personalist 51:222-236.   (Google)
McCann, Hugh J. (2005). Intentional action and intending: Recent empirical studies. Philosophical Psychology 18 (6):737-748.   (Cited by 19 | Google | More links)
Abstract: Recent empirical work calls into question the so-called Simple View that an agent who A’s intentionally intends to A. In experimental studies, ordinary speakers frequently assent to claims that, in certain cases, agents who knowingly behave wrongly intentionally bring about the harm they do; yet the speakers tend to deny that it was the intention of those agents to cause the harm. This paper reports two additional studies that at first appear to support the original ones, but argues that in fact, the evidence of all the studies considered is best understood in terms of the Simple View.
McCarthy, John (1996). Making robots conscious of their mental states. In S. Muggleton (ed.), Machine Intelligence 15. Oxford University Press.   (Cited by 68 | Google | More links)
Abstract: In AI, consciousness of self consists in a program having certain kinds of facts about its own mental processes and state of mind. We discuss what consciousness of its own mental structures a robot will need in order to operate in the common sense world and accomplish the tasks humans will give it. It's quite a lot. Many features of human consciousness will be wanted, some will not, and some abilities not possessed by humans have already been found feasible and useful in limited contexts. We give preliminary fragments of a logical language a robot can use to represent information about its own state of mind. A robot will often have to conclude that it cannot decide a question on the basis of the information in memory and therefore must seek information externally. Gödel's idea of relative consistency is used to formalize non-knowledge. Programs with the kind of consciousness discussed in this article do not yet exist, although programs with some components of it exist. Thinking about consciousness with a view to designing it provides a new approach to some of the problems of consciousness studied by philosophers. One advantage is that it focusses on the aspects of consciousness important for intelligent behavior
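A toy Python sketch of the kind of self-knowledge McCarthy describes, a program that represents the fact that it cannot decide a question from memory and so must look externally, could be as simple as the following; the names are invented, and McCarthy's own proposal uses a logical language rather than Python:

    # invented memory contents for a toy robot
    MEMORY = {"battery_level": "low", "location": "lab"}

    def introspect(question):
        # the program represents facts about its own state of knowledge
        if question in MEMORY:
            return ("known", MEMORY[question])
        return ("unknown", "cannot decide '%s' from memory; seek it externally" % question)

    print(introspect("battery_level"))   # ('known', 'low')
    print(introspect("door_open"))       # ('unknown', "cannot decide 'door_open' ...")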
McDermott, Drew (2007). Artificial intelligence and consciousness. In Philip David Zelazo, Morris Moscovitch & Evan Thompson (eds.), The Cambridge Handbook of Consciousness. Cambridge.   (Google)
McGinn, Colin (1987). Could a machine be conscious? In Colin Blakemore & Susan A. Greenfield (eds.), Mindwaves. Blackwell.   (Cited by 1 | Annotation | Google)
Mele, Alfred R. (2006). The folk concept of intentional action: A commentary. Journal of Cognition and Culture.   (Cited by 1 | Google | More links)
Abstract: In this commentary, I discuss the three main articles in this volume that present survey data relevant to a search for something that might merit the label “the folk concept of intentional action” – the articles by Joshua Knobe and Arudra Burra, Bertram Malle, and Thomas Nadelhoffer. My guiding question is this: What shape might we find in an analysis of intentional action that takes at face value the results of all of the relevant surveys about vignettes discussed in these three articles? To simplify exposition, I assume that there is something that merits the label I mentioned.
Menant, Christophe, Proposal for an approach to artificial consciousness based on self-consciousness.   (Google | More links)
Abstract: Current research on artificial consciousness is focused on phenomenal consciousness and on functional consciousness. We propose to shift the focus to self-consciousness in order to open new areas of investigation. We use an existing scenario where self-consciousness is considered as the result of an evolution of representations. Application of the scenario to the possible build up of a conscious robot also introduces questions relative to emotions in robots. Areas of investigation are proposed as a continuation of this approach
Minsky, Marvin L. (1991). Conscious machines. In Machinery of Consciousness.   (Google)
Moffett, Marc A. & Cole Wright, Jennifer (online). The folk on know-how: Why radical intellectualism does not over-intellectualize.   (Google)
Abstract: Philosophical discussion of the nature of know-how has focused on the relation between know-how and ability. Broadly speaking, neo-Ryleans attempt to identify know-how with a certain type of ability, while, traditionally, intellectualists attempt to reduce it to some form of propositional knowledge. For our purposes, however, this characterization of the debate is too crude. Instead, we prefer the following more explicit taxonomy. Anti-intellectualists, as we will use the term, maintain that knowing how to φ entails the ability to φ. Dispositionalists maintain that the ability to φ is sufficient (modulo some fairly innocuous constraints) for knowing how to φ. Intellectualists, as we will use the term, deny the anti-intellectualist claim. Finally, radical intellectualists deny both the anti-intellectualist and dispositionalist claims. Pace neo-Ryleans (who in our taxonomy are those who accept both dispositionalism and anti-intellectualism), radical intellectualists maintain that the ability to φ is neither necessary nor sufficient for knowing how to φ.
Nichols, Shaun (2004). Folk concepts and intuitions: From philosophy to cognitive science. Trends in Cognitive Sciences.   (Cited by 10 | Google | More links)
Abstract: Analytic philosophers have long used a priori methods to characterize folk concepts like knowledge, belief, and wrongness. Recently, researchers have begun to exploit social scientific methodologies to characterize such folk concepts. One line of work has explored folk intuitions on cases that are disputed within philosophy. A second approach, with potentially more radical implications, applies the methods of cross-cultural psychology to philosophical intuitions. Recent work suggests that people in different cultures have systematically different intuitions surrounding folk concepts like wrong, knows, and refers. A third strand of research explores the emergence and character of folk concepts in children. These approaches to characterizing folk concepts provide important resources that will supplement, and perhaps sometimes displace, a priori approaches
Pinker, Steven (online). Could a computer ever be conscious?   (Google)
Prinz, Jesse J. (2003). Level-headed mysterianism and artificial experience. Journal of Consciousness Studies 10 (4-5):111-132.   (Cited by 8 | Google)
Puccetti, Roland (1975). God and the robots: A philosophical fable. Personalist 56:29-30.   (Google)
Puccetti, Roland (1967). On thinking machines and feeling machines. British Journal for the Philosophy of Science 18 (May):39-51.   (Cited by 3 | Annotation | Google | More links)
Putnam, Hilary (1964). Robots: Machines or artificially created life? Journal of Philosophy 61 (November):668-91.   (Annotation | Google)
Rhodes, Kris (ms). Vindication of the Rights of Machine.   (Google | More links)
Abstract: In this paper, I argue that certain Machines can have rights independently of whether they are sentient, or conscious, or whatever you might call it.
Robinson, William S. (1998). Could a robot be qualitatively conscious? AISB 99:13-18.   (Google)
Sanz, Ricardo; López, Ignacio & Bermejo-Alonso, Julita (2007). A rationale and vision for machine consciousness in complex controllers. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
Schlagel, Richard H. (1999). Why not artificial consciousness or thought? Minds and Machines 9 (1):3-28.   (Cited by 6 | Google | More links)
Abstract:   The purpose of this article is to show why consciousness and thought are not manifested in digital computers. Analyzing the rationale for claiming that the formal manipulation of physical symbols in Turing machines would emulate human thought, the article attempts to show why this proved false. This is because the reinterpretation of designation and meaning to accommodate physical symbol manipulation eliminated their crucial functions in human discourse. Words have denotations and intensional meanings because the brain transforms the physical stimuli received from the microworld into a qualitative, macroscopic representation for consciousness. Lacking this capacity as programmed machines, computers have no representations for their symbols to designate and mean. Unlike human beings in which consciousness and thought, with their inherent content, have emerged because of their organic natures, serial processing computers or parallel distributed processing systems, as programmed electrical machines, lack these causal capacities
Scriven, Michael (1953). The mechanical concept of mind. Mind 62 (April):230-240.   (Cited by 12 | Annotation | Google | More links)
Shanon, Benny (1991). Consciousness and the computer: A reply to Henley. Journal of Mind and Behavior 12 (3):371-375.   (Google)
Sharlow, Mark F. (ms). Can machines have first-person properties?   (Google)
Abstract: One of the most important ongoing debates in the philosophy of mind is the debate over the reality of the first-person character of consciousness.[1] Philosophers on one side of this debate hold that some features of experience are accessible only from a first-person standpoint. Some members of this camp, notably Frank Jackson, have maintained that epiphenomenal properties play roles in consciousness [2]; others, notably John R. Searle, have rejected dualism and regarded mental phenomena as entirely biological.[3] In the opposite camp are philosophers who hold that all mental capacities are in some sense computational - or, more broadly, explainable in terms of features of information processing systems.[4] Consistent with this explanatory agenda, members of this camp normally deny that any aspect of mind is accessible solely from a first-person standpoint. This denial sometimes goes very far - even as far as Dennett's claim that the phenomenology of conscious experience does not really exist
Simon, Michael A. (1969). Could there be a conscious automaton? American Philosophical Quarterly 6 (January):71-78.   (Google)
Sloman, Aaron & Chrisley, Ronald L. (2003). Virtual machines and consciousness. Journal of Consciousness Studies 10 (4-5):133-172.   (Cited by 26 | Google | More links)
Smart, J. J. C. (1959). Professor Ziff on robots. Analysis 19 (April):117-118.   (Cited by 3 | Google)
Smart, Ninian (1959). Robots incorporated. Analysis 19 (April):119-120.   (Cited by 3 | Google)
Stuart, Susan A. J. (2007). Machine consciousness: Cognitive and kinaesthetic imagination. Journal of Consciousness Studies 14 (7):141-153.   (Cited by 1 | Google | More links)
Abstract: Machine consciousness exists already in organic systems and it is only a matter of time -- and some agreement -- before it will be realised in reverse-engineered organic systems and forward- engineered inorganic systems. The agreement must be over the preconditions that must first be met if the enterprise is to be successful, and it is these preconditions, for instance, being a socially-embedded, structurally-coupled and dynamic, goal-directed entity that organises its perceptual input and enacts its world through the application of both a cognitive and kinaesthetic imagination, that I shall concentrate on presenting in this paper. It will become clear that these preconditions will present engineers with a tall order, but not, I will argue, an impossible one. After all, we might agree with Freeman and Núñez's claim that the machine metaphor has restricted the expectations of the cognitive sciences (Freeman & Núñez, 1999); but it is a double-edged sword, since our limited expectations about machines also narrow the potential of our cognitive science
Stubenberg, Leopold (1992). What is it like to be Oscar? Synthese 90 (1):1-26.   (Cited by 1 | Annotation | Google | More links)
Abstract:   Oscar is going to be the first artificial person — at any rate, he is going to be the first artificial person to be built in Tucson's Philosophy Department. Oscar's creator, John Pollock, maintains that once Oscar is complete he will experience qualia, will be self-conscious, will have desires, fears, intentions, and a full range of mental states (Pollock 1989, pp. ix–x). In this paper I focus on what seems to me to be the most problematical of these claims, viz., that Oscar will experience qualia. I argue that we have not been given sufficient reasons to believe this bold claim. I doubt that Oscar will enjoy qualitative conscious phenomena and I maintain that it will be like nothing to be Oscar
Tagliasco, Vincenzo (2007). Artificial consciousness: A technological discipline. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
Taylor, John G. (2007). Through machine attention to machine consciousness. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
Thompson, David L. (1965). Can a machine be conscious? British Journal for the Philosophy of Science 16 (May):33-43.   (Annotation | Google | More links)
Thompson, William I. (2003). The Borg or Borges? In Owen Holland (ed.), Machine Consciousness. Imprint Academic.   (Cited by 2 | Google | More links)
Torrance, Steve (2007). Two conceptions of machine phenomenality. Journal of Consciousness Studies 14 (7):154-166.   (Cited by 2 | Google | More links)
Abstract: Current approaches to machine consciousness (MC) tend to offer a range of characteristic responses to critics of the enterprise. Many of these responses seem to marginalize phenomenal consciousness, by presupposing a 'thin' conception of phenomenality. This conception is, we will argue, largely shared by anti-computationalist critics of MC. On the thin conception, physiological or neural or functional or organizational features are secondary accompaniments to consciousness rather than primary components of consciousness itself. We outline an alternative, 'thick' conception of phenomenality. This shows some signposts in the direction of a more adequate approach to MC.
Tson, M. E. (ms). From Dust to Descartes: A Mechanical and Evolutionary Explanation of Consciousness and Self-Awareness.   (Google)
Abstract: Beginning with physical reactions as simple and mechanical as rust, From Dust to Descartes goes step by evolutionary step to explore how the most remarkable and personal aspects of consciousness have arisen, how our awareness of the world and of ourselves differs from that of other species, and whether machines could ever become self-aware. Part I addresses a newborn’s innate abilities. Part II shows how, with these and experience, we can form expectations about the world. Part III concentrates on the essential role that others play in the formation of self-awareness. Part IV then explores what follows from this explanation of human consciousness, touching on topics such as free will, personality, intelligence, and color perception, which are often associated with self-awareness and the philosophy of mind.
Van de Vate, D. (1971). The problem of robot consciousness. Philosophy and Phenomenological Research 32:149-65.   (Google)
Wallace, Rodrick (2006). Pitfalls in biological computing: Canonical and idiosyncratic dysfunction of conscious machines. Mind and Matter 4 (1):91-113.   (Cited by 7 | Google | More links)
Abstract: The central paradigm of artificial intelligence is rapidly shifting toward biological models for both robotic devices and systems performing such critical tasks as network management, vehicle navigation, and process control. Here we use a recent mathematical analysis of the necessary conditions for consciousness in humans to explore likely failure modes inherent to a broad class of biologically inspired computing machines. Analogs to developmental psychopathology, in which regulatory mechanisms for consciousness fail progressively and subtly under stress, and to inattentional blindness, where a narrow 'syntactic band pass' defined by the rate distortion manifold of conscious attention results in pathological fixation, seem inevitable. Similar problems are likely to confront other possible architectures, although their mathematical description may be far less straightforward. Computing devices constructed on biological paradigms will inevitably lack the elaborate, but poorly understood, system of control mechanisms which has evolved over the last few hundred million years to stabilize consciousness in higher animals. This will make such machines prone to insidious degradation and, ultimately, catastrophic failure.
Ziff, P. (1959). The feelings of robots. Analysis 19 (January):64-68.   (Cited by 11 | Annotation | Google)