This mode searches for entries containing all the entered words in their title, author, date, or comment fields, or in any of the many other fields shown on OPC pages.
This mode searches for entries containing the text string you entered in their author field. Note that the database does not have first names for all authors, so it is preferable to search by surname only. If you search for a full name or a name with an initial, enter it in the format used internally, namely "Lastname, Firstname" or "Lastname, F."
This mode differs from the all-fields mode in two respects. First, it searches some information not publicly available on the site, e.g., abstracts and excerpts gathered by the crawler, which are not always accurate but can help broaden a search. Second, you may prefix any term with a '+' or '-' to restrict the search to entries containing or excluding it, respectively. Terms not prefixed by a '+' are not mandatory; instead, they are weighted according to their frequency in order to determine the best search results. You may also search for a literal string of several words by enclosing it in double quotation marks (").
Note that short and/or common words are ignored by the search engine.
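The filtering part of this syntax can be sketched as follows. This is an illustration only, not PhilPapers' actual implementation: the function name, entry text, and query are assumptions. Terms prefixed with '+' are required, terms prefixed with '-' are excluded, and a double-quoted phrase is kept together as one literal token; the frequency-based weighting of unprefixed terms is omitted here.

```python
# Hypothetical sketch of the '+' / '-' search-term logic described above.
import shlex

def matches(entry_text: str, query: str) -> bool:
    """Return True if entry_text satisfies the required/excluded terms."""
    text = entry_text.lower()
    # shlex.split keeps a double-quoted phrase together as a single token
    for term in shlex.split(query.lower()):
        if term.startswith("+") and term[1:] not in text:
            return False  # a required term is missing
        if term.startswith("-") and term[1:] in text:
            return False  # an excluded term is present
        # unprefixed terms (including quoted phrases) would only affect
        # ranking, which this sketch does not implement
    return True

print(matches("a theory of machine consciousness",
              '+consciousness -quantum "machine consciousness"'))  # True
```

A real engine would also drop short and common words before this step and score the remaining unprefixed terms for ranking.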
Try PhilPapers to find published items which are available on a subscription basis.
Abstract: This short paper (4 pages) demonstrates how subjective experience, language, and consciousness can be explained in terms of abilities we share with the simplest of creatures, specifically the ability to detect, react to, and associate various aspects of the world.
Abstract: Why should one believe that conscious awareness is solely the result of organizational complexity? What is the connection between consciousness and combinatorics: the transformation of quantity into quality? The claim that the former is reducible to the latter seems unconvincing—as unlike as chalk and cheese! In his book Penrose is at least attempting to compare like with like: the enigma of consciousness with the progress of physics.
Abstract: We consider the problem of executing conscious behavior, i.e., of driving an agent’s actions and of allowing it, at the same time, to run concurrent processes reflecting on these actions. Toward this end, we express a single agent’s plans as reflexive dialogs in a multi-agent system defined by a virtual machine. We extend this machine’s planning language by introducing two specific operators for reflexive dialogs, i.e., conscious and caught, for monitoring beliefs and actions, respectively. The possibility of using the same language both to drive a machine and to establish a reflexive communication within the machine itself stands as a key feature of our model.
Abstract: Zlatev offers surprisingly weak reasoning in support of his view that robots with the right kind of developmental histories can have meaning. We ought nonetheless to praise Zlatev for an impressionistic account of how attending to the psychology of human development can help us build robots that appear to have intentionality.
Abstract: The best reason for believing that robots might some day become conscious is that we human beings are conscious, and we are a sort of robot ourselves. That is, we are extraordinarily complex self-controlling, self-sustaining physical mechanisms, designed over the eons by natural selection, and operating according to the same well-understood principles that govern all the other physical processes in living things: digestive and metabolic processes, self-repair and reproductive processes, for instance. It may be wildly over-ambitious to suppose that human artificers can repeat Nature's triumph, with variations in material, form, and design process, but this is not a deep objection. It is not as if a conscious machine contradicted any fundamental laws of nature, the way a perpetual motion machine does. Still, many skeptics believe--or in any event want to believe--that it will never be done. I wouldn't wager against them, but my reasons for skepticism are mundane, economic reasons, not theoretical reasons.
Abstract: Arguments about whether a robot could ever be conscious have been conducted up to now in the factually impoverished arena of what is possible "in principle." A team at MIT of which I am a part is now embarking on a long-term project to design and build a humanoid robot, Cog, whose cognitive talents will include speech, eye-coordinated manipulation of objects, and a host of self-protective, self-regulatory and self-exploring activities. The aim of the project is not to make a conscious robot, but to make a robot that can interact with human beings in a robust and versatile manner in real time, take care of itself, and tell its designers things about itself that would otherwise be extremely difficult if not impossible to determine by examination. Many of the details of Cog's "neural" organization will parallel what is known (or presumed known) about their counterparts in the human brain, but the intended realism of Cog as a model is relatively coarse-grained, varying opportunistically as a function of what we think we know, what we think we can build, and what we think doesn't matter. Much of what we think will of course prove to be mistaken; that is one advantage of real experiments over thought experiments.
Abstract: What type of artificial systems will claim to be conscious and will claim to experience qualia? The ability to comment upon physical states of a brain-like dynamical system coupled with its environment seems to be sufficient to make such claims. The flow of internal states in such a system, guided and limited by associative memory, is similar to the stream of consciousness. Minimal requirements for an artificial system that will claim to be conscious are given in the form of a specific architecture named an articon. Nonverbal discrimination of the working memory states of the articon gives it the ability to experience different qualities of internal states. Analysis of the inner state flows of such a system during a typical behavioral process shows that qualia are inseparable from perception and action. The role of consciousness in the learning of skills, when conscious information processing is replaced by subconscious processing, is elucidated. Arguments confirming that phenomenal experience is a result of cognitive processes are presented. Possible philosophical objections based on the Chinese room and other arguments are discussed, but they are insufficient to refute the articon’s claims. Conditions for genuine understanding that go beyond the Turing test are presented. Articons may fulfill such conditions, and in principle the structure of their experiences may be arbitrarily close to human.
Abstract: A "machine" is any causal physical system, hence we are machines, hence machines can be conscious. The question is: which kinds of machines can be conscious? Chances are that robots that can pass the Turing Test -- completely indistinguishable from us in their behavioral capacities -- can be conscious (i.e. feel), but we can never be sure (because of the "other-minds" problem). And we can never know HOW they have minds, because of the "mind/body" problem. We can only know how they pass the Turing Test, but not how, why, or whether that makes them feel.
Abstract: Recent empirical work calls into question the so-called Simple View that an agent who A’s intentionally intends to A. In experimental studies, ordinary speakers frequently assent to claims that, in certain cases, agents who knowingly behave wrongly intentionally bring about the harm they do; yet the speakers tend to deny that it was the intention of those agents to cause the harm. This paper reports two additional studies that at first appear to support the original ones, but argues that in fact, the evidence of all the studies considered is best understood in terms of the Simple View.
Abstract: In AI, consciousness of self consists in a program having certain kinds of facts about its own mental processes and state of mind. We discuss what consciousness of its own mental structures a robot will need in order to operate in the common sense world and accomplish the tasks humans will give it. It's quite a lot. Many features of human consciousness will be wanted, some will not, and some abilities not possessed by humans have already been found feasible and useful in limited contexts. We give preliminary fragments of a logical language a robot can use to represent information about its own state of mind. A robot will often have to conclude that it cannot decide a question on the basis of the information in memory and therefore must seek information externally. Gödel's idea of relative consistency is used to formalize non-knowledge. Programs with the kind of consciousness discussed in this article do not yet exist, although programs with some components of it exist. Thinking about consciousness with a view to designing it provides a new approach to some of the problems of consciousness studied by philosophers. One advantage is that it focuses on the aspects of consciousness important for intelligent behavior.
Abstract: In this commentary, I discuss the three main articles in this volume that present survey data relevant to a search for something that might merit the label “the folk concept of intentional action” – the articles by Joshua Knobe and Arudra Burra, Bertram Malle, and Thomas Nadelhoffer. My guiding question is this: What shape might we find in an analysis of intentional action that takes at face value the results of all of the relevant surveys about vignettes discussed in these three articles? To simplify exposition, I assume that there is something that merits the label I mentioned.
Abstract: Current research on artificial consciousness is focused on phenomenal consciousness and on functional consciousness. We propose to shift the focus to self-consciousness in order to open new areas of investigation. We use an existing scenario in which self-consciousness is considered the result of an evolution of representations. Application of the scenario to the possible build-up of a conscious robot also introduces questions relating to emotions in robots. Areas of investigation are proposed as a continuation of this approach.
Abstract: Philosophical discussion of the nature of know-how has focused on the relation between know-how and ability. Broadly speaking, neo-Ryleans attempt to identify know-how with a certain type of ability, while, traditionally, intellectualists attempt to reduce it to some form of propositional knowledge. For our purposes, however, this characterization of the debate is too crude. Instead, we prefer the following more explicit taxonomy. Anti-intellectualists, as we will use the term, maintain that knowing how to φ entails the ability to φ. Dispositionalists maintain that the ability to φ is sufficient (modulo some fairly innocuous constraints) for knowing how to φ. Intellectualists, as we will use the term, deny the anti-intellectualist claim. Finally, radical intellectualists deny both the anti-intellectualist and dispositionalist claims. Pace neo-Ryleans (who in our taxonomy are those who accept both dispositionalism and anti-intellectualism), radical intellectualists maintain that the ability to φ is neither necessary nor sufficient for knowing how to φ.
Abstract: Analytic philosophers have long used a priori methods to characterize folk concepts like knowledge, belief, and wrongness. Recently, researchers have begun to exploit social scientific methodologies to characterize such folk concepts. One line of work has explored folk intuitions on cases that are disputed within philosophy. A second approach, with potentially more radical implications, applies the methods of cross-cultural psychology to philosophical intuitions. Recent work suggests that people in different cultures have systematically different intuitions surrounding folk concepts like wrong, knows, and refers. A third strand of research explores the emergence and character of folk concepts in children. These approaches to characterizing folk concepts provide important resources that will supplement, and perhaps sometimes displace, a priori approaches.
Abstract: The purpose of this article is to show why consciousness and thought are not manifested in digital computers. Analyzing the rationale for claiming that the formal manipulation of physical symbols in Turing machines would emulate human thought, the article attempts to show why this proved false. This is because the reinterpretation of designation and meaning to accommodate physical symbol manipulation eliminated their crucial functions in human discourse. Words have denotations and intensional meanings because the brain transforms the physical stimuli received from the microworld into a qualitative, macroscopic representation for consciousness. Lacking this capacity as programmed machines, computers have no representations for their symbols to designate and mean. Unlike human beings in which consciousness and thought, with their inherent content, have emerged because of their organic natures, serial processing computers or parallel distributed processing systems, as programmed electrical machines, lack these causal capacities
Abstract: One of the most important ongoing debates in the philosophy of mind is the debate over the reality of the first-person character of consciousness. Philosophers on one side of this debate hold that some features of experience are accessible only from a first-person standpoint. Some members of this camp, notably Frank Jackson, have maintained that epiphenomenal properties play roles in consciousness; others, notably John R. Searle, have rejected dualism and regarded mental phenomena as entirely biological. In the opposite camp are philosophers who hold that all mental capacities are in some sense computational - or, more broadly, explainable in terms of features of information processing systems. Consistent with this explanatory agenda, members of this camp normally deny that any aspect of mind is accessible solely from a first-person standpoint. This denial sometimes goes very far - even as far as Dennett's claim that the phenomenology of conscious experience does not really exist.
Abstract: Beginning with physical reactions as simple and mechanical as rust, From Dust to Descartes goes step by evolutionary step to explore how the most remarkable and personal aspects of consciousness have arisen, how our awareness of the world and of ourselves differs from that of other species, and whether machines could ever become self-aware.
Part I addresses a newborn’s innate abilities. Part II shows how, with these and experience, we can form expectations about the world. Part III concentrates on the essential role that others play in the formation of self-awareness. Part IV then explores what follows from this explanation of human consciousness, touching on topics such as free will, personality, intelligence, and color perception, which are often associated with self-awareness and the philosophy of mind.