Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.

6.2d. AI without Representation?

Andrews, Kristin (web). Critter psychology: On the possibility of nonhuman animal folk psychology. In Daniel D. Hutto & Matthew Ratcliffe (eds.), Folk Psychology Re-Assessed. Kluwer/Springer Press.
Abstract: Humans have a folk psychology, without question. Paul Churchland used the term to describe “our commonsense conception of psychological phenomena” (Churchland 1981, p. 67), whatever that may be. When we ask the question whether animals have their own folk psychology, we’re asking whether any other species has a commonsense conception of psychological phenomena as well. Different versions of this question have been discussed over the past 25 years, but no clear answer has emerged. Perhaps one reason for this lack of progress is that we don’t clearly understand the question. In asking whether animals have folk psychology, I hope to help clarify the concept of folk psychology itself, and in the process, to gain a greater understanding of the role of belief and desire attribution in human social interaction.
Bechtel, William P. (1996). Yet another revolution? Defusing the dynamical system theorists' attack on mental representations. Presidential Address to the Society for Philosophy and Psychology.   (Cited by 1)
Brooks, Rodney (1991). Intelligence without representation. Artificial Intelligence 47:139-159.   (Cited by 2501)
Abstract: Artificial intelligence research has foundered on the issue of representation. When intelligence is approached in an incremental manner, with strict reliance on interfacing to the real world through perception and action, reliance on representation disappears. In this paper we outline our approach to incrementally building complete intelligent Creatures. The fundamental decomposition of the intelligent system is not into independent information processing units which must interface with each other via representations. Instead, the intelligent system is decomposed into independent and parallel activity producers which all interface directly to the world through perception and action, rather than interface to each other particularly much. The notions of central and peripheral systems evaporate: everything is both central and peripheral. Based on these principles we have built a very successful series of mobile robots which operate without supervision as Creatures in standard office environments.
Clark, Andy & Toribio, Josefa (1994). Doing without representing. Synthese 101 (3):401-31.   (Cited by 97)
Abstract: Connectionism and classicism, it generally appears, have at least this much in common: both place some notion of internal representation at the heart of a scientific study of mind. In recent years, however, a much more radical view has gained increasing popularity. This view calls into question the commitment to internal representation itself. More strikingly still, this new wave of anti-representationalism is rooted not in armchair theorizing but in practical attempts to model and understand intelligent, adaptive behavior. In this paper we first present, and then critically assess, a variety of recent anti-representationalist treatments. We suggest that so far, at least, the sceptical rhetoric outpaces both evidence and argument. Some probable causes of this premature scepticism are isolated. Nonetheless, the anti-representationalist challenge is shown to be both important and progressive insofar as it forces us to see beyond the bare representational/non-representational dichotomy and to recognize instead a rich continuum of degrees and types of representationality.
Dennett, Daniel C. (1989). Cognitive ethology. In Goals, No-Goals and Own Goals. Unwin Hyman.   (Cited by 15)
Abstract: The field of Artificial Intelligence has produced so many new concepts--or at least vivid and more structured versions of old concepts--that it would be surprising if none of them turned out to be of value to students of animal behavior. Which will be most valuable? I will resist the temptation to engage in either prophecy or salesmanship; instead of attempting to answer the question: "How might Artificial Intelligence inform the study of animal behavior?" I will concentrate on the obverse: "How might the study of animal behavior inform research in Artificial Intelligence?"
Keijzer, Fred A. (1998). Doing without representations which specify what to do. Philosophical Psychology 11 (3):269-302.   (Cited by 15)
Abstract: A discussion is going on in cognitive science about the use of representations to explain how intelligent behavior is generated. In the traditional view, an organism is thought to incorporate representations. These provide an internal model that is used by the organism to instruct the motor apparatus so that the adaptive and anticipatory characteristics of behavior come about. So-called interactionists claim that this representational specification of behavior raises more problems than it solves. In their view, the notion of internal representational models is to be dispensed with. Instead, behavior is to be explained as the intricate interaction between an embodied organism and the specific make-up of an environment. The problem with a non-representational interactive account is that it has severe difficulties with anticipatory, future-oriented behavior. The present paper extends the interactionist conceptual framework by drawing on ideas derived from the study of morphogenesis. This extended interactionist framework is based on an analysis of anticipatory behavior as a process which involves multiple spatio-temporal scales of neural, bodily and environmental dynamics. This extended conceptual framework provides the outlines for an explanation of anticipatory behavior without involving a representational specification of future goal states.
Kirsh, David (1991). Today the earwig, tomorrow man? Artificial Intelligence 47:161-184.   (Cited by 111)
Abstract: A startling amount of intelligent activity can be controlled without reasoning or thought. By tuning the perceptual system to task relevant properties a creature can cope with relatively sophisticated environments without concepts. There is a limit, however, to how far a creature without concepts can go. Rod Brooks, like many ecologically oriented scientists, argues that the vast majority of intelligent behaviour is concept-free. To evaluate this position I consider what special benefits accrue to concept-using creatures. Concepts are either necessary for certain types of perception, learning, and control, or they make those processes computationally simpler. Once a creature has concepts its capacities are vastly multiplied.
Millikan, Ruth G. (online). On reading signs.   (Cited by 1)
Abstract: On Reading Signs: Some Differences between Us and The Others. If there are certain kinds of signs that an animal cannot learn to interpret, that might be for any of a number of reasons. It might be, first, because the animal cannot discriminate the signs from one another. For example, although human babies learn to discriminate human speech sounds according to the phonological structures of their native languages very easily, it may be that few if any other animals are capable of fully grasping the phonological structures of human languages. If an animal cannot learn to interpret certain signs it might be, second, because the decoding is too difficult for it. It could be, for example, that some animals are incapable of decoding signs that exhibit syntactic embedding, or signs that are spread out over time as opposed to over space. Problems of these various kinds might be solved by using another sign system, say, gestures rather than noises, or visual icons laid out in spatial order, or by separating out embedded propositions and presenting each separately. But a more interesting reason that an animal might be incapable of understanding a sign would be that it lacked mental representations of the necessary kind. It might be incapable of representing mentally what the sign conveys. When discussing what signs animals can understand or …
Müller, Vincent C. (2007). Is there a future for AI without representation? Minds and Machines 17 (1).
Abstract: This paper investigates the prospects of Rodney Brooks’ proposal for AI without representation. It turns out that the supposedly characteristic features of “new AI” (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: “new AI” is just like old AI. Brooks’ proposal boils down to the architectural rejection of central control in intelligent agents, which, however, turns out to be crucial. Some more recent cognitive science suggests that we might do well to dispose of the image of intelligent agents as central representation processors. If this paradigm shift is achieved, Brooks’ proposal for cognition without representation appears promising for full-blown intelligent agents, though not for conscious agents.
van Gelder, Tim (1995). What might cognition be if not computation? Journal of Philosophy 92 (7):345-81.   (Cited by 266)
Wallis, Peter (2004). Intention without representation. Philosophical Psychology 17 (2):209-223.   (Cited by 3)
Abstract: A mechanism for planning ahead would appear to be essential to any creature with more than insect-level intelligence. In this paper it is shown how planning, using full means-ends analysis, can be had while avoiding the so-called symbol grounding problem. The key role of knowledge representation in intelligence has been acknowledged since at least the Enlightenment, but the advent of the computer has made it possible to explore the limits of alternate schemes, and to explore the nature of our everyday understanding of the world around us. In particular, artificial intelligence (AI) and robotics have forced a close examination, by people other than philosophers, of what it means to say, for instance, that "snow is white." One interpretation of the "new AI" is that it is questioning the need for representation altogether. Brooks and others have shown how a range of intelligent behaviors can be had without representation, and this paper goes one step further, showing how intending to do things can be achieved without symbolic representation. The paper gives a concrete example of a mechanism in terms of robots that play soccer. It describes a belief, desire and intention (BDI) architecture that plans in terms of activities. The result is a situated agent that plans to do things with no more ontological commitment than the reactive systems Brooks described in his seminal paper, "Intelligence without Representation."
Webber, Jonathan (2002). Doing without representation: Coping with Dreyfus. Philosophical Explorations 5 (1):82-88.
Abstract: Hubert Dreyfus argues that the traditional and currently dominant conception of an action, as an event initiated or governed by a mental representation of a possible state of affairs that the agent is trying to realise, is inadequate. If Dreyfus is right, then we need a new conception of action. I argue, however, that the considerations Dreyfus adduces show only that an action need not be initiated or governed by a conceptual representation, but, since a representation need not be conceptually structured, they do not show that we need a conception of action that does not involve representation.