MindPapers is now part of PhilPapers: online research in philosophy, a new service with many more features.
Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.

6.4h. Robotics (Robotics on PhilPapers)

Beavers, Anthony F., Between angels and animals: The question of robot ethics, or is Kantian moral agency desirable?
Abstract: In this paper, I examine a variety of agents that appear in Kantian ethics in order to determine which would be necessary to make a robot a genuine moral agent. However, building such an agent would require that we structure into a robot’s behavioral repertoire the possibility for immoral behavior, for only then can the moral law, according to Kant, manifest itself as an ought, a prerequisite for being able to hold an agent morally accountable for its actions. Since building a moral robot requires the possibility of immoral behavior, I go on to argue that we cannot morally want robots to be genuine moral agents, but only beings that simulate moral behavior. Finally, I raise, but do not answer, the question of why, if morality requires us to want robots that are not genuine moral agents, we should want something different in the case of human beings.
Breazeal, C. & Brooks, Rodney (2004). Robot emotions: A functional perspective. In J. Fellous (ed.), Who Needs Emotions. Oxford University Press.
Brooks, Rodney A. & Stein, Lynn Andrea (1994). Building brains for bodies. Autonomous Robots 1 (1):7-25.   (Cited by 281)
Abstract: We describe a project to capitalize on newly available levels of computational resources in order to understand human cognition. We are building an integrated physical system including vision, sound input and output, and dextrous manipulation, all controlled by a continuously operating large scale parallel MIMD computer. The resulting system will learn to "think" by building on its bodily experiences to accomplish progressively more abstract tasks. Past experience suggests that in attempting to build such an integrated system we will have to fundamentally change the way artificial intelligence, cognitive science, linguistics, and philosophy think about the organization of intelligence. We expect to be able to better reconcile the theories that will be developed with current work in neuroscience.
Brooks, Rodney (1991). Challenges for Complete Creature Architectures. In Jean-Arcady Meyer & Stewart W. Wilson (eds.), From Animals to Animats: Proceedings of The First International Conference on Simulation of Adaptive Behavior (Complex Adaptive Systems). MIT Press.   (Cited by 71)
Abstract: It is impossible to do good science without having an appreciation for the problems and concepts in the other levels of abstraction (at least in the direction from biology towards physics), but there are whole sets of tools, methods of analysis, theories and explanations within each discipline which do not cross those boundaries.
Brooks, Rodney A.; Breazeal, Cynthia; Marjanovic, Matthew; Scassellati, Brian & Williamson, Matthew (1999). The cog project: Building a humanoid robot. Lecture Notes in Computer Science 1562:52-87.   (Cited by 302)
Abstract: To explore issues of developmental structure, physical embodiment, integration of multiple sensory and motor systems, and social interaction, we have constructed an upper-torso humanoid robot called Cog. The robot has twenty-one degrees of freedom and a variety of sensory systems, including visual, auditory, vestibular, kinesthetic, and tactile senses. This chapter gives a background on the methodology that we have used in our investigations, highlights the research issues that have been raised during this project, and provides a summary of both the current state of the project and our long-term goals. We report on a variety of implemented visual-motor routines (smooth-pursuit tracking, saccades, binocular vergence, and vestibular-ocular and opto-kinetic reflexes), orientation behaviors, motor control techniques, and social behaviors (pointing to a visual target, recognizing joint attention through face and eye finding, imitation of head nods, and regulating interaction through expressive feedback). We further outline a number of areas for future research that will be necessary to build a complete embodied system.
Bryson, Joanna J. (2006). The attentional spotlight (dennett and the cog project). Minds and Machines 16 (1):21-28.
Cardon, Alain (2006). Artificial consciousness, artificial emotions, and autonomous robots. Cognitive Processing 7 (4):245-267.
Chella, Antonio (2007). Towards robot conscious perception. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.
Clancey, William (1995). How situated cognition is different from situated robotics. In Luc Steels & Rodney Brooks (eds.), The "Artificial Life" Route to "Artificial Intelligence": Building Situated Embodied Agents. Hillsdale, NJ: Lawrence Erlbaum Associates.
Clark, Andy & Grush, Rick (1999). Towards a cognitive robotics. Adaptive Behavior 7 (1):5-16.   (Cited by 73)
Abstract: There is a definite challenge in the air regarding the pivotal notion of internal representation. This challenge is explicit in, e.g., van Gelder, 1995; Beer, 1995; Thelen & Smith, 1994; Wheeler, 1994; and elsewhere. We think it is a challenge that can be met and that (importantly) can be met by arguing from within a general framework that accepts many of the basic premises of the work (in new robotics and in dynamical systems theory) that motivates such scepticism in the first place. Our strategy will be as follows. We begin (Section 1) by offering an account (an example and something close to a definition) of what we shall term Minimal Robust Representationalism (MRR). Sections 2 & 3 address some likely worries and questions about this notion. We end (Section 4) by making explicit the conditions under which, on our account, a science (e.g., robotics) may claim to be addressing cognitive phenomena.
Dautenhahn, Kerstin; Ogden, Bernard; Quick, Tom & Ziemke, Tom (2002). From embodied to socially embedded agents: Implications for interaction-aware robots. Cognitive Systems Research 3 (1):397-427.   (Cited by 50)
Dennett, Daniel C. (ms). Cog as a thought experiment.   (Cited by 3)
Abstract: In her presentation at the Monte Verità workshop, Maja Mataric showed us a videotape of her robots cruising together through the lab, and remarked, aptly: "They're flocking, but that's not what they think they're doing." This is a vivid instance of a phenomenon that lies at the heart of all the research I learned about at Monte Verità: the execution of surprisingly successful "cognitive" behaviors by systems that did not explicitly represent, and did not need to explicitly represent, what they were doing. How "high" in the intuitive scale of cognitive sophistication can such unwitting prowess reach? All the way, apparently, since I want to echo Maja's observation with one of my own: "These roboticists are doing philosophy, but that's not what they think they're doing." It is possible, then, even to do philosophy, that most intellectual of activities, without realizing that that is what you are doing. It is even possible to do it well, for this is a good, new way of addressing antique philosophical puzzles.
Dennett, Daniel C. (1995). Cog: Steps toward consciousness in robots. In Thomas Metzinger (ed.), Conscious Experience. Ferdinand Schoningh.   (Cited by 3)
Elton, Matthew (1997). Robots and rights: The ethical demands of artificial agents. Ends and Means 1 (2).   (Cited by 4)
Gips, James (1994). Toward the ethical robot. In Kenneth M. Ford, C. Glymour & Patrick Hayes (eds.), Android Epistemology. MIT Press.   (Cited by 19)
Hesslow, Germund & Jirenhed, D-A. (2007). The inner world of a simple robot. Journal of Consciousness Studies 14 (7):85-96.
Abstract: The purpose of the paper is to discuss whether a particular robot can be said to have an 'inner world', something that can be taken to be a critical feature of consciousness. It has previously been argued that the mechanism underlying the appearance of an inner world in humans is an ability of our brains to simulate behaviour and perception. A robot has previously been designed in which perception can be simulated. A prima facie case can be made that this robot has an inner world in the same sense as humans. Various objections to this claim are discussed in the paper and it is concluded that the robot, although extremely simple, can easily be improved without adding any new principles, so that ascribing an inner world to it becomes intuitively reasonable.
Holland, Owen & Goodman, Russell B. (2003). Robots with internal models: A route to machine consciousness? Journal of Consciousness Studies 10 (4):77-109.   (Cited by 20)
Holland, Owen; Knight, Rob & Newcombe, Richard (2007). The role of the self process in embodied machine consciousness. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.
Ishiguro, Hiroshi (2006). Android science: Conscious and subconscious recognition. Connection Science 18 (4):319-332.   (Cited by 14)
Kitamura, T.; Tahara, T. & Asami, K. (2000). How can a robot have consciousness? Advanced Robotics 14:263-275.   (Cited by 6)
Kitamura, T. (2002). What is the self of a robot? On a consciousness architecture for a mobile robot as a model of human consciousness. In Kunio Yasue, Mari Jibu & Tarcisio Della Senta (eds.), No Matter, Never Mind. John Benjamins.
Korienek, Gene & Uzgalis, William L. (2002). Adaptable robots. Metaphilosophy 33 (1-2):83-97.   (Cited by 1)
Lacey, Nicola & Lee, M. (2003). The epistemological foundations of artificial agents. Minds and Machines 13 (3):339-365.   (Cited by 1)
Abstract: A situated agent is one which operates within an environment. In most cases, the environment in which the agent exists will be more complex than the agent itself. This means that an agent, human or artificial, which wishes to carry out non-trivial operations in its environment must use techniques which allow an unbounded world to be represented within a cognitively bounded agent. We present a brief description of some important theories within the fields of epistemology and metaphysics. We then discuss ways in which philosophical problems of scepticism are related to the problems faced by knowledge representation. We suggest that some of the methods that philosophers have developed to address the problems of epistemology may be relevant to the problems of representing knowledge within artificial agents.
Menant, Christophe (2005). Information and meaning in life, humans and robots. Proceedings of FIS2005, MDPI, Basel, Switzerland.
Abstract: Information and meaning exist around us and within ourselves, and the same information can correspond to different meanings. This is true for humans and animals, and is becoming true for robots. We propose here an overview of this subject by using a systemic tool related to meaning generation that has already been published (C. Menant, Entropy 2003). The Meaning Generator System (MGS) is a system submitted to a constraint that generates meaningful information when it receives incident information that has a relation with the constraint. The content of the meaningful information is made explicit, and its function is to trigger an action that will be used to satisfy the constraint of the system. The MGS has been introduced in the case of basic life submitted to a "stay alive" constraint. We propose here to see how the usage of the MGS can be extended to more complex living systems, to humans and to robots by introducing new types of constraints, and integrating the MGS into higher level systems. The application of the MGS to humans is partly based on a scenario relative to the evolution of body self-awareness toward self-consciousness that has already been presented (C. Menant, Biosemiotics 2003, and TSC 2004). The application of the MGS to robots is based on the definition of the MGS applied to robots functionality, taking into account the origins of the constraints. We conclude with a summary of this overview and with themes that can be linked to this systemic approach on meaning generation.
Minsky, Marvin L. (1994). Will robots inherit the earth? Scientific American (Oct).   (Cited by 37)
Abstract: Everyone wants wisdom and wealth. Nevertheless, our health often gives out before we achieve them. To lengthen our lives, and improve our minds, in the future we will need to change our bodies and brains. To that end, we first must consider how normal Darwinian evolution brought us to where we are. Then we must imagine ways in which future replacements for worn body parts might solve most problems of failing health. We must then invent strategies to augment our brains and gain greater wisdom. Eventually we will entirely replace our brains, using nanotechnology. Once delivered from the limitations of biology, we will be able to decide the length of our lives, with the option of immortality, and choose among other, unimagined capabilities as well.
Moravec, Hans (online). Bodies, robots, minds.
Abstract: Serious attempts to build thinking machines began after the Second World War. One line of research, called Cybernetics, used electronic circuitry imitating nervous systems to make machines that learned to recognize simple patterns, and turtle-like robots that found their way to recharging plugs. A different approach, named Artificial Intelligence, harnessed the arithmetic power of post-war computers to abstract reasoning, and by the 1960s made computers prove theorems in logic and geometry, solve calculus problems and play good games of checkers. At the end of the 1960s, research groups at MIT and Stanford attached television cameras and robot arms to their computers, so "thinking" programs could begin to collect information directly from the real world.
Moravec, Hans (online). Robotics. Encyclopaedia Britannica Online.
Abstract: The development of machines with motor, perceptual and cognitive skills once found only in animals and humans. The field parallels and has adopted developments from several areas, among them mechanization, automation and artificial intelligence, but adds its own gripping myth, of complete artificial mechanical human beings. Ancient images and figurines depicting animals and humans can be interpreted as steps towards this vision, as can mechanical automata from classical times on. The pace accelerated rapidly in the twentieth century with the development of electronic sensing and amplification that permitted automata to sense and react as well as merely perform. By the late twentieth century automata controlled by computers could also think and remember.
Moravec, Hans (online). Robots inherit human minds.
Abstract: Our first tools, sticks and stones, were very different from ourselves. But many tools now resemble us, in function or form, and they are beginning to have minds. A loose parallel with our own evolution suggests how they may develop in future. Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today's best computer-controlled robots are like the simpler invertebrates. A thousand-fold increase in computer power in this decade should make possible machines with reptile-like sensory and motor competence. Growing computer power over the next half century will allow robots that learn like mammals, model their world like primates and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended inherited limitations and transformed itself into something quite new. No longer limited by the slow pace of human learning and even slower biological evolution, intelligent machinery will conduct its affairs on an ever faster, ever smaller scale, until coarse physical nature has been converted to fine-grained purposeful thought.
Moravec, Hans (1994). The age of robots. In Max More (ed.), Extro 1, Proceedings of the First Extropy Institute Conference on TransHumanist Thought. Extropy Institute.   (Cited by 2)
Abstract: Our artifacts are getting smarter, and a loose parallel with the evolution of animal intelligence suggests one future course for them. Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today's best computer-controlled robots are like the simpler invertebrates. A thousand-fold increase in computer power in this decade should make possible machines with reptile-like sensory and motor competence. Properly configured, such robots could do in the physical world what personal computers now do in the world of data: act on our behalf as literal-minded slaves. Growing computer power over the next half-century will allow this reptile stage to be surpassed, in stages producing robots that learn like mammals, model their world like primates and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended inherited limitations and transformed itself into something quite new. No longer limited by the slow pace of human learning and even slower biological evolution, intelligent machinery will conduct its affairs on an ever faster, ever smaller scale, until coarse physical nature has been converted to fine-grained purposeful thought.
Parisi, Domenico (2007). Mental robotics. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.
Petersen, Stephen (2007). The ethics of robot servitude. Journal of Experimental and Theoretical Artificial Intelligence 19 (1):43-54.
Abstract: Assume we could someday create artificial creatures with intelligence comparable to our own. Could it be ethical to use them as unpaid labor? There is very little philosophical literature on this topic, but the consensus so far has been that such robot servitude would merely be a new form of slavery. Against this consensus I defend the permissibility of robot servitude, and in particular the controversial case of designing robots so that they want to serve (more or less particular) human ends. A typical objection to this case draws an analogy to the genetic engineering of humans: if designing eager robot servants is permissible, it should also be permissible to design eager human servants. Few ethical views can easily explain even the wrongness of such human engineering, however, and those few explanations that are available break the analogy with engineering robots. The case turns out to be illustrative of profound problems in the field of population ethics.
Schmidt, C. T. A. & Kraemer, F. (2006). Robots, Dennett and the autonomous: A terminological investigation. Minds and Machines 16 (1):73-80.   (Cited by 5)
Abstract: In the present enterprise we take a look at the meaning of Autonomy, how the word has been employed and some of the consequences of its use in the sciences of the artificial. Could and should robots really be autonomous entities? Over and beyond this, we use concepts from the philosophy of mind to spur on enquiry into the very essence of human autonomy. We believe our initiative, like Dennett's life-long research, sheds light upon the problems of robot design with respect to their relation with humans.
Torrance, Steve (1994). The mentality of robots, II. Proceedings of the Aristotelian Society 68 (68):229-262.
Young, R. A. (1994). The mentality of robots, I. Proceedings of the Aristotelian Society 68 (68):199-227.
Ziemke, Tom (2007). What's life got to do with it? In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.