Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.
 
   

6.6. Philosophy of AI, Miscellaneous

Akman, Varol (2000). Introduction to the special issue on philosophical foundations of artificial intelligence. Journal of Experimental and Theoretical Artificial Intelligence 12 (3):247-250.   (Cited by 2 | Google | More links)
Abstract: This is the guest editor's introduction to a JETAI special issue on philosophical foundations of AI.
Alai, Mario (2004). A.I., Scientific discovery and realism. Minds and Machines 14 (1).   (Cited by 2 | Google | More links)
Abstract: Epistemologists have debated at length whether scientific discovery is a rational and logical process. If it is, according to the Artificial Intelligence hypothesis, it should be possible to write computer programs able to discover laws or theories; and if such programs were written, this would definitely prove the existence of a logic of discovery. Attempts in this direction, however, have been unsuccessful: the programs written by Simon's group do, indeed, infer famous laws of physics and chemistry; but having found no new law, they cannot properly be considered discovery machines. The programs written in the Turing tradition, instead, produced new and useful empirical generalizations, but no theoretical discovery, thus failing to prove the logical character of the most significant kind of discoveries. A new cognitivist and connectionist approach by Holland, Holyoak, Nisbett and Thagard looks more promising. Reflection on their proposals helps to understand the complex character of discovery processes, the abandonment of belief in the logic of discovery by logical positivists, and the necessity of a realist interpretation of scientific research.
Apter, Michael J. (1970). The Computer Simulation of Behaviour. Hutchinson.   (Cited by 18 | Google)
Barbour, Ian G. (1999). Neuroscience, artificial intelligence, and human nature: Theological and philosophical reflections. In Neuroscience and the Person: Scientific Perspectives on Divine Action. Notre Dame: University of Notre Dame Press.   (Cited by 7 | Google | More links)
Baum, Eric B. (2004). What Is Thought? Cambridge, MA: Bradford Book/MIT Press.   (Cited by 33 | Google | More links)
Beavers, Anthony F. (2002). Phenomenology and artificial intelligence. Metaphilosophy 33 (1-2):70-82.   (Cited by 6 | Google | More links)
Abstract: Also published in James H. Moor & Terrell Ward Bynum (eds.), CyberPhilosophy: The Intersection of Philosophy and Computing (Oxford: Blackwell, 2002), 66-77.
Bergadano, F. (1993). Machine learning and the foundations of inductive inference. Minds and Machines 3 (1):31-51.   (Google | More links)
Abstract: The problem of valid induction could be stated as follows: are we justified in accepting a given hypothesis on the basis of observations that frequently confirm it? The present paper argues that this question is relevant for the understanding of Machine Learning, but insufficient. Recent research in inductive reasoning has prompted another, more fundamental question: there is not just one given rule to be tested, there are a large number of possible rules, and many of these are somehow confirmed by the data — how are we to restrict the space of inductive hypotheses and choose effectively some rules that will probably perform well on future examples? We analyze whether and how this problem is approached in standard accounts of induction and show the difficulties that are present. Finally, we suggest that the explanation-based learning approach and related methods of knowledge-intensive induction could be, if not a solution, at least a tool for solving some of these problems.
Boden, Margaret A. (1978). Artificial intelligence and Piagetian theory. Synthese 38 (July):389-414.   (Cited by 6 | Google | More links)
Boden, Margaret A. (1989). Artificial Intelligence in Psychology: Interdisciplinary Essays. Cambridge: MIT Press.   (Cited by 15 | Google)
Boden, Margaret A. (1973). How artificial is artificial intelligence? British Journal for the Philosophy of Science 24 (1).   (Google)
Born, Rainer P. (ed.) (1987). Artificial Intelligence: The Case Against. St Martin's Press.   (Cited by 13 | Google)
Bostrom, Nick (1998). How long before superintelligence? International Journal of Futures Studies 2.   (Cited by 22 | Google)
Abstract: This paper outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century. It looks at different estimates of the processing power of the human brain; how long it will take until computer hardware achieves a similar performance; ways of creating the software through bottom-up approaches like the one used by biological brains; how difficult it will be for neuroscience to figure out enough about how brains work to make this approach work; and how fast we can expect superintelligence to be developed once there is human-level artificial intelligence.
Bostrom, Nick (online). The transhumanist FAQ.   (Cited by 14 | Google | More links)
Brooks, Rodney (2001). The relationship between matter and life. Nature 409 (6818):409-411.   (Cited by 65 | Google | More links)
Abstract: Researchers in artificial intelligence (AI) and artificial life (Alife) are interested in understanding the properties of living organisms so that they can build artificial systems that exhibit these properties for useful purposes. AI researchers are interested mostly in perception, cognition and generation of action (Box 1), whereas Alife focuses on evolution, reproduction, morphogenesis and metabolism (Box 2). Neither of these disciplines is a conventional science; rather, they are a mixture of science and engineering. Despite, or perhaps because of, this hybrid structure, both disciplines have been very successful and our world is full of their products. Every time we use a computer we use algorithms and techniques developed by AI researchers. Moore's law states that computational resources for a fixed price roughly double every 18 months. From about 1975 into the early 1990s all the gains of Moore's law went into the changeover from the centralized mainframe to the individual computer on your desk, accommodating a vastly increased number of users. The amount of computing power available to the individual scientist did not change that much, although the price came down by a factor of a thousand. But since the early 1990s, all of Moore's law has gone into increasing the performance of the workstation itself. And both AI and Alife have benefited from this shift. Increased computer power has enabled search-based AI to push ahead with ... complexity of the models is still far below that of any living system. New experiments in evolution simulate spatially isolated populations to investigate speciation. Over the past few years, new directions have emerged in AI, in attempts to implement artificial creatures in simulated or physical environments. Often called the behaviour-based approach, this new mode of thought involves the connection of perception to action with little in the way of intervening representational systems. Rather than relying on search, this approach relies on the correct short, fast connections being present between sensory and motor modules. Behaviour-based approaches began with insect models, but more recently they have been extended to humanoid robots ...
Button, Graham; Coulter, Jeff; Lee, John R. E. & Sharrock, Wes (1995). Computers, Minds, and Conduct. Polity Press.   (Cited by 54 | Google)
Clark, Andy (2002). Artificial intelligence. In Stephen P. Stich & Ted A. Warfield (eds.), Blackwell Guide to Philosophy of Mind. Blackwell.   (Google)
Clark, Andy (2003). Artificial intelligence and the many faces of reason. In Stephen P. Stich & Ted A. Warfield (eds.), The Blackwell Guide to Philosophy of Mind. Blackwell.   (Cited by 3 | Google | More links)
Abstract: ... wide variety of things. It covers the capacity to carry out deductive inferences, to make ...
Copeland, B. Jack (1995). Artificial Intelligence: A Philosophical Introduction. Cambridge: Blackwell.   (Cited by 77 | Google | More links)
Cordeschi, Roberto (2007). AI turns fifty: Revisiting its origins. Applied Artificial Intelligence 21:259-279.   (Cited by 1 | Google | More links)
Cordeschi, Roberto (2006). Searching in a maze, in search of knowledge: Issues in early artificial intelligence. In O. Stock & M. Schaerf (eds.), Lecture Notes in Artificial Intelligence, Vol. 4155. Berlin: Springer, pp. 1-23.   (Google | More links)
Crosson, Frederick J. (ed.) (1967). Philosophy and Cybernetics. Notre Dame: University of Notre Dame Press.   (Cited by 9 | Google)
Culbertson, James T. (1963). The Minds of Robots: Sense Data, Memory Images, and Behavior in Conscious Automata. Urbana: University of Illinois Press.   (Cited by 11 | Google)
Cummins, Robert E. (ed.) (1991). Philosophy and AI. Cambridge: MIT Press.   (Cited by 6 | Google)
Dahlbom, B. (1995). Mind is artificial. In B. Dahlbom (ed.), Dennett and His Critics. Cambridge: Blackwell.   (Cited by 7 | Google)
Dreyfus, Hubert L. (1985). From Socrates to expert systems: The limits and dangers of calculative rationality. In Carl Mitcham & Alois Huning (eds.), Philosophy and Technology II: Information Technology and Computers in Theory and Practice. Reidel.   (Cited by 26 | Google | More links)
Abstract: Actual AI research began auspiciously around 1955 with Allen Newell and Herbert Simon's work at the RAND Corporation. Newell and Simon proved that computers could do more than calculate. They demonstrated that computers were physical symbol systems whose symbols could be made to stand for anything, including features of the real world, and whose programs could be used as rules for relating these features. In this way computers could be used to simulate certain important aspects of intelligence. Thus the information-processing model of the mind was born. But, looking back over these fifty years, theoretical AI, with its promise of a robot like HAL, appears to be a perfect example of what Imre Lakatos has called a "degenerating research program".
Drescher, Gary L. (1991). Made-Up Minds: A Constructivist Approach to Artificial Intelligence. Cambridge: MIT Press.   (Cited by 244 | Google | More links)
Dresher, B. Elan & Hornstein, Norbert (1976). On some supposed contributions of artificial intelligence to the scientific study of language. Cognition 4 (December):321-398.   (Cited by 12 | Google)
Duch, Włodzisław (2007). What is computational intelligence and where is it going? In Wlodzislaw Duch & Jacek Mandziuk (eds.), Challenges for Computational Intelligence. Springer.   (Google | More links)
Abstract: What is Computational Intelligence (CI) and what are its relations with Artificial Intelligence (AI)? A brief survey of the scope of CI journals and books with "computational intelligence" in their title shows that at present it is an umbrella for three core technologies (neural, fuzzy and evolutionary), their applications, and selected fashionable pattern recognition methods. At present CI has no comprehensive foundations and is more a bag of tricks than a solid branch of science. A change of focus from methods to challenging problems is advocated, with CI defined as a part of computer and engineering sciences devoted to the solution of non-algorithmizable problems. In this view AI is a part of CI focused on problems related to higher cognitive functions, while the rest of the CI community works on problems related to perception and control, or lower cognitive functions. Grand challenges on both sides of this spectrum are addressed.
Epstein, Susan L. (1992). The role of memory and concepts in learning. Minds and Machines 2 (3).   (Cited by 10 | Google | More links)
Abstract: The extent to which concepts, memory, and planning are necessary to the simulation of intelligent behavior is a fundamental philosophical issue in Artificial Intelligence. An active and productive segment of the AI community has taken the position that multiple low-level agents, properly organized, can account for high-level behavior. Empirical research on these questions with fully operational systems has been restricted to mobile robots that do simple tasks. This paper recounts experiments with Hoyle, a system in a cerebral, rather than a physical, domain. The program learns to perform well and quickly, often outpacing its human creators at two-person, perfect information board games. Hoyle demonstrates that a surprising amount of intelligent behavior can be treated as if it were situation-determined, that planning is often unnecessary, and that the memory required to support this learning is minimal. Concepts, however, are crucial to this reactive program's ability to learn and perform.
Fetzer, James H. (1990). Artificial Intelligence: Its Scope and Limits. Kluwer.   (Cited by 35 | Google | More links)
Franchi, Stefano & Guzeldere, Guven (1995). Constructions of the Mind: Artificial Intelligence and the Humanities. Stanford Humanities Review.   (Google)
Froese, Tom (2007). On the role of AI in the ongoing paradigm shift within the cognitive sciences. In M. Lungarella (ed.), 50 Years of AI. Springer-Verlag.   (Google | More links)
Abstract: This paper supports the view that the ongoing shift from orthodox to embodied-embedded cognitive science has been significantly influenced by the experimental results generated by AI research. Recently, there has also been a noticeable shift toward enactivism, a paradigm which radicalizes the embodied-embedded approach by placing autonomous agency and lived subjectivity at the heart of cognitive science. Some first steps toward a clarification of the relationship of AI to this further shift are outlined. It is concluded that the success of enactivism in establishing itself as a mainstream cognitive science research program will depend less on progress made in AI research and more on the development of a phenomenological pragmatics.
Hall, John Storrs (forthcoming). Self-improving AI: An analysis. Minds and Machines.   (Google)
Abstract: Self-improvement was one of the aspects of AI proposed for study in the 1956 Dartmouth conference. Turing proposed a “child machine” which could be taught in the human manner to attain adult human-level intelligence. In latter days, the contention that an AI system could be built to learn and improve itself indefinitely has acquired the label of the bootstrap fallacy. Attempts in AI to implement such a system have met with consistent failure for half a century. Technological optimists, however, have maintained that such a system is possible, producing, if implemented, a feedback loop that would lead to a rapid exponential increase in intelligence. We examine the arguments for both positions and draw some conclusions.
Haugeland, John (1985). Artificial Intelligence: The Very Idea. Cambridge: MIT Press.   (Cited by 404 | Google | More links)
Abstract: The idea that human thinking and machine computing are "radically the same" provides the central theme for this marvelously lucid and witty book on...
Haugeland, John (ed.) (1981). Mind Design. MIT Press.   (Cited by 122 | Annotation | Google)
Haugeland, John (ed.) (1997). Mind Design II: Philosophy, Psychology, Artificial Intelligence. Cambridge: MIT Press.   (Cited by 12 | Google | More links)
Abstract: Contributors: Rodney A. Brooks, Paul M. Churchland, Andy Clark, Daniel C. Dennett, Hubert L. Dreyfus, Jerry A. Fodor, Joseph Garon, John Haugeland, Marvin...
Hauser, Larry (online). Artificial intelligence. Internet Encyclopedia of Philosophy.   (Google)
Hayes, Patrick J.; Ford, Kenneth M. & Adams-Webber, J. R. (1994). Human reasoning about artificial intelligence. Journal of Experimental and Theoretical Artificial Intelligence 4:247-63.   (Cited by 5 | Google | More links)
Hookway, Christopher (ed.) (1984). Minds, Machines and Evolution. Cambridge: Cambridge University Press.   (Cited by 11 | Google)
Abstract: This is a volume of original essays written by philosophers and scientists and dealing with philosophical questions arising from work in evolutionary biology and artificial intelligence. In recent years both of these areas have been the focus for attempts to provide a scientific model of a wide range of human capacities - most prominently perhaps in sociobiology and cognitive psychology. The book therefore examines a number of issues related to the search for a 'naturalistic' or scientific account of human experience and behaviour. Some of the essays deal with the application of such models to particular behaviour, stressing the problems raised by consciousness, and the information to be derived from the differing capacities of animals and people; others consider more general questions about the logic of the explanations provided by these kinds of approach. The volume continues the informal series stemming from meetings sponsored by the Thyssen Foundation.
Jaki, Stanley L. (1969). Brain, Mind and Computers. Herder & Herder.   (Cited by 13 | Google)
Keeley, Brian L. (1994). Against the global replacement: On the application of the philosophy of artificial intelligence to artificial life. In C.G. Langton (ed.), Artificial Life III: Proceedings of the Workshop on Artificial Life. Reading, Mass: Addison-Wesley.   (Cited by 11 | Google)
Krellenstein, Marc F. (1987). A reply to parallel computation and the mind-body problem. Cognitive Science 11:155-7.   (Cited by 3 | Annotation | Google)
McDermott, Drew (1997). How intelligent is Deep Blue? New York Times, May 14.   (Cited by 3 | Google)
Minsky, Marvin L. (1986). The Society of Mind. Simon & Schuster.   (Cited by 2409 | Google | More links)
Moor, James H. (1998). Assessing artificial intelligence and its critics. In T. W. Bynum & J. Moor (eds.), The Digital Phoenix. Cambridge: Blackwell.   (Cited by 3 | Google)
Moody, Todd C. (1993). Philosophy and Artificial Intelligence. Prentice-Hall.   (Cited by 12 | Google)
Neumaier, Otto (1987). A Wittgensteinian view of artificial intelligence. In Rainer P. Born (ed.), Artificial Intelligence: The Case Against. St Martin's Press.   (Cited by 1 | Google)
Pollock, John (online). Oscar: A cognitive architecture for intelligent agents.   (Google | More links)
Abstract: The “grand problem” of AI has always been to build artificial agents of human-level intelligence, capable of operating in environments of real-world complexity. OSCAR is a cognitive architecture for such agents, implemented in LISP. OSCAR is based on my extensive work in philosophy concerning both epistemology and rational decision making. This paper provides a detailed overview of OSCAR. The main conclusion is that such agents must be capable of operating against a background of pervasive ignorance, because the real world is too complex for them to know more than a small fraction of what is true. This is handled by giving the agent the power to reason defeasibly. The OSCAR system of defeasible reasoning is sketched. It is argued that if epistemic cognition must be defeasible, planning must also be done defeasibly, and the best way to do that is to reason defeasibly about plans. A sketch is given of how this might work.
Pollock, John L. (1990). Philosophy and artificial intelligence. Philosophical Perspectives 4:461-498.   (Google | More links)
Pollock, John L. (1999). Rational cognition in Oscar. Agent Theories.   (Cited by 7 | Google | More links)
Abstract: Stuart Russell [14] describes rational agents as “those that do the right thing”. The problem of designing a rational agent then becomes the problem of figuring out what the right thing is. There are two approaches to the latter problem, depending upon the kind of agent we want to build. On the one hand, anthropomorphic agents are those that can help human beings rather directly in their intellectual endeavors. These endeavors consist of decision making and data processing. An agent that can help humans in these enterprises must make decisions and draw conclusions that are rational by human standards of rationality. Anthropomorphic agents can be contrasted with goal-oriented agents — those that can carry out certain narrowly-defined tasks in the world. Here the objective is to get the job done, and it makes little difference how the agent achieves its design goal.
Pollock, John L. (2000). Rationality in philosophy and artificial intelligence. In The Proceedings of the Twentieth World Congress of Philosophy, Volume 9: Philosophy of Mind. Charlottesville: Philosophy Doc Ctr.   (Google)
Pollock, John L. (online). The Oscar project.   (Google)
Preston, Beth (1991). AI, anthropocentrism, and the evolution of 'intelligence'. Minds and Machines 1 (3):259-277.   (Cited by 3 | Google | More links)
Abstract: Intuitive conceptions guide practice, but practice reciprocally reshapes intuition. The intuitive conception of intelligence in AI was originally highly anthropocentric. However, the internal dynamics of AI research have resulted in a divergence from anthropocentric concerns. In particular, the increasing emphasis on commonsense knowledge and peripheral intelligence (perception and movement) in effect constitutes an incipient reorientation of intuitions about the nature of intelligence in a non-anthropocentric direction. I argue that this conceptual shift undermines Joseph Weizenbaum's claim that the project of artificial intelligence is inherently dehumanizing.
Puccetti, Roland (1974). Pattern recognition in computers and the human brain: With special application to chess playing machines. British Journal for the Philosophy of Science 25 (2):137-154.   (Cited by 7 | Google | More links)
Abstract: 1 Matching Templates and Feature Analysers. 2 Modes of Perception in Left and Right Cerebral Hemispheres. 3 Identification and Recognition. 4 Chess Playing Machines.
Robinson, William S. (1992). Computers, Minds, and Robots. Temple University Press.   (Cited by 8 | Google)
Russell, S. (1991). Inductive learning by machines. Philosophical Studies 64 (October):37-64.   (Cited by 6 | Annotation | Google | More links)
Rychlak, Joseph F. (1991). Artificial Intelligence and Human Reason: A Teleological Critique. Columbia University Press.   (Cited by 13 | Google)
Schiaffonati, Viola (2003). A framework for the foundation of the philosophy of artificial intelligence. Minds and Machines 13 (4):537-552.   (Google | More links)
Abstract: The peculiarity of the relationship between philosophy and Artificial Intelligence (AI) has been evident since the advent of AI. This paper aims to lay the basis of an extended and well-founded philosophy of AI: it delineates a multi-layered general framework to which different contributions in the field may be traced back. The core point is to underline how, in the same scenario, both the role of philosophy in AI and the role of AI in philosophy must be considered. Moreover, this framework is revised and extended in the light of a type of multiagent system devoted to addressing the issue of scientific discovery both from a conceptual and from a practical point of view.
Simon, Herbert A. (1995). Machine as mind. In Android Epistemology. Cambridge: MIT Press.   (Cited by 9 | Google | More links)
Sloman, Aaron (1978). The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind. Harvester.   (Cited by 87 | Annotation | Google | More links)
Sloman, Aaron (2002). The irrelevance of Turing machines to artificial intelligence. In Matthias Scheutz (ed.), Computationalism: New Directions. MIT Press.   (Cited by 18 | Google | More links)
Sluckin, W. (1954). Minds and Machines. London: Penguin.   (Cited by 16 | Google)
Sparrow, Robert (2002). The march of the robot dogs. Ethics and Information Technology 4 (4):305-318.   (Cited by 3 | Google | More links)
Storrs Hall, J. (2006). Nano-enabled AI: Some philosophical issues. International Journal of Applied Philosophy 20 (2):247-261.   (Google)
Thagard, Paul R. (1991). Philosophical and computational models of explanation. Philosophical Studies 64 (October):87-104.   (Cited by 4 | Annotation | Google | More links)
Thagard, Paul R. (1990). Philosophy and machine learning. Canadian Journal of Philosophy 20 (2):261-76.   (Cited by 2 | Google)
Thagard, Paul R. (1986). Parallel computation and the mind-body problem. Cognitive Science 10:301-18.   (Cited by 28 | Annotation | Google | More links)
Thórisson, Kristinn R. (2007). Integrated A.I. Systems. Minds and Machines 17 (1).   (Google | More links)
Abstract: The broad range of capabilities exhibited by humans and animals is achieved through a large set of heterogeneous, tightly integrated cognitive mechanisms. To move artificial systems closer to such general-purpose intelligence we cannot avoid replicating some subset—quite possibly a substantial portion—of this large set. Progress in this direction requires that systems integration be taken more seriously as a fundamental research problem. In this paper I make the argument that intelligence must be studied holistically. I present key issues that must be addressed in the area of integration and propose solutions for speeding up the rate of progress towards more powerful, integrated A.I. systems, including (a) tools for building large, complex architectures, (b) a design methodology for building realtime A.I. systems and (c) methods for facilitating code sharing at the community level.
Torrance, Steven (ed.) (1984). The Mind and the Machine: Philosophical Aspects of Artificial Intelligence. Chichester: Horwood.   (Cited by 12 | Google)
van Gelder, Tim (1998). Into the deep blue yonder. Quadrant 42:33-39.   (Google)
Vinge, Vernor (online). The technological singularity.   (Cited by 43 | Google | More links)
von Neumann, John (1958). The Computer and the Brain. New Haven: Yale University Press.   (Cited by 404 | Google | More links)
Wagman, Morton (1991). Artificial Intelligence and Human Cognition. New York: Praeger.   (Cited by 7 | Google)
Warnick, Barbara (2004). Rehabilitating AI: Argument loci and the case for artificial intelligence. Argumentation 18 (2):149-170.   (Google | More links)
Winograd, Terry & Flores, Fernando (1987). Understanding Computers and Cognition. Addison-Wesley.   (Cited by 3155 | Google | More links)
Yudkowsky, Eliezer (online). Creating friendly AI.   (Cited by 5 | Google)
Yudkowsky, Eliezer (online). Staring into the singularity.   (Google)
Abstract: 1: The End of History 2: The Beyondness of the Singularity 2.1: The Definition of Smartness 2.2: Perceptual Transcends 2.3: Great Big Numbers 2.4: Smarter Than We Are 3: Sooner Than You Think 4: Uploading 5: The Interim Meaning of Life 6: Getting to the Singularity

6.6a Philosophy of AI, General Works

6.6b Philosophy of AI, Misc

Adam, Alison (2000). Deleting the subject: A feminist reading of epistemology in artificial intelligence. Minds and Machines 10 (2).   (Google)
Abstract: This paper argues that AI follows classical versions of epistemology in assuming that the identity of the knowing subject is not important. In other words this serves to 'delete the subject'. This disguises an implicit hierarchy of knowers involved in the representation of knowledge in AI which privileges the perspective of those who design and build the systems over alternative perspectives. The privileged position reflects Western, professional masculinity. Alternative perspectives, denied a voice, belong to less powerful groups including women. Feminist epistemology can be used to approach this from new directions, in particular, to show how women's knowledge may be left out of consideration by AI's focus on masculine subjects. The paper uncovers the tacitly assumed Western professional male subjects in two flagship AI systems, Cyc and Soar.
Kirsh, David (1995). The intelligent use of space. Artificial Intelligence 73:31-68.   (Google)
Abstract: The objective of this essay is to provide the beginning of a principled classification of some of the ways space is intelligently used. Studies of planning have typically focused on the temporal ordering of action, leaving as unaddressed questions of where to lay down instruments, ingredients, work-in-progress, and the like. But, in having a body, we are spatially located creatures: we must always be facing some direction, have only certain objects in view, be within reach of certain others. How we manage the spatial arrangement of items around us is not an afterthought: it is an integral part of the way we think, plan, and behave. The proposed classification has three main categories: spatial arrangements that simplify choice; spatial arrangements that simplify perception; and spatial dynamics that simplify internal computation. The data for such a classification is drawn from videos of cooking, assembly and packing, everyday observations in supermarkets, workshops and playrooms, and experimental studies of subjects playing Tetris, the computer game. This study, therefore, focuses on interactive processes in the medium and short term: on how agents set up their workplace for particular tasks, and how they continuously manage that workplace.
Muntean, Ioan & Wright, Cory D. (2007). Autonomy, allostatic mechanisms, and AI: A biomimetic perspective. Pragmatics and Cognition 15:489–513.   (Google)
Abstract: We argue that the concepts of mechanism and autonomy appear to be antagonistic when autonomy is conflated with agency. Once these concepts are disentangled, it becomes clearer how autonomy emerges from complex forms of control. Subsequently, current biomimetic strategies tend to focus on homeostatic regulatory systems; we propose that research in AI and robotics would do well to incorporate biomimetic strategies that instead invoke models of allostatic mechanisms as a way of understanding how to enhance autonomy in artificial systems.
Penco, Carlo (online). Expressing the Background. Icelandic Philosophical Association (talks).   (Google)
Silva, Porfirio & Lima, Pedro U. (2007). Institutional robotics. In F. Almeida e Costa et al. (eds.), Advances in Artificial Life: ECAL 2007. Springer-Verlag.   (Google)
Abstract: Pioneering approaches to Artificial Intelligence have traditionally neglected, in chronological sequence, the agent body, the world where the agent is situated, and the other agents. With the advent of Collective Robotics approaches, important progress was made toward embodying and situating the agents, together with the introduction of collective intelligence. However, the currently used models of social environments are still rather poor, jeopardizing attempts to develop truly intelligent robot teams. In this paper, we propose a roadmap for a new approach to the design of multi-robot systems, mainly inspired by concepts from Institutional Economics, an alternative to mainstream neoclassical economic theory. Our approach aims to enrich the design of robot collectives by adding, to the currently popular emergentist view, the concepts of physically and socially bounded autonomy of cognitive agents, uncoupled interaction among them, and deliberately set up coordination devices.