This mode searches for entries containing all the entered words in their title, author, date, comment field, or in any of the many other fields shown on OPC pages.
This mode searches for entries containing the text string you entered in their author field. Note that the database does not have first names for all authors, so it is preferable to search by surname only. If you search for a full name or a name with an initial, enter it in the format used internally, namely the "Lastname, Firstname" or "Lastname, F." format.
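For illustration, a full name can be converted into that internal format with a small helper like the following (a hypothetical function, not part of the site's software):

```python
def to_index_format(full_name, abbreviate=False):
    """Convert 'Firstname Lastname' into the 'Lastname, Firstname'
    (or 'Lastname, F.') format used by the author index."""
    parts = full_name.split()
    if len(parts) < 2:
        return full_name  # surname-only queries work as-is
    first, last = " ".join(parts[:-1]), parts[-1]
    if abbreviate:
        first = first[0] + "."  # keep only the initial
    return f"{last}, {first}"

print(to_index_format("David Chalmers"))        # Chalmers, David
print(to_index_format("David Chalmers", True))  # Chalmers, D.
```

Searching with the unconverted "Firstname Lastname" form may miss entries, since the author field is stored in the inverted order.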
This mode differs from the all fields mode in two respects. First, some information not publicly available on the site is searched, e.g., abstracts and excerpts gathered by the crawler, which are not always accurate but can help broaden one's search. Second, you may prefix any term with a '+' or '-' to narrow the search to entries containing it or not containing it, respectively. Terms not prefixed by a '+' are not mandatory; instead, they are weighted by their frequency in order to determine the best search results. You may also search for a literal string composed of several words by putting it in double quotation marks (").
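The query syntax above can be sketched with a small parser. This is an illustrative reimplementation, not the site's actual code, and the regular expression is an assumption about how such queries are typically tokenized:

```python
import re

# Match a quoted phrase or a single term, keeping any '+' (required)
# or '-' (excluded) prefix. Unprefixed terms are optional and only
# affect result ranking.
TOKEN = re.compile(r'([+-]?)"([^"]+)"|([+-]?)(\S+)')

def parse_query(query):
    required, excluded, optional = [], [], []
    for m in TOKEN.finditer(query):
        prefix = m.group(1) or m.group(3)
        term = m.group(2) or m.group(4)
        bucket = {"+": required, "-": excluded}.get(prefix, optional)
        bucket.append(term)
    return required, excluded, optional

req, exc, opt = parse_query('+turing -chess "artificial intelligence"')
print(req, exc, opt)  # ['turing'] ['chess'] ['artificial intelligence']
```

A matching engine would then keep only entries containing every required term and none of the excluded ones, and score the survivors by how many optional terms they contain.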
Note that short and/or common words are ignored by the search engine.
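Such filtering can be pictured as follows. The engine's real stopword list and length cutoff are internal; the ones below are hypothetical placeholders:

```python
# Hypothetical stopword filter: the engine's actual list is not public.
STOPWORDS = {"the", "a", "an", "of", "and", "or", "in", "on", "is"}
MIN_LENGTH = 3

def filter_terms(terms):
    """Drop words that are too short or too common to be indexed."""
    return [t for t in terms
            if len(t) >= MIN_LENGTH and t.lower() not in STOPWORDS]

print(filter_terms(["the", "logic", "of", "discovery"]))  # ['logic', 'discovery']
```

This is why a query like "the logic of discovery" is effectively searched as "logic discovery".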
Try PhilPapers to find published items that are available on a subscription basis.
Abstract: Epistemologists have debated at length whether scientific discovery is a rational and logical process. If it is, according to the Artificial Intelligence hypothesis, it should be possible to write computer programs able to discover laws or theories; and if such programs were written, this would definitely prove the existence of a logic of discovery. Attempts in this direction, however, have been unsuccessful: the programs written by Simon's group do indeed infer famous laws of physics and chemistry, but having found no new law, they cannot properly be considered discovery machines. The programs written in the Turing tradition, instead, produced new and useful empirical generalizations, but no theoretical discovery, thus failing to prove the logical character of the most significant kind of discoveries. A new cognitivist and connectionist approach by Holland, Holyoak, Nisbett and Thagard looks more promising. Reflection on their proposals helps to understand the complex character of discovery processes, the abandonment of belief in the logic of discovery by logical positivists, and the necessity of a realist interpretation of scientific research.
Abstract: In CyberPhilosophy: The Intersection of Philosophy and Computing, edited by James H. Moor and Terrell Ward Bynum (Oxford, UK: Blackwell, 2002), 66-77. Also in Metaphilosophy 33.1/2 (2002): 70-82.
Abstract: This paper outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century. It looks at different estimates of the processing power of the human brain; how long it will take until computer hardware achieves a similar performance; ways of creating the software through bottom-up approaches like the one used by biological brains; how difficult it will be for neuroscience to figure out enough about how brains work to make this approach work; and how fast we can expect superintelligence to be developed once there is human-level artificial intelligence.
Abstract: Researchers in artificial intelligence (AI) and artificial life (Alife) are interested in understanding the properties of living organisms so that they can build artificial systems that exhibit these properties for useful purposes. AI researchers are interested mostly in perception, cognition and generation of action (Box 1), whereas Alife focuses on evolution, reproduction, morphogenesis and metabolism (Box 2). Neither of these disciplines is a conventional science; rather, they are a mixture of science and engineering. Despite, or perhaps because of, this hybrid structure, both disciplines have been very successful and our world is full of their products. Every time we use a computer we use algorithms and techniques developed by AI researchers. Moore's law states that computational resources for a fixed price roughly double every 18 months. From about 1975 into the early 1990s all the gains of Moore's law went into the changeover from the centralized mainframe to the individual computer on your desk, accommodating a vastly increased number of users. The amount of computing power available to the individual scientist did not change that much, although the price came down by a factor of a thousand. But since the early 1990s, all of Moore's law has gone into increasing the performance of the workstation itself. And both AI and Alife have benefited from this shift. Increased computer power has enabled search-based AI to push ahead with … complexity of the models is still far below that of any living system. New experiments in evolution simulate spatially isolated populations to investigate speciation. Over the past few years, new directions have emerged in AI, in attempts to implement artificial creatures in simulated or physical environments. Often called the behaviour-based approach, this new mode of thought involves the connection of perception to action with little in the way of intervening representational systems. Rather than relying on search, this approach relies on the correct short, fast connections being present between sensory and motor modules. Behaviour-based approaches began with insect models, but more recently they have been extended to humanoid robots —
Abstract: Actual AI research began auspiciously around 1955 with Allen Newell and Herbert Simon's work at the RAND Corporation. Newell and Simon proved that computers could do more than calculate. They demonstrated that computers were physical symbol systems whose symbols could be made to stand for anything, including features of the real world, and whose programs could be used as rules for relating these features. In this way computers could be used to simulate certain important aspects of intelligence. Thus the information-processing model of the mind was born. But, looking back over these fifty years, it seems that theoretical AI with its promise of a robot like HAL appears to be a perfect example of what Imre Lakatos has called a "degenerating research program".
Abstract: What is Computational Intelligence (CI) and what are its relations with Artificial Intelligence (AI)? A brief survey of the scope of CI journals and books with "computational intelligence" in their title shows that at present it is an umbrella for three core technologies (neural, fuzzy and evolutionary), their applications, and selected fashionable pattern recognition methods. At present CI has no comprehensive foundations and is more a bag of tricks than a solid branch of science. A change of focus from methods to challenging problems is advocated, with CI defined as a part of computer and engineering sciences devoted to the solution of non-algorithmizable problems. In this view AI is a part of CI focused on problems related to higher cognitive functions, while the rest of the CI community works on problems related to perception and control, or lower cognitive functions. Grand challenges on both sides of this spectrum are addressed.
Abstract: The extent to which concepts, memory, and planning are necessary to the simulation of intelligent behavior is a fundamental philosophical issue in Artificial Intelligence. An active and productive segment of the AI community has taken the position that multiple low-level agents, properly organized, can account for high-level behavior. Empirical research on these questions with fully operational systems has been restricted to mobile robots that do simple tasks. This paper recounts experiments with Hoyle, a system in a cerebral, rather than a physical, domain. The program learns to perform well and quickly, often outpacing its human creators at two-person, perfect information board games. Hoyle demonstrates that a surprising amount of intelligent behavior can be treated as if it were situation-determined, that planning is often unnecessary, and that the memory required to support this learning is minimal. Concepts, however, are crucial to this reactive program's ability to learn and perform.
Abstract: This paper supports the view that the ongoing shift from orthodox to embodied-embedded cognitive science has been significantly influenced by the experimental results generated by AI research. Recently, there has also been a noticeable shift toward enactivism, a paradigm which radicalizes the embodied-embedded approach by placing autonomous agency and lived subjectivity at the heart of cognitive science. Some first steps toward a clarification of the relationship of AI to this further shift are outlined. It is concluded that the success of enactivism in establishing itself as a mainstream cognitive science research program will depend less on progress made in AI research and more on the development of a phenomenological pragmatics.
Abstract: The “grand problem” of AI has always been to build artificial agents of human-level intelligence, capable of operating in environments of real-world complexity. OSCAR is a cognitive architecture for such agents, implemented in LISP. OSCAR is based on my extensive work in philosophy concerning both epistemology and rational decision making. This paper provides a detailed overview of OSCAR. The main conclusion is that such agents must be capable of operating against a background of pervasive ignorance, because the real world is too complex for them to know more than a small fraction of what is true. This is handled by giving the agent the power to reason defeasibly. The OSCAR system of defeasible reasoning is sketched. It is argued that if epistemic cognition must be defeasible, planning must also be done defeasibly, and the best way to do that is to reason defeasibly about plans. A sketch is given of how this might work.
Abstract: Stuart Russell describes rational agents as “those that do the right thing”. The problem of designing a rational agent then becomes the problem of figuring out what the right thing is. There are two approaches to the latter problem, depending upon the kind of agent we want to build. On the one hand, anthropomorphic agents are those that can help human beings rather directly in their intellectual endeavors. These endeavors consist of decision making and data processing. An agent that can help humans in these enterprises must make decisions and draw conclusions that are rational by human standards of rationality. Anthropomorphic agents can be contrasted with goal-oriented agents, those that can carry out certain narrowly-defined tasks in the world. Here the objective is to get the job done, and it makes little difference how the agent achieves its design goal.
Abstract: The peculiarity of the relationship between philosophy and Artificial Intelligence (AI) has been evident since the advent of AI. This paper aims to lay the foundations of an extended and well-founded philosophy of AI: it delineates a multi-layered general framework to which different contributions in the field may be traced back. The core point is to underline how, in the same scenario, both the role of philosophy on AI and the role of AI on philosophy must be considered. Moreover, this framework is revised and extended in the light of the consideration of a type of multiagent system devoted to addressing the issue of scientific discovery from both a conceptual and a practical point of view.
Abstract: The Centre for Applied Philosophy and Public Ethics (CAPPE) was established in 2000 as a Special Research Centre in applied philosophy funded by the Australian Research Council. It has combined the complementary strengths of two existing centres specialising in applied philosophy, namely the Centre for Philosophy and Public Issues (CPPI) at the University of Melbourne and the Centre for Professional and Applied Ethics at Charles Sturt University. It operates as a unified centre with two divisions: in Melbourne at the University of Melbourne and in Canberra at Charles Sturt University. The Director of CAPPE and the head of the Canberra node is Professor Seumas Miller. Professor C.A.J. (Tony) Coady is the Deputy Director of CAPPE and the head of the Melbourne node.
Abstract: The broad range of capabilities exhibited by humans and animals is achieved through a large set of heterogeneous, tightly integrated cognitive mechanisms. To move artificial systems closer to such general-purpose intelligence we cannot avoid replicating some subset—quite possibly a substantial portion—of this large set. Progress in this direction requires that systems integration be taken more seriously as a fundamental research problem. In this paper I make the argument that intelligence must be studied holistically. I present key issues that must be addressed in the area of integration and propose solutions for speeding up the rate of progress towards more powerful, integrated A.I. systems, including (a) tools for building large, complex architectures, (b) a design methodology for building realtime A.I. systems and (c) methods for facilitating code sharing at the community level.
Abstract: Pioneer approaches to Artificial Intelligence have traditionally neglected, in a chronological sequence, the agent body, the world where the agent is situated, and the other agents. With the advent of Collective Robotics approaches, important progress was made toward embodying and situating the agents, together with the introduction of collective intelligence. However, the currently used models of social environments are still rather poor, jeopardizing attempts to develop truly intelligent robot teams. In this paper, we propose a roadmap for a new approach to the design of multi-robot systems, mainly inspired by concepts from Institutional Economics, an alternative to mainstream neoclassical economic theory. Our approach aims to enrich the design of robot collectives by adding, to the currently popular emergentist view, the concepts of physically and socially bounded autonomy of cognitive agents, uncoupled interaction among them, and deliberately set up coordination devices.