MindPapers is now part of PhilPapers: online research in philosophy, a new service with many more features.
 Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.

6.4. Special Topics in AI

  •  The Singularity [9]
  •  Mind Uploading [1]
  • Rhodes, Kris (ms). Vindication of the Rights of Machine.   (Google | More links)
    Abstract: In this paper, I argue that certain Machines can have rights independently of whether they are sentient, or conscious, or whatever you might call it.
    Bostrom, Nick (ms). Ethical issues in advanced artificial intelligence.   (Google | More links)
    Abstract: The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive pursuit, a superintelligence could also easily surpass humans in the quality of its moral thinking. However, it would be up to the designers of the superintelligence to specify its original motivations. Since the superintelligence may become unstoppably powerful because of its intellectual superiority and the technologies it could develop, it is crucial that it be provided with human-friendly motivations. This paper surveys some of the unique ethical issues in creating superintelligence, discusses what motivations we ought to give a superintelligence, and introduces some cost-benefit considerations relating to whether the development of superintelligent machines ought to be accelerated or retarded.
    Bostrom, Nick (1998). How long before superintelligence? International Journal of Futures Studies 2.   (Cited by 22 | Google)
    Abstract: This paper outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century. It looks at different estimates of the processing power of the human brain; how long it will take until computer hardware achieves a similar performance; ways of creating the software through bottom-up approaches like the one used by biological brains; how difficult it will be for neuroscience to figure out enough about how brains work to make this approach work; and how fast we can expect superintelligence to be developed once there is human-level artificial intelligence.
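    The hardware-extrapolation step in the abstract lends itself to back-of-the-envelope arithmetic. The sketch below is illustrative only: the brain-capacity estimate, the 1998 starting point, and the doubling time are stand-in assumptions, not figures taken from Bostrom's paper.

```python
import math

# Back-of-the-envelope extrapolation of the kind the abstract describes.
# All three constants are illustrative assumptions, not Bostrom's figures.
BRAIN_OPS_PER_SEC = 1e17      # assumed estimate of human-brain processing power
START_OPS_PER_SEC = 1e12      # assumed top supercomputer throughput c. 1998
DOUBLING_TIME_YEARS = 1.5     # assumed hardware doubling time (Moore's law)

# Doublings needed to close the gap, and the calendar time they take
# if the doubling trend continues uninterrupted.
doublings_needed = math.log2(BRAIN_OPS_PER_SEC / START_OPS_PER_SEC)
years_to_parity = doublings_needed * DOUBLING_TIME_YEARS

print(f"{doublings_needed:.1f} doublings, about {years_to_parity:.0f} years")
```

    On these assumed numbers, brain-scale hardware arrives roughly 25 years after 1998, i.e. within the "first third of the next century" window the abstract argues for; the conclusion shifts linearly with the assumed doubling time and only logarithmically with the two throughput estimates.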
    Bostrom, Nick (2003). Taking intelligent machines seriously: Reply to critics. Futures 35 (8):901-906.   (Google | More links)
    Abstract: In an earlier paper in this journal[1], I sought to defend the claims that (1) substantial probability should be assigned to the hypothesis that machines will outsmart humans within 50 years, (2) such an event would have immense ramifications for many important areas of human concern, and that consequently (3) serious attention should be given to this scenario. Here, I will address a number of points made by several commentators
    Bostrom, Nick (ms). When machines outsmart humans.   (Google)
    Abstract: Artificial intelligence is a possibility that should not be ignored in any serious thinking about the future, and it raises many profound issues for ethics and public policy that philosophers ought to start thinking about. This article outlines the case for thinking that human-level machine intelligence might well appear within the next half century. It then explains four immediate consequences of such a development, and argues that machine intelligence would have a revolutionary impact on a wide range of the social, political, economic, commercial, technological, scientific and environmental issues that humanity will face over the coming decades
    Chalmers, David J., The singularity: A philosophical analysis.   (Google)
    Abstract: What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”. The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. The key idea is that a machine that is more intelligent than humans will be better than humans at designing machines. So it will be capable of designing a machine more intelligent than the most intelligent machine that humans can design. So if it is itself designed by humans, it will be capable of designing a machine more intelligent than itself. By similar reasoning, this next machine will also be capable of designing a machine more intelligent than itself. If every machine in turn does what it is capable of, we should expect a sequence of ever more intelligent machines. This intelligence explosion is sometimes combined with another idea, which we might call the “speed explosion”. The argument for a speed explosion starts from the familiar observation that computer processing speed doubles at regular intervals. Suppose that speed doubles every two years and will do so indefinitely. Now suppose that we have human-level artificial intelligence designing new processors. Then faster processing will lead to faster designers and an ever-faster design cycle, leading to a limit point soon afterwards. The argument for a speed explosion was set out by the artificial intelligence researcher Ray Solomonoff in his 1985 article “The Time Scale of Artificial Intelligence”. Eliezer Yudkowsky gives a succinct version of the argument in his 1996 article “Staring into the Singularity”: “Computing speed doubles every two subjective years of work.”
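    The speed-explosion arithmetic quoted above can be made concrete. If each doubling of hardware speed takes two subjective years of design work, and the designers themselves run on the improving hardware, then each doubling takes half as much real time as the one before, so the total real time for unboundedly many doublings is a convergent geometric series. A minimal sketch, with the two-year figure from the quoted argument as its only input:

```python
# Speed-explosion arithmetic: each hardware doubling costs a fixed amount
# of *subjective* design time, but once the designers run on the improved
# hardware, the *real* time per doubling halves on every cycle.

SUBJECTIVE_YEARS_PER_DOUBLING = 2.0  # figure from the quoted argument

def real_years_for(n_doublings: int) -> float:
    """Real calendar years consumed by the first n_doublings cycles."""
    total = 0.0
    speed = 1.0  # speed multiplier over the baseline designers
    for _ in range(n_doublings):
        total += SUBJECTIVE_YEARS_PER_DOUBLING / speed  # real time this cycle
        speed *= 2.0
    return total

# Geometric series 2 + 1 + 1/2 + 1/4 + ... converges to 4 real years.
print(real_years_for(10))   # close to 4
print(real_years_for(100))  # effectively 4
```

    This is the "limit point" the abstract mentions: on the stated premises, every doubling ever to occur fits inside four calendar years. The argument's force therefore rests entirely on the premise that doubling continues indefinitely.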
    Hall, John Storrs (forthcoming). Self-improving AI: An analysis. Minds and Machines.   (Google)
    Abstract: Self-improvement was one of the aspects of AI proposed for study in the 1956 Dartmouth conference. Turing proposed a “child machine” which could be taught in the human manner to attain adult human-level intelligence. In latter days, the contention that an AI system could be built to learn and improve itself indefinitely has acquired the label of the bootstrap fallacy. Attempts in AI to implement such a system have met with consistent failure for half a century. Technological optimists, however, have maintained that such a system is possible, producing, if implemented, a feedback loop that would lead to a rapid exponential increase in intelligence. We examine the arguments for both positions and draw some conclusions.
    Hanson, Robin, Is a singularity just around the corner?   (Google)
    Abstract: Economic growth is determined by the supply and demand of investment capital; technology determines the demand for capital, while human nature determines the supply. The supply curve has two distinct parts, giving the world economy two distinct modes. In the familiar slow growth mode, rates of return are limited by human discount rates. In the fast growth mode, investment is limited by the world's wealth. Historical trends suggest that we may transition to the fast mode in roughly another century and a half
    Vinge, Vernor (online). The technological singularity.   (Cited by 43 | Google | More links)
    Yudkowsky, Eliezer (online). Staring into the singularity.   (Google)
    Abstract: 1: The End of History 2: The Beyondness of the Singularity 2.1: The Definition of Smartness 2.2: Perceptual Transcends 2.3: Great Big Numbers 2.4: Smarter Than We Are 3: Sooner Than You Think 4: Uploading 5: The Interim Meaning of Life 6: Getting to the Singularity

    6.4a Cyborgs

    6.4b Transhumanism

    6.4c Cybernetics

    Mazis, Glen (2008). Cyborg Life: The In-Between of Humans and Machines. PhaenEx 3 (2):14-36.   (Google)

    6.4d Dynamical Systems

    Abrahamsen, Adele A. & Bechtel, William P. (2006). Phenomena and mechanisms: Putting the symbolic, connectionist, and dynamical systems debate in broader perspective. In R. Stainton (ed.), Contemporary Debates in Cognitive Science. Basil Blackwell.   (Google | More links)
    Abstract: Cognitive science is, more than anything else, a pursuit of cognitive mechanisms. To make headway towards a mechanistic account of any particular cognitive phenomenon, a researcher must choose among the many architectures available to guide and constrain the account. It is thus fitting that this volume on contemporary debates in cognitive science includes two issues of architecture, each articulated in the 1980s but still unresolved:
    • Just how modular is the mind? (section 1) – a debate initially pitting encapsulated mechanisms (Fodorian modules that feed their ultimate outputs to a nonmodular central cognition) against highly interactive ones (e.g., connectionist networks that continuously feed streams of output to one another).
    • Does the mind process language-like representations according to formal rules? (this section) – a debate initially pitting symbolic architectures (such as Chomsky’s generative grammar or Fodor’s language of thought) against less language-like architectures (such as connectionist or dynamical ones).
    Our project here is to consider the second issue within the broader context of where cognitive science has been and where it is headed. The notion that cognition in general—not just language processing—involves rules operating on language-like representations actually predates cognitive science. In traditional philosophy of mind, mental life is construed as involving propositional attitudes—that is, such attitudes towards propositions as believing, fearing, and desiring that they be true—and logical inferences from them. On this view, if a person desires that a proposition be true and believes that if she performs a certain action it will become true, she will make the inference and (absent any overriding consideration) perform the action
    Bechtel, William P. (online). Dynamics and decomposition: Are they compatible?   (Cited by 4 | Google)
    Abstract: Much of cognitive neuroscience as well as traditional cognitive science is engaged in a quest for mechanisms through a project of decomposition and localization of cognitive functions. Some advocates of the emerging dynamical systems approach to cognition construe it as in opposition to the attempt to decompose and localize functions. I argue that this case is not established and rather explore how dynamical systems tools can be used to analyze and model cognitive functions without abandoning the use of decomposition and localization to understand mechanisms of cognition
    Bechtel, William P. (1998). Representations and cognitive explanations: Assessing the dynamicist challenge in cognitive science. Cognitive Science 22 (3):295-317.   (Cited by 64 | Google | More links)
    Abstract: Advocates of dynamical systems theory (DST) sometimes employ revolutionary rhetoric. In an attempt to clarify how DST models differ from others in cognitive science, I focus on two issues raised by DST: the role for representations in mental models and the conception of explanation invoked. Two features of representations are their role in standing-in for features external to the system and their format. DST advocates sometimes claim to have repudiated the need for stand-ins in DST models, but I argue that they are mistaken. Nonetheless, DST does offer new ideas as to the format of representations employed in cognitive systems. With respect to explanation, I argue that some DST models are better seen as conforming to the covering-law conception of explanation than to the mechanistic conception of explanation implicit in most cognitive science research. But even here, I argue, DST models are a valuable complement to more mechanistic cognitive explanations
    Bedau, Mark A. (1997). Emergent models of supple dynamics in life and mind. Brain and Cognition 34:5-27.   (Cited by 12 | Google | More links)
    Abstract: The dynamical patterns in mental phenomena have a characteristic suppleness, a looseness or softness that persistently resists precise formulation, which apparently underlies the frame problem of artificial intelligence. This suppleness also undermines contemporary philosophical functionalist attempts to define mental capacities. Living systems display an analogous form of supple dynamics. However, the supple dynamics of living systems have been captured in recent artificial life models, due to the emergent architecture of those models. This suggests that analogous emergent models might be able to explain supple dynamics of mental phenomena. These emergent models of the supple mind, if successful, would refashion the nature of contemporary functionalism in the philosophy of mind.
    Chemero, Anthony (2000). Anti-representationalism and the dynamical stance. Philosophy of Science 67 (4):625-647.   (Cited by 12 | Google | More links)
    Abstract: Arguments in favor of anti-representationalism in cognitive science often suffer from a lack of attention to detail. The purpose of this paper is to fill in the gaps in these arguments, and in so doing show that at least one form of anti-representationalism is potentially viable. After giving a teleological definition of representation and applying it to a few models that have inspired anti-representationalist claims, I argue that anti-representationalism must be divided into two distinct theses, one ontological, one epistemological. Given the assumptions that define the debate, I give reason to think that the ontological thesis is false. I then argue that the epistemological thesis might, in the end, turn out to be true, despite a potentially serious difficulty. Along the way, there will be a brief detour to discuss a controversy from early twentieth century physics.
    Chemero, Tony (2001). Dynamical explanation and mental representations. Trends in Cognitive Sciences 5 (4):141-142.   (Cited by 4 | Google | More links)
    Abstract: Markman and Dietrich recently recommended extending our understanding of representation to incorporate insights from some “alternative” theories of cognition: perceptual symbol systems, situated action, embodied cognition, and dynamical systems. In particular, they suggest that allowances be made for new types of representation which had been previously under-emphasized in cognitive science. The amendments they recommend are based upon the assumption that the alternative positions each agree with the classical view that cognition requires representations, internal mediating states that bear information. In the case of one of the alternatives, dynamical systems, this is simply false: many dynamically-oriented cognitive scientists are anti-representationalists.
    Chemero, Anthony & Cordeiro, William (online). Dynamical, ecological sub-persons.   (Cited by 1 | Google)
    Clark, Andy (online). Commentary on "the modularity of dynamic systems".   (Google | More links)
    Abstract: 1. Throughout the paper, and especially in the section called "LISP vs. DST", I worried that there was not enough focus on EXPLANATION. For the real question, it seems to me, is not whether some dynamical system can implement human cognition, but whether the dynamical description of the system is more explanatorily potent than a computational/representational one. Thus we know, for example, that a purely physical specification can fix a system capable of computing any LISP function. But from this it doesn't follow that the physical description is the one we need to understand the power of the system considered as an information processing device. In the same way, I don't think your demonstration that bifurcating attractor sets can yield the same behavior as a LISP program goes any way towards showing that we should not PREFER the LISP description. To reduce symbolic stories to a subset of DST (as hinted in that section) requires MORE than showing this kind of equivalence: it requires showing that there is explanatory gain, or at the very least, no explanatory loss, at that level. I append an extract from a recent paper of mine that touches on these issues, in case it helps clarify what I am after here
    Clark, Andy (1998). Time and mind. Journal of Philosophy 95 (7):354-76.   (Cited by 10 | Google | More links)
    Abstract: Mind, it has recently been argued, is a thoroughly temporal phenomenon: so temporal, indeed, as to defy description and analysis using the traditional computational tools of cognitive scientific understanding. The proper explanatory tools, so the suggestion goes, are instead the geometric constructs and differential equations of Dynamical Systems Theory. I consider various aspects of the putative temporal challenge to computational understanding, and show that the root problem turns on the presence of a certain kind of causal web: a web that involves multiple components (both inner and outer) linked by chains of continuous and reciprocal causal influence. There is, however, no compelling route from such facts about causal and temporal complexity to the radical anti-computationalist conclusion. This is because, interactive complexities notwithstanding, the computational approach provides a kind of explanatory understanding that cannot (I suggest) be recreated using the alternative resources of pure Dynamical Systems Theory. In particular, it provides a means of mapping information flow onto causal structure -- a mapping that is crucial to understanding the distinctive kinds of flexibility and control characteristic of truly mindful engagements with the world. Where we confront especially complex interactive causal webs, however, it does indeed become harder to isolate the syntactic vehicles required by the computational approach. Dynamical Systems Theory, I conclude, may play a vital role in recovering such vehicles from the burgeoning mass of real-time interactive complexity.
    Cruz, Joe (online). Psychological explanation and noise in modeling. Comments on Whit Schonbein's "cognition and the power of continuous dynamical systems".   (Google)
    Abstract: I find myself ambivalent with respect to the line of argument that Schonbein offers. I certainly want to acknowledge and emphasize at the outset that Schonbein’s discussion has brought to the fore a number of central, compelling and intriguing issues regarding the nature of the dynamical approach to cognition. Though there is much that seems right in this essay, perhaps my view is that the paper invites more questions than it answers. My remarks here then are in the spirit of scouting some of the surrounding terrain in order to see just what Schonbein’s claim is and what arguments or options may be open to the dynamicist
    Eiser, J. Richard (1994). Attitudes, Chaos, and the Connectionist Mind. Cambridge: Blackwell.   (Cited by 67 | Google)
    Eliasmith, Chris (2001). Attractive and in-discrete: A critique of two putative virtues of the dynamicist theory of mind. Minds And Machines 11 (3):417-426.   (Cited by 12 | Google | More links)
    Abstract:   I argue that dynamicism does not provide a convincing alternative to currently available cognitive theories. First, I show that the attractor dynamics of dynamicist models are inadequate for accounting for high-level cognition. Second, I argue that dynamicist arguments for the rejection of computation and representation are unsound in light of recent empirical findings. This new evidence provides a basis for questioning the importance of continuity to cognitive function, challenging a central commitment of dynamicism. Coupled with a defense of current connectionist theory, these two critiques lead to the conclusion that dynamicists have failed to achieve their goal of providing a new paradigm for understanding cognition
    Eliasmith, Chris (1997). Computation and dynamical models of mind. Minds and Machines 7 (4):531-41.   (Cited by 10 | Google | More links)
    Abstract:   Van Gelder (1995) has recently spearheaded a movement to challenge the dominance of connectionist and classicist models in cognitive science. The dynamical conception of cognition is van Gelder's replacement for the computation bound paradigms provided by connectionism and classicism. He relies on the Watt governor to fulfill the role of a dynamicist Turing machine and claims that the Motivational Oscillatory Theory (MOT) provides a sound empirical basis for dynamicism. In other words, the Watt governor is to be the theoretical exemplar of the class of systems necessary for cognition and MOT is an empirical instantiation of that class. However, I shall argue that neither the Watt governor nor MOT successfully fulfill these prescribed roles. This failure, along with van Gelder's peculiar use of the concept of computation and his struggle with representationalism, prevent him from providing a convincing alternative to current cognitive theories
    Eliasmith, Chris (1998). Dynamical models and Van gelder's dynamicism. Behavioral and Brain Sciences 21 (5):639-639.   (Cited by 1 | Google | More links)
    Abstract: Van Gelder has presented a position which he ties closely to a broad class of models known as dynamical models. While supporting many of his broader claims about the importance of this class (as has been argued by connectionists for quite some time), I note that there are a number of unique characteristics of his brand of dynamicism. I suggest that these characteristics engender difficulties for his view
    Eliasmith, Chris (2003). Moving beyond metaphors: Understanding the mind for what it is. Journal of Philosophy 100 (10):493-520.   (Cited by 21 | Google | More links)
    Eliasmith, Chris (1996). The third contender: A critical examination of the dynamicist theory of cognition. Philosophical Psychology 9 (4):441-63.   (Cited by 79 | Google | More links)
    Abstract: In a recent series of publications, dynamicist researchers have proposed a new conception of cognitive functioning. This conception is intended to replace the currently dominant theories of connectionism and symbolicism. The dynamicist approach to cognitive modeling employs concepts developed in the mathematical field of dynamical systems theory. They claim that cognitive models should be embedded, low-dimensional, complex, described by coupled differential equations, and non-representational. In this paper I begin with a short description of the dynamicist project and its role as a cognitive theory. Subsequently, I determine the theoretical commitments of dynamicists, critically examine those commitments and discuss current examples of dynamicist models. In conclusion, I determine dynamicism's relation to symbolicism and connectionism and find that the dynamicist goal to establish a new paradigm has yet to be realized
    Foss, Jeffrey E. (1992). Introduction to the epistemology of the brain: Indeterminacy, micro-specificity, chaos, and openness. Topoi 11 (1):45-57.   (Cited by 7 | Annotation | Google | More links)
    Abstract:   Given that the mind is the brain, as materialists insist, those who would understand the mind must understand the brain. Assuming that arrays of neural firing frequencies are highly salient aspects of brain information processing (the vector functional account), four hurdles to an understanding of the brain are identified and inspected: indeterminacy, micro-specificity, chaos, and openness
    Freeman, Walter J. (1997). Nonlinear neurodynamics of intentionality. Journal of Mind and Behavior 18 (2-3):291-304.   (Cited by 9 | Google | More links)
    French, Robert M. & Thomas, Elizabeth (2001). The dynamical hypothesis in cognitive science: A review essay of Mind As Motion. Minds and Machines 11 (1):101-111.   (Google | More links)
    French, Robert M. & Thomas, Elizabeth (1998). The dynamical hypothesis: One battle behind. Behavioral and Brain Sciences 21 (5):640-641.   (Cited by 4 | Google | More links)
    Abstract: What new implications does the dynamical hypothesis have for cognitive science? The short answer is: None. The Behavioral and Brain Sciences target article, “The dynamical hypothesis in cognitive science” by Tim Van Gelder, is basically an attack on traditional symbolic AI and differs very little from prior connectionist criticisms of it. For the past ten years, the connectionist community has been well aware of the necessity of using (and understanding) dynamically evolving, recurrent network models of cognition.
    Garson, James W. (1998). Chaotic emergence and the language of thought. Philosophical Psychology 11 (3):303-315.   (Cited by 7 | Google)
    Abstract: The purpose of this paper is to explore the merits of the idea that dynamical systems theory (also known as chaos theory) provides a model of the mind that can vindicate the language of thought (LOT). I investigate the nature of emergent structure in dynamical systems to assess its compatibility with causally efficacious syntactic structure in the brain. I will argue that anyone who is committed to the idea that the brain's functioning depends on emergent features of dynamical systems should have serious reservations about the LOT. First, dynamical systems theory casts doubt on one of the strongest motives for believing in the LOT: principle P, the doctrine that structure found in an effect must also be found in its cause. Second, chaotic emergence is a double-edged sword. Its tendency to cleave the psychological from the neurological undermines foundations for belief in the existence of causally efficacious representations. Overall, a dynamic conception of the brain sways us away from realist conclusions about the causal powers of representations with constituent structure
    Garson, James W. (1996). Cognition poised at the edge of chaos: A complex alternative to a symbolic mind. Philosophical Psychology 9 (3):301-22.   (Cited by 16 | Google)
    Abstract: This paper explores a line of argument against the classical paradigm in cognitive science that is based upon properties of non-linear dynamical systems, especially in their chaotic and near-chaotic behavior. Systems of this kind are capable of generating information-rich macro behavior that could be useful to cognition. I argue that a brain operating at the edge of chaos could generate high-complexity cognition in this way. If this hypothesis is correct, then the symbolic processing methodology in cognitive science faces serious obstacles. A symbolic description of the mind will be extremely difficult, and even if it is achieved to some approximation, there will still be reasons for rejecting the hypothesis that the brain is in fact a symbolic processor
    Garson, James W. (1997). Syntax in a dynamic brain. Synthese 110 (3):343-55.   (Cited by 8 | Annotation | Google | More links)
    Giunti, Marco (1996). Computers, Dynamical Systems, and the Mind. Oxford University Press.   (Google)
    Giunti, Marco (1995). Dynamic models of cognition. In T. van Gelder & Robert Port (eds.), Mind As Motion. MIT Press.   (Google)
    Globus, Gordon G. (1992). Toward a noncomputational cognitive science. Journal of Cognitive Neuroscience 4:299-310.   (Google)
    Haney, Mitchell R. (1999). Dynamical cognition, soft laws, and moral theorizing. Acta Analytica 22 (22):227-240.   (Cited by 5 | Google)
    Harcum, E. Rae (1991). Behavioral paradigm for a psychological resolution of the free will issue. Journal of Mind and Behavior 93:93-114.   (Cited by 1 | Google)
    Hooker, Cliff A. & Christensen, Wayne D. (1998). Towards a new science of the mind: Wide content and the metaphysics of organizational properties in nonlinear dynamic models. Mind and Language 13 (1):98-109.   (Cited by 5 | Google | More links)
    Horgan, Terence E. & Tienson, John L. (1994). A nonclassical framework for cognitive science. Synthese 101 (3):305-45.   (Cited by 12 | Google | More links)
    Abstract:   David Marr provided a useful framework for theorizing about cognition within classical, AI-style cognitive science, in terms of three levels of description: the levels of (i) cognitive function, (ii) algorithm and (iii) physical implementation. We generalize this framework: (i) cognitive state transitions, (ii) mathematical/functional design and (iii) physical implementation or realization. Specifying the middle, design level to be the theory of dynamical systems yields a nonclassical, alternative framework that suits (but is not committed to) connectionism. We consider how a brain's (or a network's) being a dynamical system might be the key both to its realizing various essential features of cognition — productivity, systematicity, structure-sensitive processing, syntax — and also to a non-classical solution of (frame-type) problems plaguing classical cognitive science
    Horgan, Terence E. & Tienson, John L. (1992). Cognitive systems as dynamic systems. Topoi 11 (1):27-43.   (Cited by 16 | Google | More links)
    Keijzer, Fred A. & Bem, Sacha (1996). Behavioral systems interpreted as autonomous agents and as coupled dynamical systems: A criticism. Philosophical Psychology 9 (3):323-46.   (Cited by 34 | Google)
    Abstract: Cognitive science's basic premises are under attack. In particular, its focus on internal cognitive processes is a target. Intelligence is increasingly interpreted, not as a matter of reclusive thought, but as successful agent-environment interaction. The critics claim that a major reorientation of the field is necessary. However, this will only occur when there is a distinct alternative conceptual framework to replace the old one. Whether or not a serious alternative is provided is not clear. Among the critics there is some consensus, however, that this role could be fulfilled by the concept of a 'behavioral system'. This integrates agent and environment into one encompassing general system. We will discuss two contexts in which the behavioral systems idea is being developed. Autonomous Agents Research is the enterprise of building behavior-based robots. Dynamical Systems Theory provides a mathematical framework well suited for describing the interactions between complex systems. We will conclude that both enterprises provide important contributions to the behavioral systems idea. But neither turns it into a full conceptual alternative which will initiate a major paradigm switch in cognitive science. The concept will need a lot of fleshing out before it can assume that role
    Mills, Stephen L. (1999). Noncomputable dynamical cognitivism: An eliminativist perspective. Acta Analytica 22 (22):151-168.   (Google)
    Morton, Adam (1988). The chaology of mind. Analysis 48 (June):135-142.   (Cited by 4 | Google)
    O’Brien, Gerard (1998). Digital computers versus dynamical systems: A conflation of distinctions. Behavioral and Brain Sciences 21:648-649.   (Google | More links)
    Abstract: The distinction at the heart of van Gelder’s target article is one between digital computers and dynamical systems. But this distinction conflates two more fundamental distinctions in cognitive science that should be kept apart. When this conflation is undone, it becomes apparent that the “computational hypothesis” (CH) is not as dominant in contemporary cognitive science as van Gelder contends; nor has the “dynamical hypothesis” (DH) been neglected
    Rietveld, Erik (2008). The Skillful Body as a Concernful System of Possible Actions: Phenomena and Neurodynamics. Theory & Psychology 18 (3):341-361.   (Google)
    Abstract: For Merleau-Ponty, consciousness in skillful coping is a matter of prereflective ‘I can’ and not explicit ‘I think that.’ The body unifies many domain-specific capacities. There exists a direct link between the perceived possibilities for action in the situation (‘affordances’) and the organism’s capacities. From Merleau-Ponty’s descriptions it is clear that in a flow of skillful actions, the leading ‘I can’ may change from moment to moment without explicit deliberation. How these transitions occur, however, is less clear. Given that Merleau-Ponty suggested that a better understanding of the self-organization of brain and behavior is important, I will re-read his descriptions of skillful coping in the light of recent ideas on neurodynamics. Affective processes play a crucial role in evaluating the motivational significance of objects and contribute to the individual’s prereflective responsiveness to relevant affordances.
    Robbins, Stephen E. (2002). Semantics, experience and time. Cognitive Systems Research 3 (3):301-337.   (Cited by 3 | Google | More links)
    Rockwell, Teed (2005). Attractor spaces as modules: A semi-eliminative reduction of symbolic AI to dynamic systems theory. Minds and Machines 15 (1):23-55.   (Google | More links)
    Abstract: I propose a semi-eliminative reduction of Fodor's concept of module to the concept of attractor basin which is used in Cognitive Dynamic Systems Theory (DST). I show how attractor basins perform the same explanatory function as modules in several DST-based research programs. Attractor basins in some organic dynamic systems have even been able to perform cognitive functions which are equivalent to the If/Then/Else loop in the computer language LISP. I suggest directions for future research programs which could find similar equivalencies between organic dynamic systems and other cognitive functions. This type of research could help us discover how (and/or if) it is possible to use Dynamic Systems Theory to more accurately model the cognitive functions that are now being modeled by subroutines in Symbolic AI computer models. If such a reduction of subroutines to basins of attraction is possible, it could free AI from the limitations that prompted Fodor to say that it was impossible to model certain higher level cognitive functions
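    Rockwell's If/Then/Else analogy can be made concrete with a toy bistable system. The sketch below is illustrative only and not drawn from the paper: the `settle` function, its equation, and its parameters are my own assumptions, chosen to show how the attractor basin an initial state falls into can play the role of a conditional branch.

```python
def settle(x0, dt=0.01, steps=5000):
    """Euler-integrate the bistable system dx/dt = x - x**3.

    The system has two attractors, x = +1 and x = -1; every initial
    state with x0 > 0 flows to +1, and every x0 < 0 flows to -1.
    """
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

# The basin reached behaves like an If/Then/Else branch on the
# sign of the input, with no explicit rule being consulted:
assert abs(settle(0.3) - 1.0) < 1e-3   # positive input -> +1 attractor
assert abs(settle(-0.7) + 1.0) < 1e-3  # negative input -> -1 attractor
```

    The point of the toy model is only that basin membership, a purely dynamical property, can do the discriminative work a symbolic conditional would do.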
    Rockwell, Teed (online). Reply to Clark and Van Gelder.   (Google | More links)
    Abstract: Clark ends his appendix with a description of what he calls "dynamic computationalism", which he describes as an interesting hybrid between DST and GOFAI. My 'horseLISP' example could be described as an example of dynamic computationalism. It is clearly not as eliminativist as Van Gelder's computational governor example, for I am trying to come up with something like identities between computational entities and dynamic ones. Thus unlike other dynamicists, I am not doing what Clark calls "embracing a different vocabulary for the understanding and analysis of brain events". I think we probably can keep much of the computational vocabulary, although the meanings of many of its terms will probably shift as much as the meaning of 'atom' has shifted since Dalton's time. The label of "dynamic computationalism" is perhaps as good a description of my position as any, but I think I would mean something slightly different by it than Clark would. (For the following, please insert the mantra "of course, this is an empirical question" (OCTEQ) every paragraph or so.)
    Rockwell, Teed (1998). The modularity of dynamic systems. Colloquia Manilana 6.   (Cited by 1 | Google | More links)
    Abstract: To some degree, Fodor's claim that cognitive science divides the mind into modules tells us more about the minds doing the studying than the mind being studied. The knowledge game is played by analyzing an object of study into parts, and then figuring out how those parts are related to each other. This is the method regardless of whether the object being studied is a mind or a solar system. If a module is just another name for a part, then to say that the mind consists of modules is simply to say that it is comprehensible. Fodor comes close to acknowledging this in the following passage
    Schonbein, W. (2005). Cognition and the power of continuous dynamical systems. Minds and Machines 15 (1):57-71.   (Google | More links)
    Abstract: Traditional approaches to modeling cognitive systems are computational, based on utilizing the standard tools and concepts of the theory of computation. More recently, a number of philosophers have argued that cognition is too subtle or complex for these tools to handle. These philosophers propose an alternative based on dynamical systems theory. Proponents of this view characterize dynamical systems as (i) utilizing continuous rather than discrete mathematics, and, as a result, (ii) being computationally more powerful than traditional computational automata. Indeed, the logical possibility of such super-powerful systems has been demonstrated in the form of analog artificial neural networks. In this paper I consider three arguments against the nomological possibility of these automata. While the first two arguments fail, the third succeeds. In particular, the presence of noise reduces the computational power of analog networks to that of traditional computational automata, and noise is a pervasive feature of information processing in biological systems. Consequently, as an empirical thesis, the proposed dynamical alternative is under-motivated: What is required is an account of how continuously valued systems could be realized in physical systems despite the ubiquity of noise
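    Schonbein's noise argument admits a simple back-of-the-envelope illustration. The functions below are hypothetical and not from the paper: they assume additive Gaussian noise and a rule of thumb that reliably usable levels must sit several noise widths apart, which is enough to show why a continuously valued state variable carries only finitely many distinguishable values.

```python
import random

def noisy_read(x, sigma=0.05):
    """Read an 'analog' state value corrupted by additive Gaussian noise."""
    return x + random.gauss(0.0, sigma)

def distinguishable_levels(sigma, lo=0.0, hi=1.0):
    """Rough count of reliably distinguishable levels on [lo, hi].

    Levels spaced ~4 sigma apart keep misreads rare; finer spacing
    would let noise push one level's reading into its neighbor's.
    """
    spacing = 4 * sigma
    return max(1, int((hi - lo) / spacing) + 1)
```

    On this crude model, any fixed noise floor caps the usable state set at a finite size, which is the intuition behind the claim that noise collapses analog networks to ordinary automaton power.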
    Sloman, Aaron (1993). The mind as a control system. In Christopher Hookway & Donald M. Peterson (eds.), Philosophy and Cognitive Science. Cambridge University Press.   (Cited by 66 | Google | More links)
    Stark, Herman E. (1999). What the dynamical cognitive scientist said to the epistemologist. Acta Analytica 22 (22):241-260.   (Google)
    Symons, John (2001). Explanation, representation and the dynamical hypothesis. Minds and Machines 11 (4):521-541.   (Cited by 1 | Google | More links)
    Abstract:   This paper challenges arguments that systematic patterns of intelligent behavior license the claim that representations must play a role in the cognitive system analogous to that played by syntactical structures in a computer program. In place of traditional computational models, I argue that research inspired by Dynamical Systems theory can support an alternative view of representations. My suggestion is that we treat linguistic and representational structures as providing complex multi-dimensional targets for the development of individual brains. This approach acknowledges the indispensability of the intentional or representational idiom in psychological explanation without locating representations in the brains of intelligent agents
    Treur, Jan (2005). States of change: Explaining dynamics by anticipatory state properties. Philosophical Psychology 18 (4):441-471.   (Cited by 4 | Google | More links)
    Abstract: In cognitive science, the dynamical systems theory (DST) has recently been advocated as an approach to cognitive modeling that is better suited to the dynamics of cognitive processes than the symbolic/computational approaches are. Often, the differences between DST and the symbolic/computational approach are emphasized. However, alternatively their commonalities can be analyzed and a unifying framework can be sought. In this paper, the possibility of such a unifying perspective on dynamics is analyzed. The analysis covers dynamics in cognitive disciplines, as well as in physics, mathematics and computer science. The unifying perspective warrants the development of integrated approaches covering both DST aspects and symbolic/computational aspects. The concept of a state-determined system, which is based on the assumption that properties of a given state fully determine the properties of future states, lies at the heart of DST. Taking this assumption as a premise, the explanatory problem of dynamics is analyzed in more detail. The analysis of four cases within different disciplines (cognitive science, physics, mathematics, computer science) shows how in history this perspective led to numerous often used concepts within them. In cognitive science, the concepts desire and intention were introduced, and in classical mechanics the concepts momentum, energy and force. Similarly, in mathematics a number of concepts have been developed to formalize the state-determined system assumption [e.g. derivatives (of different orders) of a function, Taylor approximations]. Furthermore, transition systems - a currently popular format for specification of dynamical systems within computer science - can also be interpreted from this perspective. One of the main contributions of the paper is that the case studies provide a unified view on the explanation of dynamics across the chosen disciplines. 
All approaches to dynamics analyzed in this paper share the state-determined system assumption and the (explicit or implicit) use of anticipatory state properties. Within cognitive science, realism is one of the problems identified for the symbolic/computational approach - i.e. how do internal states described by symbols relate to the real world in a natural manner. As DST is proposed as an alternative to the symbolic/computational approach, a natural question is whether, for DST, realism of the states can be better guaranteed. As a second main contribution, the paper provides an evaluation of DST compared to the symbolic/computational approach, which shows that, in this respect (i.e. for the realism problem), DST does not provide a better solution than the other approaches. This shows that DST and the symbolic/computational approach not only have the state-determined system assumption and the use of anticipatory state properties in common, but also the realism problem
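    Treur's observation that transition systems embody the state-determined-system assumption can be sketched minimally. The three-state system below is my own toy example, not taken from the paper; it shows the defining property that the next state is a function of the current state alone.

```python
# A deterministic transition system: the successor of each state is
# fixed by that state alone (the "state-determined" assumption).
transitions = {"idle": "working", "working": "done", "done": "done"}

def run(state, steps):
    """Unfold the trajectory from an initial state for `steps` transitions."""
    history = [state]
    for _ in range(steps):
        state = transitions[state]
        history.append(state)
    return history

# The whole future trajectory is determined by the initial state:
# run("idle", 3) yields ["idle", "working", "done", "done"].
```

    Properties of the current state that fix the next state ("working" leads to "done") play the anticipatory role Treur assigns to concepts such as desire, force, or a derivative.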
    van Gelder, Tim (1997). Connectionism, dynamics, and the philosophy of mind. In Martin Carrier & Peter K. Machamer (eds.), Mindscapes: Philosophy, Science, and the Mind. Pittsburgh University Press.   (Cited by 3 | Google)
    van Gelder, Tim (1998). Disentangling dynamics, computation, and cognition. Behavioral and Brain Sciences 21 (5):654-661.   (Cited by 4 | Google | More links)
    Abstract: The nature of the dynamical hypothesis in cognitive science (the DH) is further clarified in responding to various criticisms and objections raised in commentaries. Major topics addressed include the definitions of “dynamical system” and “digital computer;” the DH as Law of Qualitative Structure; the DH as an ontological claim; the multiple-realizability of dynamical models; the level at which the DH is pitched; the nature of dynamics; the role of representations in dynamical cognitive science; the falsifiability of the DH; the extent to which the DH is open; the role of temporal and implementation considerations; and the novelty or importance of the DH. The basic formulation and defense of the DH in the target article survives intact, though some refinements are recommended
    van Gelder, Tim (1999). Defending the dynamic hypothesis. In Wolfgang Tschacher & J-P Dauwalder (eds.), Dynamics, Synergetics, Autonomous Agents: Nonlinear Systems Approaches to Cognitive Psychology and Cognitive Science. Singapore: World Scientific.   (Cited by 16 | Google | More links)
    Abstract: Cognitive science has always been dominated by the idea that cognition is computational in a rather strong and clear sense. Within the mainstream approach, cognitive agents are taken to be what are variously known as physical symbol systems, digital computers, syntactic engines, or symbol manipulators. Cognitive operations are taken to consist in the shuffling of symbol tokens according to strict rules (programs). Models of cognition are themselves digital computers, implemented on general purpose electronic machines. The basic mathematical framework for understanding cognition is the theory of discrete computation, and the core theoretical tools for developing and understanding models of cognition are those of computer science
    van Gelder, Tim & Port, Robert (eds.) (1995). Mind As Motion: Explorations in the Dynamics of Cognition. MIT Press.   (Cited by 30 | Google)
    Van Leeuwen, Marco (2005). Questions for the dynamicist: The use of dynamical systems theory in the philosophy of cognition. Minds and Machines 15 (3-4):271-333.   (Google)
    Abstract: The concepts and powerful mathematical tools of Dynamical Systems Theory (DST) yield illuminating methods of studying cognitive processes, and are even claimed by some to enable us to bridge the notorious explanatory gap separating mind and matter. This article includes an analysis of some of the conceptual and empirical progress Dynamical Systems Theory is claimed to accommodate. While sympathetic to the dynamicist program in principle, this article will attempt to formulate a series of problems the proponents of the approach in question will need to face if they wish to prolong their optimism. The main points to be addressed involve the reductive tendencies inherent in Dynamical Systems Theory, its somewhat muddled position relative to connectionism, the metaphorical nature DST-C exhibits, which hinders its explanatory potential, and DST-C's problematic account of causality. Brief discussions of the mathematical and philosophical backgrounds of DST, seminal experimental work and possible adaptations of the theory or alternative suggestions (dynamicist connectionism, neurophenomenology, R&D theory) are included
    van Gelder, Tim (1999). Revisiting the dynamic hypothesis. Preprint 2.   (Cited by 11 | Google | More links)
    Abstract: “There is a familiar trio of reactions by scientists to a purportedly radical hypothesis: (a) “You must be out of your mind!”, (b) “What else is new? Everybody knows that!”, and, later—if the hypothesis is still standing—(c) “Hmm. You might be on to something!” (Dennett, 1995, p. 283)
    van Gelder, Tim (1998). The dynamical hypothesis in cognitive science. Behavioral and Brain Sciences 21 (5):615-28.   (Cited by 307 | Google | More links)
    Abstract: The dynamical hypothesis is the claim that cognitive agents are dynamical systems. It stands opposed to the dominant computational hypothesis, the claim that cognitive agents are digital computers. This target article articulates the dynamical hypothesis and defends it as an open empirical alternative to the computational hypothesis. Carrying out these objectives requires extensive clarification of the conceptual terrain, with particular focus on the relation of dynamical systems to computers
    van Gelder, Tim (1995). What might cognition be if not computation? Journal of Philosophy 92 (7):345-81.   (Cited by 266 | Annotation | Google | More links)
    Weiskopf, Daniel A. (2004). The place of time in cognition. British Journal for the Philosophy of Science 55 (1):87-105.   (Cited by 2 | Google | More links)
    Abstract: Dynamicists have argued that symbolic models of cognition are essentially incomplete because they fail to capture the temporal properties of mental processing. I present two possible interpretations of the dynamicists' argument from time and show that neither one is successful. The disagreement between dynamicists and symbolic theorists rests not on temporal considerations per se, but on differences over the multiple realizability of cognitive states and the proper explanatory goals of psychology. The negative arguments of dynamicists against symbolic models fail, and it is doubtful whether pursuing dynamicists' explanatory goals will lead to a robust psychological theory. Introduction Elements of the symbolic theory Elements of dynamical systems theory The argument from time 4.1 First interpretation of the argument from time 4.2 Second interpretation of the argument from time Limits of dynamical systems theory
    Werning, Markus (2005). The temporal dimension of thought: Cortical foundations of predicative representation. Synthese 146 (1-2):203-224.   (Cited by 6 | Google | More links)
    Abstract: The paper argues that cognitive states of biological systems are inherently temporal. Three adequacy conditions for neuronal models of representation are vindicated: the compositionality of meaning, the compositionality of content, and the co-variation with content. Classicist and connectionist approaches are discussed and rejected. Based on recent neurobiological data, oscillatory networks are introduced as a third alternative. A mathematical description in a Hilbert space framework is developed. The states of this structure can be regarded as conceptual representations satisfying the three conditions
    Yoshimi, Jeffrey (2009). Husserl's theory of belief and the Heideggerean critique. Husserl Studies 25 (2).   (Google)
    Abstract: I develop a “two-systems” interpretation of Husserl’s theory of belief. On this interpretation, Husserl accounts for our sense of the world in terms of (1) a system of embodied horizon meanings and passive synthesis, which is involved in any experience of an object, and (2) a system of active synthesis and sedimentation, which comes on line when we attend to an object’s properties. I use this account to defend Husserl against several forms of Heideggerean critique. One line of critique, recently elaborated by Taylor Carman, says that Husserl wrongly loads everyday perception with explicit beliefs about things. A second, earlier line of critique, due to Hubert Dreyfus, charges Husserl with thinking of belief on a problematic Artificial Intelligence (AI) model which involves explicit rules applied to discrete symbol structures. I argue that these criticisms are based on a conflation of Husserl’s two systems of belief. The conception of Husserlian phenomenology which emerges is compatible with Heideggerean phenomenology and associated approaches to cognitive science (in particular, dynamical systems theory)
    Yoshimi, Jeffrey (2007). Mathematizing phenomenology. Phenomenology and the Cognitive Sciences 6 (3).   (Google | More links)
    Abstract: Husserl is well known for his critique of the “mathematizing tendencies” of modern science, and is particularly emphatic that mathematics and phenomenology are distinct and in some sense incompatible. But Husserl himself uses mathematical methods in phenomenology. In the first half of the paper I give a detailed analysis of this tension, showing how those Husserlian doctrines which seem to speak against application of mathematics to phenomenology do not in fact do so. In the second half of the paper I focus on a particular example of Husserl’s “mathematized phenomenology”: his use of concepts from what is today called dynamical systems theory

    6.4e The Nature of AI

    Buchanan, Bruce G. (1988). AI as an experimental science. In James H. Fetzer (ed.), Aspects of AI. Kluwer.   (Google)
    Bundy, A. (1990). What kind of field is AI? In Derek Partridge & Y. Wilks (eds.), The Foundations of Artificial Intelligence: A Sourcebook. Cambridge University Press.   (Cited by 6 | Google)
    Dennett, Daniel C. (1978). AI as philosophy and as psychology. In Martin Ringle (ed.), Philosophical Perspectives on Artificial Intelligence. Humanities Press.   (Annotation | Google)
    Glymour, C. (1988). AI is philosophy. In James H. Fetzer (ed.), Aspects of AI. Kluwer.   (Cited by 1 | Google)
    Harre, Rom (1990). Vigotsky and artificial intelligence: What could cognitive psychology possibly be about? Midwest Studies in Philosophy 15:389-399.   (Google)
    Kukla, André (1989). Is AI an empirical science? Analysis 49 (March):56-60.   (Cited by 4 | Annotation | Google)
    Kukla, André (1994). Medium AI and experimental science. Philosophical Psychology 7 (4):493-501.   (Cited by 4 | Annotation | Google)
    Abstract: It has been claimed that a great deal of AI research is an attempt to discover the empirical laws describing a new type of entity in the world—the artificial computing system. I call this enterprise 'medium AI', since it is in some respects stronger than Searle's 'weak AI', and in other respects weaker than 'strong AI'. Bruce Buchanan, among others, conceives of medium AI as an empirical science entirely on a par with psychology or chemistry. I argue that medium AI is not an empirical science at all. Depending on how artificial computing systems are categorized, it is either an a priori science like mathematics, or a branch of engineering
    McCarthy, John (online). What is artificial intelligence?   (Cited by 38 | Google | More links)
    Minsky, Marvin L. (online). From pain to suffering.   (Google)
    Abstract: “Great pain urges all animals, and has urged them during endless generations, to make the most violent and diversified efforts to escape from the cause of suffering. Even when a limb or other separate part of the body is hurt, we often see a tendency to shake it, as if to shake off the cause, though this may obviously be impossible.” —Charles Darwin[1]
    Nakashima, H. (1999). AI as complex information processing. Minds and Machines 9 (1):57-80.   (Cited by 2 | Google | More links)
    Abstract:   In this article, I present a software architecture for intelligent agents. The essence of AI is complex information processing. It is impossible, in principle, to process complex information as a whole. We need some partial processing strategy that is still somehow connected to the whole. We also need flexible processing that can adapt to changes in the environment. One of the candidates for both of these is situated reasoning, which makes use of the fact that an agent is in a situation, so it only processes some of the information – the part that is relevant to that situation. The combination of situated reasoning and context reflection leads to the idea of organic programming, which introduces a new building block of programs called a cell. Cells contain situated programs and the combination of cells is controlled by those programs
    Sloman, Aaron (2002). The irrelevance of Turing machines to AI. In Matthias Scheutz (ed.), Computationalism: New Directions. MIT Press.   (Cited by 9 | Google | More links)
    Sufka, Kenneth J. & Polger, Thomas W. (2005). Closing the gap on pain. In Murat Aydede (ed.), Pain: New Essays on its Nature and the Methodology of its Study. MIT Press.   (Google | More links)
    Abstract: A widely accepted theory holds that emotional experiences occur mainly in a part of the human brain called the amygdala. A different theory asserts that color sensation is located in a small subpart of the visual cortex called V4. If these theories are correct, or even approximately correct, then they are remarkable advances toward a scientific explanation of human conscious experience. Yet even understanding the claims of such theories—much less evaluating them—raises some puzzles. Conscious experience does not present itself as a brain process. Indeed experience seems entirely unlike neural activity. For example, to some people it seems that an exact physical duplicate of you could have different sensations than you do, or could have no sensations at all. If so, then how is it even possible that sensations could turn out to be brain processes?
    Yudkowsky, Eliezer (online). General intelligence and seed AI.   (Google)

    6.4f The Frame Problem

    Anselme, Patrick & French, Robert M. (1999). Interactively converging on context-sensitive representations: A solution to the frame problem. Revue Internationale de Philosophie 53 (209):365-385.   (Google)
    Abstract: While we agree that the frame problem, as initially stated by McCarthy and Hayes (1969), is a problem that arises because of the use of representations, we do not accept the anti-representationalist position that the way around the problem is to eliminate representations. We believe that internal representations of the external world are a necessary, perhaps even a defining feature, of higher cognition. We explore the notion of dynamically created context-dependent representations that emerge from a continual interaction between working memory, external input, and long-term memory. We claim that only this kind of representation, necessary for higher cognitive abilities such as counterfactualization, will allow the combinatorial explosion inherent in the frame problem to be avoided
    Clark, Andy (2002). Global abductive inference and authoritative sources, or, how search engines can save cognitive science. Cognitive Science Quarterly 2 (2):115-140.   (Cited by 2 | Google | More links)
    Abstract: Kleinberg (1999) describes a novel procedure for efficient search in a dense hyper-linked environment, such as the world wide web. The procedure exploits information implicit in the links between pages so as to identify patterns of connectivity indicative of “authoritative sources”. At a more general level, the trick is to use this second-order link-structure information to rapidly and cheaply identify the knowledge-structures most likely to be relevant given a specific input. I shall argue that Kleinberg’s procedure is suggestive of a new, viable, and neuroscientifically plausible solution to at least (one incarnation of) the so-called “Frame Problem” in cognitive science, viz. the problem of explaining global abductive inference. More accurately, I shall argue that
    Crockett, L. (1994). The Turing Test and the Frame Problem: AI's Mistaken Understanding of Intelligence. Ablex.   (Cited by 19 | Google)
    Abstract: I have discussed the frame problem and the Turing test at length, but I have not attempted to spell out what I think the implications of the frame problem ...
    Dennett, Daniel C. (1984). Cognitive wheels: The frame problem of AI. In C. Hookway (ed.), Minds, Machines and Evolution. Cambridge University Press.   (Cited by 139 | Annotation | Google)
    Dreyfus, Hubert L. & Dreyfus, Stuart E. (1987). How to stop worrying about the frame problem even though it's computationally insoluble. In Zenon W. Pylyshyn (ed.), The Robot's Dilemma. Ablex.   (Annotation | Google)
    Fetzer, James H. (1990). The frame problem: Artificial intelligence meets David Hume. International Journal of Expert Systems 3:219-232.   (Cited by 13 | Google | More links)
    Fodor, Jerry A. (1987). Modules, frames, fridgeons, sleeping dogs, and the music of the spheres. In Zenon W. Pylyshyn (ed.), The Robot's Dilemma. Ablex.   (Cited by 56 | Google)
    Fodor, Jerry A. (1989). Modules, frames, fridgeons, sleeping dogs. In Modularity in Knowledge Representation and Natural-Language Understanding. Cambridge: MIT Press.   (Google)
    Haselager, W. F. G. & Van Rappard, J. F. H. (1998). Connectionism, systematicity, and the frame problem. Minds and Machines 8 (2):161-179.   (Cited by 11 | Google | More links)
    Abstract:   This paper investigates connectionism's potential to solve the frame problem. The frame problem arises in the context of modelling the human ability to see the relevant consequences of events in a situation. It has been claimed to be unsolvable for classical cognitive science, but easily manageable for connectionism. We will focus on a representational approach to the frame problem which advocates the use of intrinsic representations. We argue that although connectionism's distributed representations may look promising from this perspective, doubts can be raised about the potential of distributed representations to allow large amounts of complexly structured information to be adequately encoded and processed. It is questionable whether connectionist models that are claimed to effectively represent structured information can be scaled up to a realistic extent. We conclude that the frame problem provides a difficulty to connectionism that is no less serious than the obstacle it constitutes for classical cognitive science
    Haugeland, John (1987). An overview of the frame problem. In Zenon W. Pylyshyn (ed.), The Robot's Dilemma. Ablex.   (Cited by 17 | Annotation | Google)
    Hayes, Patrick (1987). What the frame problem is and isn't. In Zenon W. Pylyshyn (ed.), The Robot's Dilemma. Ablex.   (Cited by 25 | Annotation | Google)
    Hendricks, Scott (2006). The frame problem and theories of belief. Philosophical Studies 129 (2):317-33.   (Google | More links)
    Abstract: The frame problem is the problem of how we selectively apply relevant knowledge to particular situations in order to generate practical solutions. Some philosophers have thought that the frame problem can be used to rule out, or argue in favor of, a particular theory of belief states. But this is a mistake. Sentential theories of belief are no better or worse off with respect to the frame problem than are alternative theories of belief, most notably, the “map” theory of belief
    Horgan, Terry & Timmons, Mark (2009). What does the frame problem tell us about moral normativity? Ethical Theory and Moral Practice 12 (1).   (Google)
    Abstract: Within cognitive science, mental processing is often construed as computation over mental representations—i.e., as the manipulation and transformation of mental representations in accordance with rules of the kind expressible in the form of a computer program. This foundational approach has encountered a long-standing, persistently recalcitrant, problem often called the frame problem; it is sometimes called the relevance problem. In this paper we describe the frame problem and certain of its apparent morals concerning human cognition, and we argue that these morals have significant import regarding both the nature of moral normativity and the human capacity for mastering moral normativity. The morals of the frame problem bode well, we argue, for the claim that moral normativity is not fully systematizable by exceptionless general principles, and for the correlative claim that such systematizability is not required in order for humans to master moral normativity
    Janlert, Lars-Erik (1987). Modeling change: The frame problem. In Zenon W. Pylyshyn (ed.), The Robot's Dilemma. Ablex.   (Cited by 23 | Google)
    Korb, Kevin B. (1998). The frame problem: An AI fairy tale. Minds and Machines 8 (3):317-351.   (Cited by 1 | Google | More links)
    Abstract:   I analyze the frame problem and its relation to other epistemological problems for artificial intelligence, such as the problem of induction, the qualification problem and the "general" AI problem. I dispute the claim that extensions to logic (default logic and circumscriptive logic) will ever offer a viable way out of the problem. In the discussion it will become clear that the original frame problem is really a fairy tale: as originally presented, and as tools for its solution are circumscribed by Pat Hayes, the problem is entertaining, but incapable of resolution. The solution to the frame problem becomes available, and even apparent, when we remove artificial restrictions on its treatment and understand the interrelation between the frame problem and the many other problems for artificial epistemology. I present the solution to the frame problem: an adequate theory and method for the machine induction of causal structure. Whereas this solution is clearly satisfactory in principle, and in practice real progress has been made in recent years in its application, its ultimate implementation is in prospect only for future generations of AI researchers
    Lormand, Eric (1990). Framing the frame problem. Synthese 82 (3):353-74.   (Cited by 9 | Annotation | Google | More links)
    Abstract:   The frame problem is widely reputed among philosophers to be one of the deepest and most difficult problems of cognitive science. This paper discusses three recent attempts to display this problem: Dennett's problem of ignoring obviously irrelevant knowledge, Haugeland's problem of efficiently keeping track of salient side effects, and Fodor's problem of avoiding the use of kooky concepts. In a negative vein, it is argued that these problems bear nothing but a superficial similarity to the frame problem of AI, so that they do not provide reasons to disparage standard attempts to solve it. More positively, it is argued that these problems are easily solved by slight variations on familiar AI themes. Finally, some discussion is devoted to more difficult problems confronting AI
    Lormand, Eric (1998). The frame problem. In Robert A. Wilson & Frank F. Keil (eds.), MIT Encyclopedia of the Cognitive Sciences (MITECS). MIT Press.   (Google)
    Abstract: From its humble origins labeling a technical annoyance for a particular AI formalism, the term "frame problem" has grown to cover issues confronting broader research programs in AI. In philosophy, the term has come to encompass allegedly fundamental, but merely superficially related, objections to computational models of mind in AI and beyond
    Lormand, Eric (1994). The holorobophobe's dilemma. In Kenneth M. Ford & Z. Pylyshyn (eds.), The Robot's Dilemma Revisited. Ablex.   (Cited by 2 | Google | More links)
    Abstract: Much research in AI (and cognitive science, more broadly) proceeds on the assumption that there is a difference between being well-informed and being smart. Being well-informed has to do, roughly, with the content of one’s representations--with their truth and the range of subjects they cover. Being smart, on the other hand, has to do with one’s ability to process these representations and with packaging them in a form that allows them to be processed efficiently. The main theoretical concern of artificial intelligence research is to solve "process-and-form" problems: problems with finding processes and representational formats that enable us to understand how a computer could be smart
    Maloney, J. Christopher (1988). In praise of narrow minds. In James H. Fetzer (ed.), Aspects of AI. D.   (Google)
    McCarthy, John & Hayes, Patrick (1969). Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer & Donald Michie (eds.), Machine Intelligence 4. Edinburgh University Press.   (Cited by 1919 | Google | More links)
    McDermott, Drew (1987). We've been framed: Or, why AI is innocent of the frame problem. In Zenon W. Pylyshyn (ed.), The Robot's Dilemma. Ablex.   (Cited by 15 | Annotation | Google)
    Murphy, Dominic (2001). Folk psychology meets the frame problem. Studies in History and Philosophy of Modern Physics 32 (3):565-573.   (Google)
    Pollock, John L. (1997). Reasoning about change and persistence: A solution to the frame problem. Noûs 31 (2):143-169.   (Cited by 4 | Google | More links)
    Pylyshyn, Zenon (1996). The frame problem blues. Once more, with feeling. In K. M. Ford & Z. W. Pylyshyn (eds.), The Robot's Dilemma Revisited: The Frame Problem in Artificial Intelligence. Ablex.   (Cited by 2 | Google | More links)
    Abstract: For many of the authors in this volume, this is the second attempt to explore what McCarthy and Hayes (1969) first called the “Frame Problem”. Since the first compendium (Pylyshyn, 1987), nicely summarized here by Ronald Loui, there have been several conferences and books on the topic. Their goals range from providing a clarification of the problem by breaking it down into subproblems (and sometimes declaring the hard subproblems to not be the _real_ Frame Problem), to providing formal “solutions” to certain aspects of the problem. But more often the message has been that the problem is not solvable except in a piecemeal way in special circumstances by some sort of heuristic approximations. It has sometimes also been said that solving the Frame Problem is not only an unachievable goal, but it is also an unnecessary one since _humans_ do not solve it either; we simply get along as best we can and deal with the problem of planning in ways that, to use Dennett’s phrase, are “good enough for government work”
    Pylyshyn, Zenon W. (ed.) (1987). The Robot's Dilemma. Ablex.   (Cited by 148 | Annotation | Google | More links)
    Shanahan, Murray & Baars, Bernard J. (2005). Applying global workspace theory to the frame problem. Cognition 98 (2):157-176.   (Cited by 28 | Google | More links)
    Shanahan, Murray (online). The frame problem. Stanford Encyclopedia of Philosophy.   (Google)
    Sperber, Dan & Wilson, Deirdre (1996). Fodor's frame problem and relevance theory (reply to Chiappe & Kukla).   (Google | More links)
    Abstract: Chiappe and Kukla argue that relevance theory fails to solve the frame problem as defined by Fodor. They are right. They are wrong, however, to take Fodor’s frame problem too seriously. Fodor’s concerns, on the other hand, even though they are wrongly framed, are worth addressing. We argue that relevance theory helps address them
    Sprevak, Mark, The frame problem and the treatment of prediction.   (Google)
    Abstract: The frame problem is a problem in artificial intelligence that a number of philosophers have claimed has philosophical relevance. The structure of this paper is as follows: (1) An account of the frame problem is given; (2) The frame problem is distinguished from related problems; (3) The main strategies for dealing with the frame problem are outlined; (4) A difference between commonsense reasoning and prediction using a scientific theory is argued for; (5) Some implications for the..
    Waskan, Jonathan A. (2000). A virtual solution to the frame problem. Proceedings of the First IEEE-RAS International Conference on Humanoid Robots.   (Cited by 1 | Google)
    Abstract: We humans often respond effectively when faced with novel circumstances. This is because we are able to predict how particular alterations to the world will play out. Philosophers, psychologists, and computational modelers have long favored an account of this process that takes its inspiration from the truth-preserving powers of formal deduction techniques. There is, however, an alternative hypothesis that is better able to account for the human capacity to predict the consequences of worldly alterations. This alternative takes its inspiration from the powers of truth preservation exhibited by scale models and leads to a determinate computational solution to the frame problem
    Wheeler, Michael (2008). Cognition in context: Phenomenology, situated robotics and the frame problem. International Journal of Philosophical Studies 16 (3):323 – 349.   (Google | More links)
    Abstract: The frame problem is the difficulty of explaining how non-magical systems think and act in ways that are adaptively sensitive to context-dependent relevance. Influenced centrally by Heideggerian phenomenology, Hubert Dreyfus has argued that the frame problem is, in part, a consequence of the assumption (made by mainstream cognitive science and artificial intelligence) that intelligent behaviour is representation-guided behaviour. Dreyfus' Heideggerian analysis suggests that the frame problem dissolves if we reject representationalism about intelligence and recognize that human agents realize the property of thrownness (the property of being always already embedded in a context). I argue that this positive proposal is incomplete until we understand exactly how the properties in question may be instantiated in machines like us. So, working within a broadly Heideggerian conceptual framework, I pursue the character of a representation-shunning thrown machine. As part of this analysis, I suggest that the frame problem is, in truth, a two-headed beast. The intra-context frame problem challenges us to say how a purely mechanistic system may achieve appropriate, flexible and fluid action within a context. The inter-context frame problem challenges us to say how a purely mechanistic system may achieve appropriate, flexible and fluid action in worlds in which adaptation to new contexts is open-ended and in which the number of potential contexts is indeterminate. Drawing on the field of situated robotics, I suggest that the intra-context frame problem may be neutralized by systems of special-purpose adaptive couplings, while the inter-context frame problem may be neutralized by systems that exhibit the phenomenon of continuous reciprocal causation. 
I also defend the view that while continuous reciprocal causation is in conflict with representational explanation, special-purpose adaptive coupling, as well as its associated agential phenomenology, may feature representations. My proposal has been criticized recently by Dreyfus, who accuses me of propagating a cognitivist misreading of Heidegger, one that, because it maintains a role for representation, leads me seriously astray in my handling of the frame problem. I close by responding to Dreyfus' concerns
    Wilkerson, William S. (2001). Simulation, theory, and the frame problem: The interpretive moment. Philosophical Psychology 14 (2):141-153.   (Cited by 5 | Google | More links)
    Abstract: The theory-theory claims that the explanation and prediction of behavior works via the application of a theory, while the simulation theory claims that explanation works by putting ourselves in others' places and noting what we would do. On either account, in order to develop a prediction or explanation of another person's behavior, one first needs to have a characterization of that person's current or recent actions. Simulation requires that I have some grasp of the other person's behavior to project myself upon; whereas theorizing requires a subject matter to theorize about. The frame problem shows that multiple, true characterizations are possible for any behavior or situation. However, only one or a few of these characterizations are relevant to explaining or predicting behavior. Since different characterizations of a behavior lead to different predictions or explanations, much of the work of interpersonal interpretation is done in the process of finding this characterization - that is, prior to either theorizing or simulating. Moreover, finding this characterization involves extensive knowledge of the physical, cultural, and social worlds of the persons involved

    6.4g AI Methodology

    Bickhard, Mark H. (2000). Motivation and Emotion: An Interactive Process Model. In Ralph D. Ellis & Natika Newton (eds.), The Caldron of Consciousness: Motivation, Affect and Self-Organization. John Benjamins.   (Cited by 19 | Google | More links)
    Abstract: In this chapter, I outline dynamic models of motivation and emotion. These turn out not to be autonomous subsystems, but, instead, are deeply integrated in the basic interactive dynamic character of living systems. Motivation is a crucial aspect of particular kinds of interactive systems -- systems for which representation is a sister aspect. Emotion is a special kind of partially reflective interaction process, and yields its own emergent motivational aspects. In addition, the overall model accounts for some of the crucial properties of consciousness
    Birnbaum, L. (1991). Rigor mortis: A response to Nilsson's 'logic and artificial intelligence'. Artificial Intelligence 47:57-78.   (Cited by 106 | Google | More links)
    Chalmers, David J.; French, Robert M. & Hofstadter, Douglas R. (1992). High-level perception, representation, and analogy: A critique of artificial intelligence methodology. Journal of Experimental and Theoretical Artificial Intelligence 4 (3):185-211.   (Cited by 123 | Google | More links)
    Abstract: High-level perception, the process of making sense of complex data at an abstract, conceptual level, is fundamental to human cognition. Through high-level perception, chaotic environmental stimuli are organized into the mental representations that are used throughout cognitive processing. Much work in traditional artificial intelligence has ignored the process of high-level perception, by starting with hand-coded representations. In this paper, we argue that this dismissal of perceptual processes leads to distorted models of human cognition. We examine some existing artificial-intelligence models, notably BACON, a model of scientific discovery, and the Structure-Mapping Engine, a model of analogical thought, and argue that these are flawed precisely because they downplay the role of high-level perception. Further, we argue that perceptual processes cannot be separated from other cognitive processes even in principle, and therefore that traditional artificial-intelligence models cannot be defended by supposing the existence of a “representation module” that supplies representations ready-made. Finally, we describe a model of high-level perception and analogical thought in which perceptual processing is integrated with analogical mapping, leading to the flexible build-up of representations appropriate to a given context
    Clark, Andy (1986). A biological metaphor. Mind and Language 1:45-64.   (Cited by 6 | Annotation | Google | More links)
    Clark, Andy (1987). The kludge in the machine. Mind and Language 2:277-300.   (Cited by 12 | Google | More links)
    Colombetti, Giovanna (2007). Enactive appraisal. Phenomenology and the Cognitive Sciences.   (Cited by 4 | Google | More links)
    Abstract: Emotion theorists tend to separate “arousal” and other bodily events such as “actions” from the evaluative component of emotion known as “appraisal.” This separation, I argue, implies phenomenologically implausible accounts of emotion elicitation and personhood. As an alternative, I attempt a reconceptualization of the notion of appraisal within the so-called “enactive approach.” I argue that appraisal is constituted by arousal and action, and I show how this view relates to an embodied and affective notion of personhood
    Colombetti, Giovanna & Thompson, Evan (forthcoming). The feeling body: Towards an enactive approach to emotion. In W. F. Overton, U. Mueller & J. Newman (eds.), Body in Mind, Mind in Body: Developmental Perspectives on Embodiment and Consciousness. Erlbaum.   (Cited by 3 | Google | More links)
    Abstract: For many years emotion theory has been characterized by a dichotomy between the head and the body. In the golden years of cognitivism, during the nineteen-sixties and seventies, emotion theory focused on the cognitive antecedents of emotion, the so-called “appraisal processes.” Bodily events were seen largely as byproducts of cognition, and as too unspecific to contribute to the variety of emotion experience. Cognition was conceptualized as an abstract, intellectual, “heady” process separate from bodily events. Although current emotion theory has moved beyond this disembodied stance by conceiving of emotions as involving both cognitive processes (perception, attention, and evaluation) and bodily events (arousal, behavior, and facial expressions), the legacy of cognitivism persists in the tendency to treat cognitive and bodily events as separate constituents of emotion. Thus the cognitive aspects of emotion are supposedly distinct and separate from the bodily ones. This separation indicates that cognitivism’s disembodied conception of cognition continues to shape the way emotion theorists conceptualize emotion
    Dascal, M. (1992). Why does language matter to artificial intelligence? Minds and Machines 2 (2):145-174.   (Cited by 7 | Google | More links)
    Abstract:   Artificial intelligence, conceived either as an attempt to provide models of human cognition or as the development of programs able to perform intelligent tasks, is primarily interested in the uses of language. It should be concerned, therefore, with pragmatics. But its concern with pragmatics should not be restricted to the narrow, traditional conception of pragmatics as the theory of communication (or of the social uses of language). In addition to that, AI should take into account also the mental uses of language (in reasoning, for example) and the existential dimensions of language as a determiner of the world we (and our computers) live in. In this paper, the relevance of these three branches of pragmatics (sociopragmatics, psychopragmatics, and ontopragmatics) for AI is explored
    Dietrich, Eric (1994). AI and the tyranny of Galen, or why evolutionary psychology and cognitive ethology are important to artificial intelligence. Journal of Experimental And Theoretical Artificial Intelligence 6 (4):325-330.   (Google | More links)
    Abstract: Concern over the nature of AI is, for the tastes of many AI scientists, probably overdone. In this they are like all other scientists. Working scientists worry about experiments, data, and theories, not foundational issues such as what their work is really about or whether their discipline is methodologically healthy. However, most scientists aren’t in a field that is approximately fifty years old. Even relatively new fields such as nonlinear dynamics or branches of biochemistry are in fact advances in older established sciences and are therefore much more settled. Of course, by stretching things, AI can be said to have a history reaching back to Charles Babbage, and possibly back beyond that to Leibnitz. However, all of that is best viewed as prelude. AI’s history is punctuated with the invention of the computer (and, if one wants to stretch our history back to the 1930s, the development of the notion of computation by Turing, Church, and others). Hence, AI really began (or began in earnest) sometime in the late 1940s or early 1950s (some mark the conference at Dartmouth in the summer of 1957 as the moment of our birth). And since those years we simply have not had time to settle into a routine science attacking reasonably well understood questions (for example, many of the questions some of us regard as supreme are regarded by others as inconsequential or mere excursions)
    Dreyfus, Hubert L. (1981). From micro-worlds to knowledge: AI at an impasse. In John Haugeland (ed.), Mind Design. MIT Press.   (Annotation | Google)
    Dreyfus, Hubert L. & Dreyfus, Stuart E. (1988). Making a mind versus modeling the brain: AI at a crossroads. Daedalus.   (Cited by 6 | Annotation | Google)
    Dreyfus, Hubert L. (2007). Why Heideggerian ai failed and how fixing it would require making it more Heideggerian. Philosophical Psychology 20 (2):247 – 268.   (Cited by 2 | Google | More links)
    Elster, Jon (1996). Rationality and the emotions. Economic Journal 106:1386-97.   (Cited by 63 | Google | More links)
    Abstract: In an earlier paper (Elster, 1989 a), I discussed the relation between rationality and social norms. Although I did mention the role of the emotions in sustaining social norms, I did not focus explicitly on the relation between rationality and the emotions. That relation is the main topic of the present paper, with social norms in a subsidiary part
    Flach, P. A. (ed.) (1991). Future Directions in Artificial Intelligence. New York: Elsevier Science.   (Cited by 2 | Google)
    Fulda, Joseph S. (2006). A Plea for Automated Language-to-Logical-Form Converters. RASK: Internationalt tidsskrift for sprog og kommunikation 24 (--):87-102.   (Google)
    Griffiths, Paul E. & Scarantino, Andrea (2005). Emotions in the Wild: The Situated Perspective on Emotion. In P. Robbins & Murat Aydede (eds.), The Cambridge Handbook of Situated Cognition. Cambridge University Press.   (Cited by 2 | Google | More links)
    Hadley, Robert F. (1991). The many uses of 'belief' in AI. Minds and Machines 1 (1):55-74.   (Cited by 2 | Annotation | Google | More links)
    Abstract:   Within AI and the cognitively related disciplines, there exist a multiplicity of uses of belief. On the face of it, these differing uses reflect differing views about the nature of an objective phenomenon called belief. In this paper I distinguish six distinct ways in which belief is used in AI. I shall argue that not all these uses reflect a difference of opinion about an objective feature of reality. Rather, in some cases, the differing uses reflect differing concerns with special AI applications. In other cases, however, genuine differences exist about the nature of what we pre-theoretically call belief. To an extent the multiplicity of opinions about, and uses of belief, echoes the discrepant motivations of AI researchers. The relevance of this discussion for cognitive scientists and philosophers arises from the fact that (a) many regard theoretical research within AI as a branch of cognitive science, and (b) even if theoretical AI is not cognitive science, trends within AI influence theories developed within cognitive science. It should be beneficial, therefore, to unravel the distinct uses and motivations surrounding belief, in order to discover which usages merely reflect differing pragmatic concerns, and which usages genuinely reflect divergent views about reality
    Haugeland, John (1979). Understanding natural language. Journal of Philosophy 76 (November):619-32.   (Cited by 12 | Annotation | Google | More links)
    Kirsh, David (1991). Foundations of AI: The big issues. Artificial Intelligence 47:3-30.   (Cited by 46 | Annotation | Google | More links)
    Abstract: The objective of research in the foundations of AI is to explore such basic questions as: What is a theory in AI? What are the most abstract assumptions underlying the competing visions of intelligence? What are the basic arguments for and against each assumption? In this essay I discuss five foundational issues: (1) Core AI is the study of conceptualization and should begin with knowledge level theories. (2) Cognition can be studied as a disembodied process without solving the symbol grounding problem. (3) Cognition is nicely described in propositional terms. (4) We can study cognition separately from learning. (5) There is a single architecture underlying virtually all cognition. I explain what each of these implies and present arguments from both outside and inside AI why each has been seen as right or wrong.
    Kobsa, Alfred (1987). What is explained by AI models. In Artificial Intelligence. St Martin's Press.   (Cited by 2 | Google)
    Labuschagne, Willem A. & Heidema, Johannes (2005). Natural and artificial cognition: On the proper place of reason. South African Journal of Philosophy 24 (2):137-149.   (Cited by 1 | Google | More links)
    Marr, David (1977). Artificial intelligence: A personal view. Artificial Intelligence 9 (September):37-48.   (Cited by 131 | Annotation | Google | More links)
    McDermott, Drew (1987). A critique of pure reason. Computational Intelligence 3:151-60.   (Cited by 141 | Annotation | Google | More links)
    McDermott, Drew (1981). Artificial intelligence meets natural stupidity. In John Haugeland (ed.), Mind Design. MIT Press.   (Cited by 99 | Annotation | Google)
    Nilsson, Nils (1991). Logic and artificial intelligence. Artificial Intelligence 47:31-56.   (Cited by 123 | Google | More links)
    Partridge, Derek & Wilks, Y. (eds.) (1990). The Foundations of Artificial Intelligence: A Sourcebook. Cambridge University Press.   (Cited by 19 | Annotation | Google | More links)
    Abstract: This outstanding collection is designed to address the fundamental issues and principles underlying the task of Artificial Intelligence.
    Petersen, Stephen (2004). Functions, creatures, learning, emotion. Hudlicka and Canamero.   (Cited by 2 | Google)
    Abstract: I propose a conceptual framework for emotions according to which they are best understood as the feedback mechanism a creature possesses in virtue of its function to learn. More specifically, emotions can be neatly modeled as a measure of harmony in a certain kind of constraint satisfaction problem. This measure can be used as error for weight adjustment (learning) in an unsupervised connectionist network.
    Preston, Beth (1993). Heidegger and artificial intelligence. Philosophy and Phenomenological Research 53 (1):43-69.   (Cited by 4 | Annotation | Google | More links)
    Pylyshyn, Zenon W. (1979). Complexity and the study of artificial and human intelligence. In Martin Ringle (ed.), Philosophical Perspectives in Artificial Intelligence. Humanities Press.   (Cited by 15 | Google)
    Ringle, Martin (ed.) (1979). Philosophical Perspectives in Artificial Intelligence. Humanities Press.   (Cited by 5 | Annotation | Google)
    Robinson, William S. (1991). Rationalism, expertise, and the dreyfuses' critique of AI research. Southern Journal of Philosophy 29:271-90.   (Annotation | Google)
    Shaffer, Michael J. (2009). Decision theory, intelligent planning and counterfactuals. Minds and Machines 19 (1):61-92.   (Google)
    Abstract: The ontology of decision theory has been subject to considerable debate in the past, and discussion of just how we ought to view decision problems has revealed more than one interesting problem, as well as suggested some novel modifications of classical decision theory. In this paper it will be argued that Bayesian, or evidential, decision-theoretic characterizations of decision situations fail to adequately account for knowledge concerning the causal connections between acts, states, and outcomes in decision situations, and so they are incomplete. Second, it will be argued that when we attempt to incorporate the knowledge of such causal connections into Bayesian decision theory, a substantial technical problem arises for which there is no currently available solution that does not suffer from some damning objection or other. From a broader perspective, this then throws into question the use of decision theory as a model of human or machine planning
    Sticklen, J. (1989). Problem-solving architectures at the knowledge level. Journal of Experimental and Theoretical Artificial Intelligence 1:233-247.   (Cited by 19 | Google | More links)
    Stone, Matthew, Agents in the real world.   (Google)
    Abstract: The mid-twentieth century saw the introduction of a new general model of processes, COMPUTATION, with the work of scientists such as Turing, Chomsky, Newell and Simon.1 This model so revolutionized the intellectual world that the dominant scientific programs of the day—spearheaded by such eminent scientists as Hilbert, Bloomfield and Skinner—are today remembered as much for the way computation exposed their stark limitations as for their positive contributions.2 Ever since, the field of Artificial Intelligence (AI) has defined itself as the subfield of computer science dedicated to the understanding of intelligent entities as computational processes. Now, drawing on fifty years of results of increasing breadth and applicability, we can also characterize AI research as a concrete practice: an ENGINEER-

    6.4h Robotics

    Beavers, Anthony F., Between angels and animals: The question of robot ethics, or is Kantian moral agency desirable?   (Google)
    Abstract: In this paper, I examine a variety of agents that appear in Kantian ethics in order to determine which would be necessary to make a robot a genuine moral agent. However, building such an agent would require that we structure into a robot’s behavioral repertoire the possibility for immoral behavior, for only then can the moral law, according to Kant, manifest itself as an ought, a prerequisite for being able to hold an agent morally accountable for its actions. Since building a moral robot requires the possibility of immoral behavior, I go on to argue that we cannot morally want robots to be genuine moral agents, but only beings that simulate moral behavior. Finally, I raise but do not answer the question that if morality requires us to want robots that are not genuine moral agents, why should we want something different in the case of human beings
    Breazeal, C. & Brooks, Rodney (2004). Robot emotions: A functional perspective. In J. Fellous (ed.), Who Needs Emotions. Oxford University Press.   (Google)
    Brooks, Rodney A. & Stein, Lynn Andrea (1994). Building brains for bodies. Autonomous Robotics 1 (1):7-25.   (Cited by 281 | Google | More links)
    Abstract: We describe a project to capitalize on newly available levels of computational resources in order to understand human cognition. We are building an integrated physical system including vision, sound input and output, and dextrous manipulation, all controlled by a continuously operating large scale parallel MIMD computer. The resulting system will learn to "think" by building on its bodily experiences to accomplish progressively more abstract tasks. Past experience suggests that in attempting to build such an integrated system we will have to fundamentally change the way artificial intelligence, cognitive science, linguistics, and philosophy think about the organization of intelligence. We expect to be able to better reconcile the theories that will be developed with current work in neuroscience
    Brooks, Rodney (1991). Challenges for Complete Creature Architectures. In Jean-Arcady Meyer & Stewart W. Wilson (eds.), From Animals to Animats: Proceedings of The First International Conference on Simulation of Adaptive Behavior (Complex Adaptive Systems). MIT Press.   (Cited by 71 | Google | More links)
    Abstract: boundaries. It is impossible to do good science without having an appreciation for the problems and concepts in the other levels of abstraction (at least in the direction from biology towards physics), but there are whole sets of tools, methods of analysis, theories and explanations within each discipline which do not cross those boundaries
    Brooks, Rodney A.; Breazeal, Cynthia; Marjanovic, Matthew; Scassellati, Brian & Williamson, Matthew (1999). The cog project: Building a humanoid robot. Lecture Notes in Computer Science 1562:52-87.   (Cited by 302 | Google | More links)
    Abstract: To explore issues of developmental structure, physical embodiment, integration of multiple sensory and motor systems, and social interaction, we have constructed an upper-torso humanoid robot called Cog. The robot has twenty-one degrees of freedom and a variety of sensory systems, including visual, auditory, vestibular, kinesthetic, and tactile senses. This chapter gives a background on the methodology that we have used in our investigations, highlights the research issues that have been raised during this project, and provides a summary of both the current state of the project and our long-term goals. We report on a variety of implemented visual-motor routines (smooth-pursuit tracking, saccades, binocular vergence, and vestibular-ocular and opto-kinetic reflexes), orientation behaviors, motor control techniques, and social behaviors (pointing to a visual target, recognizing joint attention through face and eye finding, imitation of head nods, and regulating interaction through expressive feedback). We further outline a number of areas for future research that will be necessary to build a complete embodied system.
    Bryson, Joanna J. (2006). The attentional spotlight (dennett and the cog project). Minds and Machines 16 (1):21-28.   (Google | More links)
    Cardon, Alain (2006). Artificial consciousness, artificial emotions, and autonomous robots. Cognitive Processing 7 (4):245-267.   (Google | More links)
    Chella, Antonio (2007). Towards robot conscious perception. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
    Clancey, William (1995). How situated cognition is different from situated robotics. In Luc Steels & Rodney Brooks (eds.), The "Artificial Life" Route to "Artificial Intelligence": Building Situated Embodied Agents. Hillsdale, NJ: Lawrence Erlbaum Associates.   (Google)
    Clark, Andy & Grush, Rick (1999). Towards a cognitive robotics. Adaptive Behavior 7 (1):5-16.   (Cited by 73 | Google | More links)
    Abstract: There is a definite challenge in the air regarding the pivotal notion of internal representation. This challenge is explicit in, e.g., van Gelder, 1995; Beer, 1995; Thelen & Smith, 1994; Wheeler, 1994; and elsewhere. We think it is a challenge that can be met and that (importantly) can be met by arguing from within a general framework that accepts many of the basic premises of the work (in new robotics and in dynamical systems theory) that motivates such scepticism in the first place. Our strategy will be as follows. We begin (Section 1) by offering an account (an example and something close to a definition) of what we shall term Minimal Robust Representationalism (MRR). Sections 2 & 3 address some likely worries and questions about this notion. We end (Section 4) by making explicit the conditions under which, on our account, a science (e.g., robotics) may claim to be addressing cognitive phenomena.
    Dautenhahn, Kerstin; Ogden, Bernard; Quick, Tom & Ziemke, Tom (2002). From embodied to socially embedded agents: Implications for interaction-aware robots. Cognitive Systems Research 3 (1):397-427.   (Cited by 50 | Google | More links)
    Dennett, Daniel C. (ms). Cog as a thought experiment.   (Cited by 3 | Google | More links)
    Abstract: In her presentation at the Monte Verità workshop, Maja Mataric showed us a videotape of her robots cruising together through the lab, and remarked, aptly: "They're flocking, but that's not what they think they're doing." This is a vivid instance of a phenomenon that lies at the heart of all the research I learned about at Monte Verità: the execution of surprisingly successful "cognitive" behaviors by systems that did not explicitly represent, and did not need to explicitly represent, what they were doing. How "high" in the intuitive scale of cognitive sophistication can such unwitting prowess reach? All the way, apparently, since I want to echo Maja's observation with one of my own: "These roboticists are doing philosophy, but that's not what they think they're doing." It is possible, then, even to do philosophy--that most intellectual of activities--without realizing that that is what you are doing. It is even possible to do it well, for this is a good, new way of addressing antique philosophical puzzles.
    Dennett, Daniel C. (1995). Cog: Steps toward consciousness in robots. In Thomas Metzinger (ed.), Conscious Experience. Ferdinand Schoningh.   (Cited by 3 | Google)
    Elton, Matthew (1997). Robots and rights: The ethical demands of artificial agents. Ends and Means 1 (2).   (Cited by 4 | Google)
    Gips, James (1994). Toward the ethical robot. In Kenneth M. Ford, C. Glymour & Patrick Hayes (eds.), Android Epistemology. MIT Press.   (Cited by 19 | Google | More links)
    Hesslow, Germund & Jirenhed, D-A. (2007). The inner world of a simple robot. Journal of Consciousness Studies 14 (7):85-96.   (Google | More links)
    Abstract: The purpose of the paper is to discuss whether a particular robot can be said to have an 'inner world', something that can be taken to be a critical feature of consciousness. It has previously been argued that the mechanism underlying the appearance of an inner world in humans is an ability of our brains to simulate behaviour and perception. A robot has previously been designed in which perception can be simulated. A prima facie case can be made that this robot has an inner world in the same sense as humans. Various objections to this claim are discussed in the paper and it is concluded that the robot, although extremely simple, can easily be improved without adding any new principles, so that ascribing an inner world to it becomes intuitively reasonable.
    Holland, Owen & Goodman, Russell B. (2003). Robots with internal models: A route to machine consciousness? Journal of Consciousness Studies 10 (4):77-109.   (Cited by 20 | Google | More links)
    Holland, Owen; Knight, Rob & Newcombe, Richard (2007). The role of the self process in embodied machine consciousness. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
    Ishiguro, Hiroshi (2006). Android science: Conscious and subconscious recognition. Connection Science 18 (4):319-332.   (Cited by 14 | Google | More links)
    Kitamura, T.; Tahara, T. & Asami, K. (2000). How can a robot have consciousness? Advanced Robotics 14:263-275.   (Cited by 6 | Google | More links)
    Kitamura, T. (2002). What is the self of a robot? On a consciousness architecture for a mobile robot as a model of human consciousness. In Kunio Yasue, Marj Jibu & Tarcisio Della Senta (eds.), No Matter, Never Mind. John Benjamins.   (Google)
    Korienek, Gene & Uzgalis, William L. (2002). Adaptable robots. Metaphilosophy 33 (1-2):83-97.   (Cited by 1 | Google)
    Lacey, Nicola & Lee, M. (2003). The epistemological foundations of artificial agents. Minds and Machines 13 (3):339-365.   (Cited by 1 | Google | More links)
    Abstract: A situated agent is one which operates within an environment. In most cases, the environment in which the agent exists will be more complex than the agent itself. This means that an agent, human or artificial, which wishes to carry out non-trivial operations in its environment must use techniques which allow an unbounded world to be represented within a cognitively bounded agent. We present a brief description of some important theories within the fields of epistemology and metaphysics. We then discuss ways in which philosophical problems of scepticism are related to the problems faced by knowledge representation. We suggest that some of the methods that philosophers have developed to address the problems of epistemology may be relevant to the problems of representing knowledge within artificial agents.
    Menant, Christophe (2005). Information and meaning in life, humans and robots (2005). Proceedings of FIS2005 by MDPI, Basel, Switzerland.   (Google | More links)
    Abstract: Information and meaning exist around us and within ourselves, and the same information can correspond to different meanings. This is true for humans and animals, and is becoming true for robots. We propose here an overview of this subject by using a systemic tool related to meaning generation that has already been published (C. Menant, Entropy 2003). The Meaning Generator System (MGS) is a system submitted to a constraint that generates meaningful information when it receives incident information that has a relation with the constraint. The content of the meaningful information is made explicit, and its function is to trigger an action that will be used to satisfy the constraint of the system. The MGS has been introduced in the case of basic life submitted to a "stay alive" constraint. We propose here to see how the usage of the MGS can be extended to more complex living systems, to humans, and to robots by introducing new types of constraints and integrating the MGS into higher-level systems. The application of the MGS to humans is partly based on a scenario relative to the evolution of body self-awareness toward self-consciousness that has already been presented (C. Menant, Biosemiotics 2003, and TSC 2004). The application of the MGS to robots is based on the definition of the MGS applied to robot functionality, taking into account the origins of the constraints. We conclude with a summary of this overview and with themes that can be linked to this systemic approach to meaning generation.
    Minsky, Marvin L. (1994). Will robots inherit the earth? Scientific American (Oct).   (Cited by 37 | Google | More links)
    Abstract: Everyone wants wisdom and wealth. Nevertheless, our health often gives out before we achieve them. To lengthen our lives, and improve our minds, in the future we will need to change our bodies and brains. To that end, we first must consider how normal Darwinian evolution brought us to where we are. Then we must imagine ways in which future replacements for worn body parts might solve most problems of failing health. We must then invent strategies to augment our brains and gain greater wisdom. Eventually we will entirely replace our brains -- using nanotechnology. Once delivered from the limitations of biology, we will be able to decide the length of our lives--with the option of immortality--and choose among other, unimagined capabilities as well.
    Moravec, Hans (online). Bodies, robots, minds.   (Google)
    Abstract: Serious attempts to build thinking machines began after the second world war. One line of research, called Cybernetics, used electronic circuitry imitating nervous systems to make machines that learned to recognize simple patterns, and turtle-like robots that found their way to recharging plugs. A different approach, named Artificial Intelligence, harnessed the arithmetic power of post-war computers to abstract reasoning, and by the 1960s made computers prove theorems in logic and geometry, solve calculus problems and play good games of checkers. At the end of the 1960s, research groups at MIT and Stanford attached television cameras and robot arms to their computers, so "thinking" programs could begin to collect information directly from the real world.
    Moravec, Hans (online). Robotics. Encyclopaedia Britannica Online.   (Google)
    Abstract: the development of machines with motor, perceptual and cognitive skills once found only in animals and humans. The field parallels and has adopted developments from several areas, among them mechanization, automation and artificial intelligence, but adds its own gripping myth, of complete artificial mechanical human beings. Ancient images and figurines depicting animals and humans can be interpreted as steps towards this vision, as can mechanical automata from classical times on. The pace accelerated rapidly in the twentieth century with the development of electronic sensing and amplification that permitted automata to sense and react as well as merely perform. By the late twentieth century automata controlled by computers could also think and remember.
    Moravec, Hans (online). Robots inherit human minds.   (Google)
    Abstract: Our first tools, sticks and stones, were very different from ourselves. But many tools now resemble us, in function or form, and they are beginning to have minds. A loose parallel with our own evolution suggests how they may develop in future. Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today's best computer-controlled robots are like the simpler invertebrates. A thousand-fold increase in computer power in this decade should make possible machines with reptile-like sensory and motor competence. Growing computer power over the next half century will allow robots that learn like mammals, model their world like primates and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended inherited limitations and transformed itself into something quite new. No longer limited by the slow pace of human learning and even slower biological evolution, intelligent machinery will conduct its affairs on an ever faster, ever smaller scale, until coarse physical nature has been converted to fine-grained purposeful thought.
    Moravec, Hans (1994). The age of robots. In Max More (ed.), Extro 1, Proceedings of the First Extropy Institute Conference on TransHumanist Thought. Extropy Institute.   (Cited by 2 | Google | More links)
    Abstract: Our artifacts are getting smarter, and a loose parallel with the evolution of animal intelligence suggests one future course for them. Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today's best computer-controlled robots are like the simpler invertebrates. A thousand-fold increase in computer power in this decade should make possible machines with reptile-like sensory and motor competence. Properly configured, such robots could do in the physical world what personal computers now do in the world of data--act on our behalf as literal-minded slaves. Growing computer power over the next half-century will allow this reptile stage to be surpassed, in stages producing robots that learn like mammals, model their world like primates and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended inherited limitations and transformed itself into something quite new. No longer limited by the slow pace of human learning and even slower biological evolution, intelligent machinery will conduct its affairs on an ever faster, ever smaller scale, until coarse physical nature has been converted to fine-grained purposeful thought.
    Parisi, Domenico (2007). Mental robotics. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
    Petersen, Stephen (2007). The ethics of robot servitude. Journal of Experimental and Theoretical Artificial Intelligence 19 (1):43-54.   (Google | More links)
    Abstract: Assume we could someday create artificial creatures with intelligence comparable to our own. Could it be ethical to use them as unpaid labor? There is very little philosophical literature on this topic, but the consensus so far has been that such robot servitude would merely be a new form of slavery. Against this consensus I defend the permissibility of robot servitude, and in particular the controversial case of designing robots so that they want to serve (more or less particular) human ends. A typical objection to this case draws an analogy to the genetic engineering of humans: if designing eager robot servants is permissible, it should also be permissible to design eager human servants. Few ethical views can easily explain even the wrongness of such human engineering, however, and those few explanations that are available break the analogy with engineering robots. The case turns out to be illustrative of profound problems in the field of population ethics.
    Schmidt, C. T. A. & Kraemer, F. (2006). Robots, Dennett and the autonomous: A terminological investigation. Minds and Machines 16 (1):73-80.   (Cited by 5 | Google | More links)
    Abstract: In the present enterprise we take a look at the meaning of Autonomy, how the word has been employed and some of the consequences of its use in the sciences of the artificial. Could and should robots really be autonomous entities? Over and beyond this, we use concepts from the philosophy of mind to spur on enquiry into the very essence of human autonomy. We believe our initiative, like Dennett's life-long research, sheds light upon the problems of robot design with respect to their relation with humans.
    Torrance, Steve (1994). The mentality of robots, II. Proceedings of the Aristotelian Society 68 (68):229-262.   (Google)
    Young, R. A. (1994). The mentality of robots, I. Proceedings of the Aristotelian Society 68 (68):199-227.   (Google)
    Ziemke, Tom (2007). What's life got to do with it? In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)