Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.

6.4g. AI Methodology

Bickhard, Mark H. (2000). Motivation and Emotion: An Interactive Process Model. In Ralph D. Ellis & Natika Newton (eds.), The Caldron of Consciousness: Motivation, Affect and Self-Organization. John Benjamins.   (Cited by 19 | Google | More links)
Abstract: In this chapter, I outline dynamic models of motivation and emotion. These turn out not to be autonomous subsystems, but, instead, are deeply integrated in the basic interactive dynamic character of living systems. Motivation is a crucial aspect of particular kinds of interactive systems -- systems for which representation is a sister aspect. Emotion is a special kind of partially reflective interaction process, and yields its own emergent motivational aspects. In addition, the overall model accounts for some of the crucial properties of consciousness.
Birnbaum, L. (1991). Rigor mortis: A response to Nilsson's 'logic and artificial intelligence'. Artificial Intelligence 47:57-78.   (Cited by 106 | Google | More links)
Chalmers, David J.; French, Robert M. & Hofstadter, Douglas R. (1992). High-level perception, representation, and analogy: A critique of artificial intelligence methodology. Journal of Experimental and Theoretical Artificial Intelligence 4 (3):185-211.   (Cited by 123 | Annotation | Google | More links)
Abstract: High-level perception -- the process of making sense of complex data at an abstract, conceptual level -- is fundamental to human cognition. Through high-level perception, chaotic environmental stimuli are organized into the mental representations that are used throughout cognitive processing. Much work in traditional artificial intelligence has ignored the process of high-level perception, by starting with hand-coded representations. In this paper, we argue that this dismissal of perceptual processes leads to distorted models of human cognition. We examine some existing artificial-intelligence models -- notably BACON, a model of scientific discovery, and the Structure-Mapping Engine, a model of analogical thought -- and argue that these are flawed precisely because they downplay the role of high-level perception. Further, we argue that perceptual processes cannot be separated from other cognitive processes even in principle, and therefore that traditional artificial-intelligence models cannot be defended by supposing the existence of a "representation module" that supplies representations ready-made. Finally, we describe a model of high-level perception and analogical thought in which perceptual processing is integrated with analogical mapping, leading to the flexible build-up of representations appropriate to a given context.
Clark, Andy (1986). A biological metaphor. Mind and Language 1:45-64.   (Cited by 6 | Annotation | Google | More links)
Clark, Andy (1987). The kludge in the machine. Mind and Language 2:277-300.   (Cited by 12 | Google | More links)
Colombetti, Giovanna (2007). Enactive appraisal. Phenomenology and the Cognitive Sciences.   (Cited by 4 | Google | More links)
Abstract: Emotion theorists tend to separate “arousal” and other bodily events such as “actions” from the evaluative component of emotion known as “appraisal.” This separation, I argue, implies phenomenologically implausible accounts of emotion elicitation and personhood. As an alternative, I attempt a reconceptualization of the notion of appraisal within the so-called “enactive approach.” I argue that appraisal is constituted by arousal and action, and I show how this view relates to an embodied and affective notion of personhood.
Colombetti, Giovanna & Thompson, Evan (forthcoming). The feeling body: Towards an enactive approach to emotion. In W. F. Overton, U. Mueller & J. Newman (eds.), Body in Mind, Mind in Body: Developmental Perspectives on Embodiment and Consciousness. Erlbaum.   (Cited by 3 | Google | More links)
Abstract: For many years emotion theory has been characterized by a dichotomy between the head and the body. In the golden years of cognitivism, during the nineteen-sixties and seventies, emotion theory focused on the cognitive antecedents of emotion, the so-called “appraisal processes.” Bodily events were seen largely as byproducts of cognition, and as too unspecific to contribute to the variety of emotion experience. Cognition was conceptualized as an abstract, intellectual, “heady” process separate from bodily events. Although current emotion theory has moved beyond this disembodied stance by conceiving of emotions as involving both cognitive processes (perception, attention, and evaluation) and bodily events (arousal, behavior, and facial expressions), the legacy of cognitivism persists in the tendency to treat cognitive and bodily events as separate constituents of emotion. Thus the cognitive aspects of emotion are supposedly distinct and separate from the bodily ones. This separation indicates that cognitivism’s disembodied conception of cognition continues to shape the way emotion theorists conceptualize emotion.
Dascal, M. (1992). Why does language matter to artificial intelligence? Minds and Machines 2 (2):145-174.   (Cited by 7 | Google | More links)
Abstract: Artificial intelligence, conceived either as an attempt to provide models of human cognition or as the development of programs able to perform intelligent tasks, is primarily interested in the uses of language. It should be concerned, therefore, with pragmatics. But its concern with pragmatics should not be restricted to the narrow, traditional conception of pragmatics as the theory of communication (or of the social uses of language). In addition to that, AI should take into account also the mental uses of language (in reasoning, for example) and the existential dimensions of language as a determiner of the world we (and our computers) live in. In this paper, the relevance of these three branches of pragmatics -- sociopragmatics, psychopragmatics, and ontopragmatics -- for AI is explored.
Dietrich, Eric (1994). AI and the tyranny of Galen, or why evolutionary psychology and cognitive ethology are important to artificial intelligence. Journal of Experimental and Theoretical Artificial Intelligence 6 (4):325-330.   (Google | More links)
Abstract: Concern over the nature of AI is, for the tastes of many AI scientists, probably overdone. In this they are like all other scientists. Working scientists worry about experiments, data, and theories, not foundational issues such as what their work is really about or whether their discipline is methodologically healthy. However, most scientists aren't in a field that is approximately fifty years old. Even relatively new fields such as nonlinear dynamics or branches of biochemistry are in fact advances in older established sciences and are therefore much more settled. Of course, by stretching things, AI can be said to have a history reaching back to Charles Babbage, and possibly back beyond that to Leibniz. However, all of that is best viewed as prelude. AI's history is punctuated with the invention of the computer (and, if one wants to stretch our history back to the 1930s, the development of the notion of computation by Turing, Church, and others). Hence, AI really began (or began in earnest) sometime in the late 1940s or early 1950s (some mark the conference at Dartmouth in the summer of 1956 as the moment of our birth). And since those years we simply have not had time to settle into a routine science attacking reasonably well understood questions (for example, many of the questions some of us regard as supreme are regarded by others as inconsequential or mere excursions).
Dreyfus, Hubert L. (1981). From micro-worlds to knowledge: AI at an impasse. In J. Haugeland (ed.), Mind Design. MIT Press.   (Annotation | Google)
Dreyfus, Hubert L. & Dreyfus, Stuart E. (1988). Making a mind versus modeling the brain: AI at a crossroads. Daedalus.   (Cited by 6 | Annotation | Google)
Dreyfus, Hubert L. (2007). Why Heideggerian AI failed and how fixing it would require making it more Heideggerian. Philosophical Psychology 20 (2):247-268.   (Cited by 2 | Google | More links)
Elster, Jon (1996). Rationality and the emotions. Economic Journal 106:1386-97.   (Cited by 63 | Google | More links)
Abstract: In an earlier paper (Elster, 1989a), I discussed the relation between rationality and social norms. Although I did mention the role of the emotions in sustaining social norms, I did not focus explicitly on the relation between rationality and the emotions. That relation is the main topic of the present paper, with social norms playing a subsidiary part.
Flach, P. A. (ed.) (1991). Future Directions in Artificial Intelligence. New York: Elsevier Science.   (Cited by 2 | Google)
Fulda, Joseph S. (2006). A plea for automated language-to-logical-form converters. RASK: Internationalt tidsskrift for sprog og kommunikation 24 (--):87-102.   (Google)
Griffiths, Paul E. & Scarantino, Andrea (2005). Emotions in the Wild: The Situated Perspective on Emotion. In P. Robbins & Murat Aydede (eds.), The Cambridge Handbook of Situated Cognition. Cambridge University Press.   (Cited by 2 | Google | More links)
Hadley, Robert F. (1991). The many uses of 'belief' in AI. Minds and Machines 1 (1):55-74.   (Cited by 2 | Annotation | Google | More links)
Abstract: Within AI and the cognitively related disciplines, there exist a multiplicity of uses of belief. On the face of it, these differing uses reflect differing views about the nature of an objective phenomenon called belief. In this paper I distinguish six distinct ways in which belief is used in AI. I shall argue that not all these uses reflect a difference of opinion about an objective feature of reality. Rather, in some cases, the differing uses reflect differing concerns with special AI applications. In other cases, however, genuine differences exist about the nature of what we pre-theoretically call belief. To an extent the multiplicity of opinions about, and uses of, belief echoes the discrepant motivations of AI researchers. The relevance of this discussion for cognitive scientists and philosophers arises from the fact that (a) many regard theoretical research within AI as a branch of cognitive science, and (b) even if theoretical AI is not cognitive science, trends within AI influence theories developed within cognitive science. It should be beneficial, therefore, to unravel the distinct uses and motivations surrounding belief, in order to discover which usages merely reflect differing pragmatic concerns, and which usages genuinely reflect divergent views about reality.
Haugeland, John (1979). Understanding natural language. Journal of Philosophy 76 (November):619-32.   (Cited by 12 | Annotation | Google | More links)
Kirsh, David (1991). Foundations of AI: The big issues. Artificial Intelligence 47:3-30.   (Cited by 46 | Annotation | Google | More links)
Abstract: The objective of research in the foundations of AI is to explore such basic questions as: What is a theory in AI? What are the most abstract assumptions underlying the competing visions of intelligence? What are the basic arguments for and against each assumption? In this essay I discuss five foundational issues: (1) Core AI is the study of conceptualization and should begin with knowledge level theories. (2) Cognition can be studied as a disembodied process without solving the symbol grounding problem. (3) Cognition is nicely described in propositional terms. (4) We can study cognition separately from learning. (5) There is a single architecture underlying virtually all cognition. I explain what each of these implies and present arguments from both outside and inside AI why each has been seen as right or wrong.
Kobsa, Alfred (1987). What is explained by AI models. In Artificial Intelligence. St Martin's Press.   (Cited by 2 | Google)
Labuschagne, Willem A. & Heidema, Johannes (2005). Natural and artificial cognition: On the proper place of reason. South African Journal of Philosophy 24 (2):137-149.   (Cited by 1 | Google | More links)
Marr, David (1977). Artificial intelligence: A personal view. Artificial Intelligence 9 (September):37-48.   (Cited by 131 | Annotation | Google | More links)
McDermott, Drew (1987). A critique of pure reason. Computational Intelligence 3:151-60.   (Cited by 141 | Annotation | Google | More links)
McDermott, Drew (1981). Artificial intelligence meets natural stupidity. In J. Haugeland (ed.), Mind Design. MIT Press.   (Cited by 99 | Annotation | Google)
Nilsson, Nils (1991). Logic and artificial intelligence. Artificial Intelligence 47:31-56.   (Cited by 123 | Google | More links)
Partridge, Derek & Wilks, Y. (eds.) (1990). The Foundations of Artificial Intelligence: A Sourcebook. Cambridge University Press.   (Cited by 19 | Annotation | Google | More links)
Abstract: This outstanding collection is designed to address the fundamental issues and principles underlying the task of Artificial Intelligence.
Petersen, Stephen (2004). Functions, creatures, learning, emotion. In Hudlicka & Cañamero (eds.).   (Cited by 2 | Google)
Abstract: I propose a conceptual framework for emotions according to which they are best understood as the feedback mechanism a creature possesses in virtue of its function to learn. More specifically, emotions can be neatly modeled as a measure of harmony in a certain kind of constraint satisfaction problem. This measure can be used as error for weight adjustment (learning) in an unsupervised connectionist network.
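The mechanism this abstract describes -- a harmony measure over a constraint-satisfaction network doubling as a learning signal -- can be made concrete in a few lines. The sketch below is illustrative only, not Petersen's implementation: it assumes Hopfield-style +/-1 units, a Smolensky-style harmony measure H(a) = (1/2) a'Wa, and a Hebbian weight step scaled by how far settled harmony departs from a running baseline; all names and constants are invented for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    n = 8                                   # number of units
    W = rng.normal(scale=0.1, size=(n, n))  # symmetric constraint weights
    W = (W + W.T) / 2.0
    np.fill_diagonal(W, 0.0)

    def harmony(a, W):
        # Smolensky-style harmony: high when co-active units are linked by positive weights.
        return 0.5 * a @ W @ a

    def settle(a, W, steps=80):
        # Asynchronous +/-1 updates that greedily increase harmony (constraint satisfaction).
        a = a.copy()
        for _ in range(steps):
            i = rng.integers(len(a))
            a[i] = 1.0 if W[i] @ a >= 0 else -1.0
        return a

    baseline = 0.0  # slow-moving estimate of "typical" harmony (the comparison level)
    for episode in range(200):
        a0 = np.where(rng.random(n) < 0.5, 1.0, -1.0)  # a random initial situation
        a = settle(a0, W)
        h = harmony(a, W)
        error = h - baseline                 # signed signal: better or worse than usual
        W += 0.01 * error * np.outer(a, a)   # Hebbian step: consolidate on positive surprise, weaken on negative
        np.fill_diagonal(W, 0.0)
        baseline += 0.1 * (h - baseline)     # update the comparison level

On this toy reading, the signed gap between achieved and expected harmony is the "emotional" feedback driving unsupervised weight adjustment.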
Preston, Beth (1993). Heidegger and artificial intelligence. Philosophy and Phenomenological Research 53 (1):43-69.   (Cited by 4 | Annotation | Google | More links)
Pylyshyn, Zenon W. (1979). Complexity and the study of artificial and human intelligence. In Martin Ringle (ed.), Philosophical Perspectives in Artificial Intelligence. Humanities Press.   (Cited by 15 | Google)
Ringle, Martin (ed.) (1979). Philosophical Perspectives in Artificial Intelligence. Humanities Press.   (Cited by 5 | Annotation | Google)
Robinson, William S. (1991). Rationalism, expertise, and the Dreyfuses' critique of AI research. Southern Journal of Philosophy 29:271-90.   (Annotation | Google)
Shaffer, Michael J. (2009). Decision theory, intelligent planning and counterfactuals. Minds and Machines 19 (1):61-92.   (Google)
Abstract: The ontology of decision theory has been subject to considerable debate in the past, and discussion of just how we ought to view decision problems has revealed more than one interesting problem, as well as suggested some novel modifications of classical decision theory. In this paper it will be argued that Bayesian, or evidential, decision-theoretic characterizations of decision situations fail to adequately account for knowledge concerning the causal connections between acts, states, and outcomes in decision situations, and so they are incomplete. Second, it will be argued that when we attempt to incorporate the knowledge of such causal connections into Bayesian decision theory, a substantial technical problem arises for which there is no currently available solution that does not suffer from some damning objection or other. From a broader perspective, this then throws into question the use of decision theory as a model of human or machine planning.
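The contrast driving this argument can be put in standard symbols (a textbook gloss, not necessarily Shaffer's own notation). Evidential decision theory values an act $A$ by conditioning on it, while causal decision theory substitutes a counterfactual dependence:

    V(A) = \sum_{s} P(s \mid A) \, U(A, s)                  \quad \text{(evidential)}

    U_c(A) = \sum_{s} P(A \,\Box\!\rightarrow\, s) \, U(A, s) \quad \text{(causal)}

where $A \,\Box\!\rightarrow\, s$ abbreviates "if act $A$ were performed, state $s$ would obtain". On this reading, the technical problem the abstract mentions is that of supplying well-behaved probabilities for these counterfactuals.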
Sticklen, J. (1989). Problem-solving architectures at the knowledge level. Journal of Experimental and Theoretical Artificial Intelligence 1:233-247.   (Cited by 19 | Google | More links)
Stone, Matthew, Agents in the real world.   (Google)
Abstract: The mid-twentieth century saw the introduction of a new general model of processes, COMPUTATION, with the work of scientists such as Turing, Chomsky, Newell and Simon. This model so revolutionized the intellectual world that the dominant scientific programs of the day -- spearheaded by such eminent scientists as Hilbert, Bloomfield and Skinner -- are today remembered as much for the way computation exposed their stark limitations as for their positive contributions. Ever since, the field of Artificial Intelligence (AI) has defined itself as the subfield of computer science dedicated to the understanding of intelligent entities as computational processes. Now, drawing on fifty years of results of increasing breadth and applicability, we can also characterize AI research as a concrete practice: an ENGINEER-