MindPapers is now part of PhilPapers: online research in philosophy, a new service with many more features.
 Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University. Submit an entry.

7.1j. Computationalism in Cognitive Science (Computationalism in Cognitive Science on PhilPapers)

See also:
Agre, Philip E. (2002). The practical logic of computer work. In Matthias Scheutz (ed.), Computationalism: New Directions. MIT Press.   (Cited by 1 | Google | More links)
Antony, Louise M. (1997). Feeling fine about the mind. Philosophy and Phenomenological Research 57 (2):381-87.   (Cited by 1 | Google | More links)
Balog, Katalin (2009). Jerry Fodor on Non-conceptual Content. Synthese 167 (3).   (Google | More links)
Abstract: Proponents of non-conceptual content have recruited it for various philosophical jobs. Some epistemologists have suggested that it may play the role of “the given” that Sellars is supposed to have exorcised from philosophy. Some philosophers of mind (e.g., Dretske) have suggested that it plays an important role in the project of naturalizing semantics as a kind of halfway house between merely information bearing and possessing conceptual content. Here I will focus on a recent proposal by Jerry Fodor. In a recent paper he characterizes non-conceptual content in a particular way and argues that it is plausible that it plays an explanatory role in accounting for certain auditory and visual phenomena. So he thinks that there is reason to believe that there is non-conceptual content. On the other hand, Fodor thinks that non-conceptual content has a limited role. It occurs only in the very early stages of perceptual processing, prior to conscious awareness. My paper examines Fodor’s characterization of non-conceptual content and his claims for its explanatory importance. I also discuss whether Fodor has made a case for limiting non-conceptual content to non-conscious, sub-personal mental states.
Bickhard, Mark H. (1996). Troubles with computationalism. In W. O'Donahue & Richard F. Kitchener (eds.), The Philosophy of Psychology. Sage Publications.   (Cited by 19 | Google)
Block, Ned (1990). The computer model of mind. In Daniel N. Osherson & Edward E. Smith (eds.), An Invitation to Cognitive Science. MIT Press.   (Cited by 1 | Annotation | Google)
Block, Ned (1995). The mind as the software of the brain. In Daniel N. Osherson, Lila Gleitman, Stephen M. Kosslyn, S. Smith & Saadya Sternberg (eds.), An Invitation to Cognitive Science. MIT Press.   (Cited by 57 | Google)
Abstract: In this section, we will start with an influential attempt to define `intelligence', and then we will move to a consideration of how human intelligence is to be investigated on the machine model. The last part of the section will discuss the relation between the mental and the biological
Boden, Margaret A. (1988). Computer Models of Mind: Computational Approaches in Theoretical Psychology. Cambridge University Press.   (Cited by 64 | Google | More links)
Abstract: What is the mind? How does it work? How does it influence behavior? Some psychologists hope to answer such questions in terms of concepts drawn from computer science and artificial intelligence. They test their theories by modeling mental processes in computers. This book shows how computer models are used to study many psychological phenomena--including vision, language, reasoning, and learning. It also shows that computer modeling involves differing theoretical approaches. Computational psychologists disagree about some basic questions. For instance, should the mind be modeled by digital computers, or by parallel-processing systems more like brains? Do computer programs consist of meaningless patterns, or do they embody (and explain) genuine meaning?
Boden, Margaret A. (1981). Minds And Mechanisms: Philosophical Psychology And Computational Models. Ithaca: Cornell University Press.   (Cited by 17 | Google)
Boden, Margaret A. (1979). The computational metaphor in psychology. In Philosophical Problems In Psychology. London: Methuen.   (Cited by 4 | Google)
Boden, Margaret A. (1984). What is computational psychology? Proceedings of the Aristotelian Society 58:17-35.   (Cited by 4 | Google)
Boden, Margaret A. (1984). What is computational psychology, part I. Proceedings of the Aristotelian Society 17:17-36.   (Google)
Bringsjord, Selmer (1994). Computation, among other things, is beneath us. Minds and Machines 4 (4):469-88.   (Cited by 13 | Google | More links)
Abstract:   What's computation? The received answer is that computation is a computer at work, and a computer at work is that which can be modelled as a Turing machine at work. Unfortunately, as John Searle has recently argued, and as others have agreed, the received answer appears to imply that AI and Cog Sci are a royal waste of time. The argument here is alarmingly simple: AI and Cog Sci (of the Strong sort, anyway) are committed to the view that cognition is computation (or brains are computers); but all processes are computations (or all physical things are computers); so AI and Cog Sci are positively silly. I refute this argument herein, in part by defining the locutions x is a computer and c is a computation in a way that blocks Searle's argument but exploits the hard-to-deny link between What's Computation? and the theory of computation. However, I also provide, at the end of this essay, an argument which, it seems to me, implies not that AI and Cog Sci are silly, but that they're based on a form of computation that is well beneath human persons
Bringsjord, Selmer (1998). Cognition is not computation: The argument from irreversibility. Synthese 113 (2):285-320.   (Cited by 11 | Google | More links)
Abstract:   The dominant scientific and philosophical view of the mind – according to which, put starkly, cognition is computation – is refuted herein, via specification and defense of the following new argument: Computation is reversible; cognition isn't; ergo, cognition isn't computation. After presenting a sustained dialectic arising from this defense, we conclude with a brief preview of the view we would put in place of the cognition-is-computation doctrine
Bringsjord, Selmer (2000). Clarifying the logic of anti-computationalism: Reply to Hauser. Minds and Machines 10 (1):111-113.   (Cited by 1 | Google | More links)
Bringsjord, Selmer (2001). In computation, parallel is nothing, physical everything. Minds and Machines 11 (1):95-99.   (Cited by 7 | Google | More links)
Abstract:   Andrew Boucher (1997) argues that "parallel computation is fundamentally different from sequential computation" (p. 543), and that this fact provides reason to be skeptical about whether AI can produce a genuinely intelligent machine. But parallelism, as I prove herein, is irrelevant. What Boucher has inadvertently glimpsed is one small part of a mathematical tapestry portraying the simple but undeniable fact that physical computation can be fundamentally different from ordinary, "textbook" computation (whether parallel or sequential). This tapestry does indeed immediately imply that human cognition may be uncomputable
Bringsjord, Selmer (2004). The modal argument for hypercomputing minds. Theoretical Computer Science 317.   (Cited by 8 | Google | More links)
Bryant, Antony (2003). Cognitive informatics, distributed representation and embodiment. Brain and Mind 4 (2):215-228.   (Google | More links)
Abstract: This paper is a revised and extended version of a keynote contribution to a recent conference on Cognitive Informatics. It offers a brief summary of some of the core concerns of other contributions to the conference, highlighting the range of issues under discussion; and argues that many of the central concepts and preoccupations of cognitive informatics as understood by participants--and others in the general field of computation--rely on ill-founded realist assumptions, and what has been termed the functionalist view of representation. Even if such ideas--albeit in a revised form -- can be defended, there must be a more extensive engagement with the literature and issues outside the confines of the computing and computational orthodoxy
Buller, David J. (1993). Confirmation and the computational paradigm, or, why do you think they call it artificial intelligence? Minds and Machines 3 (2):155-81.   (Cited by 1 | Google | More links)
Abstract:   The idea that human cognitive capacities are explainable by computational models is often conjoined with the idea that, while the states postulated by such models are in fact realized by brain states, there are no type-type correlations between the states postulated by computational models and brain states (a corollary of token physicalism). I argue that these ideas are not jointly tenable. I discuss the kinds of empirical evidence available to cognitive scientists for (dis)confirming computational models of cognition and argue that none of these kinds of evidence can be relevant to a choice among competing computational models unless there are in fact type-type correlations between the states postulated by computational models and brain states. Thus, I conclude, research into the computational procedures employed in human cognition must be conducted hand-in-hand with research into the brain processes which realize those procedures
Cantwell Smith, Brian (2002). The foundations of computing. In Matthias Scheutz (ed.), Computationalism: New Directions. MIT Press.   (Cited by 1 | Google)
Cela-Conde, Camilo J. & Marty, Gisèle (1997). Mind architecture and brain architecture. Biology and Philosophy 12 (3):327-340.   (Cited by 1 | Google | More links)
Abstract:   The use of the computer metaphor has led to the proposal of mind architecture (Pylyshyn 1984; Newell 1990) as a model of the organization of the mind. The dualist computational model, however, has, since the earliest days of psychological functionalism, required that the concepts mind architecture and brain architecture be remote from each other. The development of both connectionism and neurocomputational science has sought to dispense with this dualism and provide general models of consciousness – a uniform cognitive architecture – which is in general reductionist, but which retains the computer metaphor. This paper examines, in the first place, the concepts of mind architecture and brain architecture, in order to evaluate the syntheses which have recently been offered. It then moves on to show how modifications which have been made to classical functionalist mind architectures, with the aim of making them compatible with brain architectures, are unable to resolve some of the most serious problems of functionalism. Some suggestions are given as to why it is not possible to relate mind structures and brain structures by using neurocomputational approaches, and finally the question is raised of the validity of reductionism in a theory which sets out to unite mind and brain architectures
Chalmers, David J. (ms). A computational foundation for the study of cognition.   (Cited by 44 | Annotation | Google | More links)
Abstract: Computation is central to the foundations of modern cognitive science, but its role is controversial. Questions about computation abound: What is it for a physical system to implement a computation? Is computation sufficient for thought? What is the role of computation in a theory of cognition? What is the relation between different sorts of computational theory, such as connectionism and symbolic computation? In this paper I develop a systematic framework that addresses all of these questions. Justifying the role of computation requires analysis of implementation, the nexus between abstract computations and concrete physical systems. I give such an analysis, based on the idea that a system implements a computation if the causal structure of the system mirrors the formal structure of the computation. This account can be used to justify the central commitments of artificial intelligence and computational cognitive science: the thesis of computational sufficiency, which holds that the right kind of computational structure suffices for the possession of a mind, and the thesis of computational explanation, which holds that computation provides a general framework for the explanation of cognitive processes. The theses are consequences of the facts that (a) computation can specify general patterns of causal organization, and (b) mentality is an organizational invariant, rooted in such patterns. Along the way I answer various challenges to the computationalist position, such as those put forward by Searle. I close by advocating a kind of minimal computationalism, compatible with a very wide variety of empirical approaches to the mind. This allows computation to serve as a true foundation for cognitive science
Cherniak, Christopher (1988). Undebuggability and cognitive science. Communications Of The ACM 31 (4):402-416.   (Cited by 13 | Google | More links)
Clark, Austen (1984). Seeing and summing: Implications of computational theories of vision. Cognition and Brain Theory 7 (1):1-23.   (Google)
Abstract: Marr's computational theory of stereopsis is shown to imply that human vision employs a system of representation which has all the properties of a number system. Claims for an internal number system and for neural computation should be taken literally. I show how these ideas withstand various skeptical attacks, and analyze the requirements for describing neural operations as computations. Neural encoding of numerals is shown to be distinct from our ability to measure visual physiology. The constructs in Marr's theory are neither propositional nor pictorial, and provide a counter example to many commonly held dichotomies concerning mental representation
Clarke, J. J. (1972). Turing machines and the mind-body problem. British Journal for the Philosophy of Science 23 (February):1-12.   (Cited by 1 | Google | More links)
Copeland, Jack (2002). Narrow versus wide mechanism. In Matthias Scheutz (ed.), Computationalism: New Directions. MIT Press.   (Cited by 42 | Google | More links)
Copeland, Jack (1994). Turing, Wittgenstein, and the science of the mind. Australasian Journal of Philosophy 72 (4):497-519.   (Cited by 2 | Google)
Cummins, Robert E. (1977). Programs in the explanation of behavior. Philosophy of Science 44 (June):269-87.   (Cited by 12 | Google | More links)
Demopoulos, William (1987). On some fundamental distinctions of computationalism. Synthese 70 (January):79-96.   (Cited by 9 | Annotation | Google | More links)
Abstract:   The following paper presents a characterization of three distinctions fundamental to computationalism, viz., the distinction between analog and digital machines, representation and nonrepresentation-using systems, and direct and indirect perceptual processes. Each distinction is shown to rest on nothing more than the methodological principles which justify the explanatory framework of the special sciences
Dietrich, Eric (1990). Computationalism. Social Epistemology 4 (2):135-154.   (Cited by 58 | Annotation | Google)
Dietrich, Eric (2000). Cognitive science and the mechanistic forces of darkness. Techné 5 (2).   (Google)
Dietrich, Eric (2001). It does so: Review of The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology. AI Magazine 22 (4):141-144.   (Google)
Abstract: Objections to AI and computational cognitive science are myriad. Accordingly, there are many different reasons for these attacks. But all of them come down to one simple observation: humans seem a lot smarter than computers -- not just smarter as in Einstein was smarter than I, or I am smarter than a chimpanzee, but more like I am smarter than a pencil sharpener. To many, computation seems like the wrong paradigm for studying the mind. (Actually, I think there are deeper and darker reasons why AI, especially, is so often the brunt of polemics, see Dietrich, 2000.) But the truth is this: AI is making exciting progress, and will one day make a robot as intelligent as a person; indeed the robot will be conscious. And all this is because of another truth: the computational paradigm is the best thing to come down the pike since the wheel
Dietrich, Eric (1990). Replies to my computational commentators. Social Epistemology 4 (October-December):369-375.   (Google)
Dietrich, Eric (1989). Semantics and the computational paradigm in cognitive psychology. Synthese 79 (April):119-41.   (Cited by 44 | Annotation | Google | More links)
Abstract:   There is a prevalent notion among cognitive scientists and philosophers of mind that computers are merely formal symbol manipulators, performing the actions they do solely on the basis of the syntactic properties of the symbols they manipulate. This view of computers has allowed some philosophers to divorce semantics from computational explanations. Semantic content, then, becomes something one adds to computational explanations to get psychological explanations. Other philosophers, such as Stephen Stich, have taken a stronger view, advocating doing away with semantics entirely. This paper argues that a correct account of computation requires us to attribute content to computational processes in order to explain which functions are being computed. This entails that computational psychology must countenance mental representations. Since anti-semantic positions are incompatible with computational psychology thus construed, they ought to be rejected. Lastly, I argue that in an important sense, computers are not formal symbol manipulators
Double, Richard (1987). The computational model of the mind and philosophical functionalism. Behaviorism 15:131-39.   (Google)
Dreyfus, Hubert L. & Haugeland, John (1974). The computer as a mistaken model of the mind. In Philosophy Of Psychology. Macmillan.   (Cited by 4 | Google)
Endicott, Ronald P. (1996). Searle, syntax, and observer-relativity. Canadian Journal of Philosophy 26 (1):101-22.   (Cited by 3 | Google)
Abstract: I critically examine some provocative arguments that John Searle presents in his book The Rediscovery of Mind to support the claim that the syntactic states of a classical computational system are "observer relative" or "mind dependent" or otherwise less than fully and objectively real. I begin by explaining how this claim differs from Searle's earlier and more well-known claim that the physical states of a machine, including the syntactic states, are insufficient to determine its semantics. In contrast, his more recent claim concerns the syntax, in particular, whether a machine actually has symbols to underlie its semantics. I then present and respond to a number of arguments that Searle offers to support this claim, including whether machine symbols are observer relative because the assignment of syntax is arbitrary, or linked to universal realizability, or linked to the sub-personal interpretive acts of a homunculus, or linked to a person's consciousness. I conclude that a realist about the computational model need not be troubled by such arguments. Their key premises need further support.
Ernandes, Marco (2005). Artificial intelligence & games: Should computational psychology be revalued? Topoi 24 (2):229-242.   (Google | More links)
Abstract: The aims of this paper are threefold: To show that game-playing (GP), the discipline of Artificial Intelligence (AI) concerned with the development of automated game players, has a strong epistemological relevance within both AI and the vast area of cognitive sciences. In this context games can be seen as a way of securely reducing (segmenting) real-world complexity, thus creating the laboratory environment necessary for testing the diverse types and facets of intelligence produced by computer models. This paper aims to promote the belief that games represent an excellent tool for the project of computational psychology (CP). To underline how, despite this, GP has mainly adopted an engineering-inspired methodology and in doing so has distorted the framework of cognitive functionalism. Many successes (i.e. chess, checkers) have been achieved refusing human-like reasoning. The AI has appeared to work well despite ignoring an intrinsic motivation, that of creating an explanatory link between machines and mind. To assert that substantial improvements in GP may be obtained in the future only by renewed interest in human-inspired models of reasoning and in other cognitive studies. In fact, if we increase the complexity of games (from NP-Completeness to AI-Completeness) in order to reproduce real-life problems, computer science techniques enter an impasse. Many of AI’s recent GP experiences can be shown to validate this. The lack of consistent philosophical foundations for cognitive AI and the minimal philosophical commitment of AI investigation are two of the major reasons that play an important role in explaining why CP has been overlooked
Fellows, Roger (1995). Welcome to wales: Searle on the computational theory of mind. In Philosophy and Technology. New York: Cambridge University Press.   (Cited by 1 | Google)
Fernandez, Jordi (2003). Explanation by computer simulation in cognitive science. Minds And Machines 13 (2):269-284.   (Cited by 1 | Google | More links)
Abstract:   My purpose in this essay is to clarify the notion of explanation by computer simulation in artificial intelligence and cognitive science. My contention is that computer simulation may be understood as providing two different kinds of explanation, which makes the notion of explanation by computer simulation ambiguous. In order to show this, I shall draw a distinction between two possible ways of understanding the notion of simulation, depending on how one views the relation in which a computing system that performs a cognitive task stands to the program that the system runs while performing that task. Next, I shall suggest that the kind of explanation that results from simulation is radically different in each case. In order to illustrate the difference, I will point out some prima facie methodological difficulties that need to be addressed in order to ensure that simulation plays a legitimate explanatory role in cognitive science, and I shall emphasize how those difficulties are very different depending on the notion of explanation involved
Fetzer, James H. (2000). Computing is at best a special kind of thinking. In The Proceedings of the Twentieth World Congress of Philosophy, Volume 9: Philosophy of Mind. Charlottesville: Philosophy Doc Ctr.   (Google)
Fetzer, James H. (1994). Mental algorithms: Are minds computational systems? Pragmatics and Cognition 21:1-29.   (Cited by 22 | Google)
Fetzer, James H. (1997). Thinking and computing: Computers as special kinds of signs. Minds and Machines 7 (3):345-364.   (Cited by 9 | Google | More links)
Abstract:   Cognitive science has been dominated by the computational conception that cognition is computation across representations. To the extent to which cognition as computation across representations is supposed to be a purposive, meaningful, algorithmic, problem-solving activity, however, computers appear to be incapable of cognition. They are devices that can facilitate computations on the basis of semantic grounding relations as special kinds of signs. Even their algorithmic, problem-solving character arises from their interpretation by human users. Strictly speaking, computers as such — apart from human users — are not only incapable of cognition, but even incapable of computation, properly construed. If we want to understand the nature of thought, then we have to study thinking, not computing, because they are not the same thing
Figdor, Carrie (2009). Semantic externalism and the mechanics of thought. Minds and Machines 19 (1):1-24.   (Google)
Abstract: I review a widely accepted argument to the conclusion that the contents of our beliefs, desires and other mental states cannot be causally efficacious in a classical computational model of the mind. I reply that this argument rests essentially on an assumption about the nature of neural structure that we have no good scientific reason to accept. I conclude that computationalism is compatible with wide semantic causal efficacy, and suggest how the computational model might be modified to accommodate this possibility
Fodor, Jerry A. (1978). Computation and reduction. Minnesota Studies in the Philosophy of Science 9.   (Cited by 10 | Google)
Fodor, Jerry A. (2000). The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology. MIT Press.   (Cited by 224 | Google | More links)
Abstract: Jerry Fodor argues against the widely held view that mental processes are largely computations, that the architecture of cognition is massively modular, and...
Garson, James W. (1993). Mice in mirrored mazes and the mind. Philosophical Psychology 6 (2):123-34.   (Annotation | Google)
Abstract: The computational theory of cognition (CTC) holds that the mind is akin to computer software. This article aims to show that CTC is incorrect because it is not able to distinguish the ability to solve a maze from the ability to solve its mirror image. CTC cannot do so because it only individuates brain states up to isomorphism. It is shown that a finer individuation that would distinguish left-handed from right-handed abilities is not compatible with CTC. The view is explored that CTC correctly individuates in an autonomous domain of the mental, leaving discrimination between left and right to some non-cognitive component of psychology such as physiology. I object by showing that the individuation provided by CTC does not properly describe in any domain. An embodied computational taxonomy, rather than software alone, is required for an adequate science of the mind
George, F. H. (1962). The Brain As A Computer. Addison-Wesley.   (Cited by 16 | Google)
Green, Christopher D. (2000). Is AI the right method for cognitive science? Psycoloquy 11 (61).   (Cited by 29 | Google | More links)
Grush, Rick & Churchland, Patricia S. (1998). Computation and the brain. In Robert A. Wilson & Frank F. Keil (eds.), MIT Encyclopedia of the Cognitive Sciences (MITECS). MIT Press.   (Google | More links)
Abstract: Two very different insights motivate characterizing the brain as a computer. One depends on mathematical theory that defines computability in a highly abstract sense. Here the foundational idea is that of a Turing machine. Not an actual machine, the Turing machine is really a conceptual way of making the point that any well-defined function could be executed, step by step, according to simple 'if-you-are-in-state-P-and-have-input-Q-then-do-R' rules, given enough time (maybe infinite time) [see COMPUTATION]. Insofar as the brain is a device whose input and output can be characterized in terms of some mathematical function -- however complicated -- then in that very abstract sense, it can be mimicked by a Turing machine. Given what is known so far, brains do seem to depend on cause-effect operations, and hence brains appear to be, in some formal sense, equivalent to a Turing machine [see CHURCH-TURING THESIS]. On its own, however, this reveals nothing at all of how the mind-brain actually works. The second insight depends on looking at the brain as a biological device that processes information from the environment to build complex representations that enable the brain to make predictions and select advantageous behaviors. Where necessary to avoid ambiguity, we will refer to the first notion of computation as..
Harnad, Stevan (1994). Computation is just interpretable symbol manipulation; cognition isn't. Minds and Machines 4 (4):379-90.   (Cited by 30 | Google | More links)
Abstract:   Computation is interpretable symbol manipulation. Symbols are objects that are manipulated on the basis of rules operating only on their shapes, which are arbitrary in relation to what they can be interpreted as meaning. Even if one accepts the Church/Turing Thesis that computation is unique, universal and very near omnipotent, not everything is a computer, because not everything can be given a systematic interpretation; and certainly everything can't be given every systematic interpretation. But even after computers and computation have been successfully distinguished from other kinds of things, mental states will not just be the implementations of the right symbol systems, because of the symbol grounding problem: The interpretation of a symbol system is not intrinsic to the system; it is projected onto it by the interpreter. This is not true of our thoughts. We must accordingly be more than just computers. My guess is that the meanings of our symbols are grounded in the substrate of our robotic capacity to interact with that real world of objects, events and states of affairs that our symbols are systematically interpretable as being about
Haugeland, John (2002). Authentic intentionality. In Matthias Scheutz (ed.), Computationalism: New Directions. MIT Press.   (Google)
Hershfield, Jeffrey (1998). Cognitivism and explanatory relativity. Canadian Journal of Philosophy 28 (4):505-526.   (Google)
Hershfield, Jeffrey (2005). Is there life after the death of the computational theory of mind? Minds and Machines 15 (2):183-194.   (Google | More links)
Higginbotham, James T. (1986). Comments on Peacocke's explanation in computational psychology. Mind and Language 1:358-361.   (Google)
Horst, Steven (1999). Symbols and computation: A critique of the computational theory of mind. Minds and Machines 9 (3):347-381.   (Cited by 2 | Google | More links)
Abstract:   Over the past several decades, the philosophical community has witnessed the emergence of an important new paradigm for understanding the mind.1 The paradigm is that of machine computation, and its influence has been felt not only in philosophy, but also in all of the empirical disciplines devoted to the study of cognition. Of the several strategies for applying the resources provided by computer and cognitive science to the philosophy of mind, the one that has gained the most attention from philosophers has been the Computational Theory of Mind (CTM). CTM was first articulated by Hilary Putnam (1960, 1961), but finds perhaps its most consistent and enduring advocate in Jerry Fodor (1975, 1980, 1981, 1987, 1990, 1994). It is this theory, and not any broader interpretations of what it would be for the mind to be a computer, that I wish to address in this paper. What I shall argue here is that the notion of symbolic representation employed by CTM is fundamentally unsuited to providing an explanation of the intentionality of mental states (a major goal of CTM), and that this result undercuts a second major goal of CTM, sometimes referred to as the vindication of intentional psychology. This line of argument is related to the discussions of derived intentionality by Searle (1980, 1983, 1984) and Sayre (1986, 1987). But whereas those discussions seem to be concerned with the causal dependence of familiar sorts of symbolic representation upon meaning-bestowing acts, my claim is rather that there is not one but several notions of meaning to be had, and that the notions that are applicable to symbols are conceptually dependent upon the notion that is applicable to mental states in the fashion that Aristotle referred to as paronymy.
That is, an analysis of the notions of meaning applicable to symbols reveals that they contain presuppositions about meaningful mental states, much as Aristotle's analysis of the sense of healthy that is applied to foods reveals that it means conducive to having a healthy body, and hence any attempt to explain mental semantics in terms of the semantics of symbols is doomed to circularity and regress. I shall argue, however, that this does not have the consequence that computationalism is bankrupt as a paradigm for cognitive science, as it is possible to reconstruct CTM in a fashion that avoids these difficulties and makes it a viable research framework for psychology, albeit at the cost of losing its claims to explain intentionality and to vindicate intentional psychology. I have argued elsewhere (Horst, 1996) that local special sciences such as psychology do not require vindication in the form of demonstrating their reducibility to more fundamental theories, and hence failure to make good on these philosophical promises need not compromise the broad range of work in empirical cognitive science motivated by the computer paradigm in ways that do not depend on these problematic treatments of symbols
Horst, Steven (1996). Symbols, Computation, and Intentionality: A Critique of the Computational Theory of Mind. University of California Press.   (Cited by 29 | Google)
Horst, Steven (online). The computational theory of mind. Stanford Encyclopedia of Philosophy.   (Cited by 10 | Google)
Horgan, Terence E. (2002). Themes in my philosophical work. In Johannes L. Brandl (ed.), Essays on the Philosophy of Terence Horgan. Atlanta: Rodopi.   (Cited by 1 | Google | More links)
Abstract: I invoked the notion of supervenience in my doctoral dissertation, Microreduction and the Mind-Body Problem, completed at the University of Michigan in 1974 under the direction of Jaegwon Kim. I had been struck by the appeal to supervenience in Hare (1952), a classic work in twentieth century metaethics that I studied at Michigan in a course on metaethics taught by William Frankena; and I also had been struck by the brief appeal to supervenience in Davidson (1970). Kim was already, in effect, construing the relation between physical and mental properties as a supervenience relation, although he was not yet using the word 'supervenience'. I assumed that a materialistic metaphysics was correct, and that integral to materialism is the idea that higher-level sciences (including psychology) are reducible to lower-level ones, ultimately to microphysics. One idea I pressed in the dissertation was that biconditional 'bridge laws' would not suffice for genuine intertheoretic reduction if these inter-level laws were additional fundamental laws of nature alongside those of the reducing science; they would be what Herbert Feigl and J.J. C. Smart, in their writings on the psychophysical identity theory, called 'nomological danglers'. I argued that the higher-level property in a bridge law should bear a relation of strict supervenience to its correlated lower-level property, rather than merely being nomically correlated with it. The basic idea was that there are no two physically possible worlds w1 and w2 (where a physically possible world is, roughly, a world in which the laws of microphysics obtain and in which there are no nonphysical substances like entelechies or Cartesian souls) such that the actual-world bridge laws obtain in world w1 but not in world w2. (Thus, the bridge laws themselves are fixed relative to the fundamental physical facts and fundamental laws, rather than being fundamental laws themselves alongside those of microphysics.) Already when
Humphreys, Glyn W. & Quinlan, Philip T. (1986). Comments on Peacocke's explanation in computational psychology. Mind and Language 1:355-357.   (Google)
Kuczynski, John-Michael M. (2006). Formal operations and simulated thought. Philosophical Explorations 9 (2):221-234.   (Google | More links)
Abstract: For reasons internal to the concepts of thought and causality, a series of representations must be semantics-driven if that series is to add up to a single, unified thought. Where semantics is not operative, there is at most a series of disjoint representations that add up to nothing true or false, and therefore do not constitute a thought at all. There is necessarily a gulf between simulating thought, on the one hand, and actually thinking, on the other. It doesn't matter how perfect the simulation is; nor does it matter how reliable the causal mechanism involved is. Where semantics is inert, there is no thought. In connection with this, this paper also argues that a popular doctrine - the so-called 'computational theory of mind' (CTM) - is based on a confusion. CTM is the view that thought-processes consist in 'computations', where a computation is defined as a 'form-driven' operation on symbols. The expression 'form-driven operation' is ambiguous, and may refer either to syntax-driven operations or to morphology-driven operations. Syntax-driven operations presuppose the existence of operations that are driven by semantic and extra-semantic knowledge. So CTM is false if the terms 'computation' and 'form-driven operation' are taken to refer to syntax-driven operations. So if CTM is to work, those expressions must be taken to refer to morphology-driven operations. But, as previously stated, an operation must be semantics-driven if it is to qualify as a thought. Thus CTM fails on every disambiguation of the expressions 'formal operation' and 'computation'
Kuczynski, John-Michael M. (2006). Two concepts of "form" and the so-called computational theory of mind. Philosophical Psychology 19 (6):795-821.   (Google | More links)
Abstract: According to the computational theory of mind (CTM), to think is to compute. But what is meant by the word 'compute'? The generally given answer is this: Every case of computing is a case of manipulating symbols, but not vice versa - a manipulation of symbols must be driven exclusively by the formal properties of those symbols if it is to qualify as a computation. In this paper, I will present the following argument. Words like 'form' and 'formal' are ambiguous, as they can refer to form in either the syntactic or the morphological sense. CTM fails on each disambiguation, and the arguments for CTM immediately cease to be compelling once we register that ambiguity. The terms 'mechanical' and 'automatic' are comparably ambiguous. Once these ambiguities are exposed, it turns out that there is no possibility of mechanizing thought, even if we confine ourselves to domains (such as first-order sentential logic) where all problems can be settled through decision-procedures. The impossibility of mechanizing thought thus has nothing to do with recherché mathematical theorems, such as those proven by Gödel and Rosser. A related point is that CTM involves, and is guilty of reinforcing, a misunderstanding of the concept of an algorithm
Lih, Ko-wei (1995). Should we care if the brain is a computer? In Mind and Cognition. Taipei: Inst Euro-Amer Stud.   (Google)
Ludwig, Kirk & Schneider, Susan (2008). Fodor's challenge to the classical computational theory of mind. Mind and Language 23 (1):123–143.   (Google | More links)
Abstract: In The Mind Doesn’t Work that Way, Jerry Fodor argues that mental representations have context sensitive features relevant to cognition, and that, therefore, the Classical Computational Theory of Mind (CTM) is mistaken. We call this the Globality Argument. This is an in principle argument against CTM. We argue that it is self-defeating. We consider an alternative argument constructed from materials in the discussion, which avoids the pitfalls of the official argument. We argue that it is also unsound and that, while it is an empirical issue whether context sensitive features of mental representations are relevant to cognition, it is empirically implausible
McDermott, Drew (2001). The digital computer as red Herring. Psycoloquy 12 (54).   (Cited by 1 | Google | More links)
Abstract: Stevan Harnad correctly perceives a deep problem in computationalism, the hypothesis that cognition is computation, namely, that the symbols manipulated by a computational entity do not automatically mean anything. Perhaps, he proposes, transducers and neural nets will not have this problem. His analysis goes wrong from the start, because computationalism is not as rigid a set of theories as he thinks. Transducers and neural nets are just two kinds of computational system, among many, and any solution to the semantic problem that works for them will work for most other computational systems
Mellor, D. H. (1989). How much of the mind is a computer. In Peter Slezak (ed.), Computers, Brains and Minds. Kluwer.   (Cited by 5 | Annotation | Google)
Mellor, D. H. (1984). What is computational psychology? II. Proceedings of the Aristotelian Society 58:37-53.   (Google)
Moor, James H. (2000). Thinking must be computation of the right kind. In The Proceedings of the Twentieth World Congress of Philosophy, Volume 9: Philosophy of Mind. Charlottesville: Philosophy Doc Ctr.   (Cited by 1 | Google)
Nelson, Raymond J. (1987). Machine models for cognitive science. Philosophy of Science 54 (September):391-408.   (Annotation | Google | More links)
Peacocke, Christopher (1986). Reply to Humphreys, Quinlan, Higginbotham, Schiffer and Soames's comments on Peacocke's Explanation in Computational Psychology. Mind and Language 1:388-402.   (Google)
Piccinini, Gualtiero (2003). Computations and Computers in the Sciences of Mind and Brain. Dissertation, University of Pittsburgh.   (Cited by 13 | Google | More links)
Abstract: Computationalism says that brains are computing mechanisms, that is, mechanisms that perform computations. At present, there is no consensus on how to formulate computationalism precisely or adjudicate the dispute between computationalism and its foes, or between different versions of computationalism. An important reason for the current impasse is the lack of a satisfactory philosophical account of computing mechanisms. The main goal of this dissertation is to offer such an account.
I also believe that the history of computationalism sheds light on the current debate. By tracing different versions of computationalism to their common historical origin, we can see how the current divisions originated and understand their motivation. Reconstructing debates over computationalism in the context of their own intellectual history can contribute to philosophical progress on the relation between brains and computing mechanisms and help determine how brains and computing mechanisms are alike, and how they differ. Accordingly, my dissertation is divided into a historical part, which traces the early history of computationalism up to 1946, and a philosophical part, which offers an account of computing mechanisms.
The two main ideas developed in this dissertation are that (1) computational states are to be identified functionally not semantically, and (2) computing mechanisms are to be studied by functional analysis. The resulting account of computing mechanisms, which I call the functional account of computing mechanisms, can be used to identify computing mechanisms and the functions they compute. I use the functional account of computing mechanisms to taxonomize computing mechanisms based on their different computing power, and I use this taxonomy of computing mechanisms to taxonomize different versions of computationalism based on the functional properties that they ascribe to brains. By doing so, I begin to tease out empirically testable statements about the functional organization of the brain that different versions of computationalism are committed to. I submit that when computationalism is reformulated in the more explicit and precise way I propose, the disputes about computationalism can be adjudicated on the grounds of empirical evidence from neuroscience.
Piccinini, Gualtiero & Scarantino, Andrea, Computation vs. information processing: Why their difference matters to cognitive science.   (Google | More links)
Abstract: Since the cognitive revolution, it’s become commonplace that cognition involves both computation and information processing. Is this one claim or two? Is computation the same as information processing? The two terms are often used interchangeably, but this usage masks important differences. In this paper, we distinguish information processing from computation and examine some of their mutual relations, shedding light on the role each can play in a theory of cognition. We recommend that theorists of cognition be explicit and careful in choosing notions of computation and information and connecting them together. Much confusion can be avoided by doing so. Keywords: computation, information processing, computationalism, computational theory of mind, cognitivism
Piccinini, Gualtiero (2007). Computational explanation and mechanistic explanation of mind. In Francesco Ferretti, Massimo Marraffa & Mario De Caro (eds.), Cartographies of the Mind: The Interface between Philosophy and Cognitive Science. Springer.   (Cited by 2 | Google | More links)
Abstract: According to the computational theory of mind (CTM), mental capacities are explained by inner computations, which in biological organisms are realized in the brain. Computational explanation is so popular and entrenched that it’s common for scientists and philosophers to assume CTM without argument.
Piccinini, Gualtiero (2004). Functionalism, Computationalism, & Mental States. Studies in the History and Philosophy of Science 35:811-833.   (Google)
Abstract: Some philosophers have conflated functionalism and computationalism. I reconstruct how this came about and uncover two assumptions that made the conflation possible. They are the assumptions that (i) psychological functional analyses are computational descriptions and (ii) everything may be described as performing computations. I argue that, if we want to improve our understanding of both the metaphysics of mental states and the functional relations between them, we should reject these assumptions.
Piccinini, Gualtiero (2004). Functionalism, computationalism, and mental contents. Canadian Journal of Philosophy 34 (3):375-410.   (Cited by 13 | Google | More links)
Piccinini, Gualtiero (2005). Symbols, strings, and spikes. unpublished.   (Cited by 1 | Google)
Abstract: I argue that neural activity, strictly speaking, is not computation. This is because computation, strictly speaking, is the processing of strings of symbols, and neuroscience shows that there are no neural strings of symbols. This has two consequences. On the one hand, the following widely held consequences of computationalism must either be abandoned or supported on grounds independent of computationalism: (i) that in principle we can capture what is functionally relevant to neural processes in terms of some formalism taken from computability theory (such as Turing Machines), (ii) that it is possible to design computer programs that are functionally equivalent to neural processes in the same sense in which it is possible to design computer programs that are functionally equivalent to each other, (iii) that the study of neural (or mental) computation is independent of the study of neural implementation, (iv) that the Church-Turing thesis applies to neural activity in the sense in which it applies to digital computers. On the other hand, we need to gradually reinterpret or replace computational theories in psychology in terms of theoretical constructs that can be realized by known neural processes, such as the spike trains of neuronal ensembles.
Piccinini, Gualtiero (forthcoming). The mind as neural software? Revisiting functionalism, computationalism, and computational functionalism. Philosophy and Phenomenological Research.   (Cited by 2 | Google | More links)
Abstract: Defending or attacking either functionalism or computationalism requires clarity on what they amount to and what evidence counts for or against them. My goal here is not to evaluate their plausibility. My goal is to formulate them and their relationship clearly enough that we can determine which type of evidence is relevant to them. I aim to dispel some sources of confusion that surround functionalism and computationalism, recruit recent philosophical work on mechanisms and computation to shed light on them, and clarify how functionalism and computationalism may or may not legitimately come together.
Piccinini, Gualtiero, The resilience of computationalism.   (Google | More links)
Abstract: Roughly speaking, computationalism says that cognition is computation, or that cognitive phenomena are explained by the agent's computations. The cognitive processes and behavior of agents are the explanandum. The computations performed by the agents' cognitive systems are the proposed explanans. Since the cognitive systems of biological organisms are their nervous systems (plus or minus a bit), we may say that according to computationalism, the cognitive processes and behavior of organisms are explained by neural computations. Some people might prefer to say that cognitive systems are "realized" by nervous systems, and thus that, according to computationalism, cognitive computations are "realized" by neural processes. In this paper, nothing hinges on the nature of the relation between cognitive systems and nervous systems, or between computations and neural processes. For present purposes, if a neural process realizes a computation, then that neural process is a computation. Thus, I will couch much of my discussion in terms of nervous systems and neural computation. Before proceeding, we should dispense with a possible red herring. Contrary to a common assumption, computationalism does not stand in opposition to connectionism. Connectionism, in the most general and common sense of the term, is the claim that cognitive phenomena are explained (at some level and at least in part) by the processes of neural networks. This is a truism, supported by most neuroscientific evidence. Everybody ought to be a connectionist in this general sense. The relevant question is, are neural processes computations? More precisely, are the neural processes to be found in the nervous systems of organisms computations? Computationalists say "yes", anti-computationalists say "no".
This paper investigates whether any of the arguments on offer against computationalism have a chance at knocking it off. Ever since Warren McCulloch and Walter Pitts (1943) first proposed it, computationalism has been subjected to a wide range of objections.
Pietroski, Paul M. (1996). Experiencing the facts: Critical notice of Mind and World, by John McDowell. Canadian Journal of Philosophy 26:613-36.   (Google)
Abstract: Paul Pietroski, McGill University. The general topic of _Mind and World_, the written version of John McDowell's 1991 John Locke Lectures, is how `concepts mediate the relation between minds and the world'. And one of the main aims is `to suggest that Kant should still have a central place in our discussion of the way thought bears on reality' (1). In particular, McDowell urges us to adopt a thesis that he finds in Kant, or perhaps in Strawson's Kant: the content of experience is conceptualized; _what_ we experience is always the kind of thing that we could also believe. When an agent has a veridical experience, she `takes in, for instance sees, _that things are thus and so_' (9). McDowell's argument for this thesis is indirect, but potentially powerful. He discusses a tension concerning the roles of experience and conceptual capacities in thought, and he claims that the only adequate resolution involves granting that experiences have conceptualized content. The tension, elaborated below, can be expressed roughly as follows: judgments must be somehow constrained by features of the external environment, else judgments would be utterly divorced from the world they purport to be about; yet our judgments must be somehow free of external control, else we could give no sense to the idea that we are responsible for our judgments
Pollock, John L. (1989). How to Build a Person: A Prolegomenon. MIT Press.   (Cited by 40 | Google)
Abstract: Pollock describes an exciting theory of rationality and its partial implementation in OSCAR, a computer system whose descendants will literally be persons.
Pylyshyn, Zenon W. (1980). Computation and cognition: Issues in the foundation of cognitive science. Behavioral and Brain Sciences 3:111-32.   (Cited by 111 | Google)
Pylyshyn, Zenon W. (1984). Computation and Cognition. MIT Press.   (Cited by 1190 | Annotation | Google | More links)
Pylyshyn, Zenon W. (1989). Computing and cognitive science. In Michael I. Posner (ed.), Foundations of Cognitive Science. MIT Press.   (Cited by 61 | Annotation | Google | More links)
Abstract: One of the principal characteristics that distinguishes Cognitive Science from more traditional studies of cognition within Psychology is the extent to which it has been influenced by both the ideas and the techniques of computing. It may come as a surprise to the outsider, then, to discover that there is no unanimity within the discipline on either (a) the nature (and in some cases the desirability) of the influence and (b) what computing is, or at least on its
Pylyshyn, Zenon W. (1978). Computational models and empirical constraints. Behavioral and Brain Sciences 1:98-128.   (Cited by 15 | Google)
Pylyshyn, Zenon W. (ed.) (1986). Meaning And Cognitive Structure: Issues In The Computational Theory Of Mind. Norwood: Ablex.   (Cited by 10 | Google)
Quilici Gonzalez, Maria Eunice (2005). Information and mechanical models of intelligence: What can we learn from cognitive science? Pragmatics and Cognition 13 (3):565-582.   (Google)
Rapaport, William J. (1998). How minds can be computational systems. Journal of Experimental and Theoretical Artificial Intelligence 10 (4):403-419.   (Cited by 18 | Google | More links)
Abstract: The proper treatment of computationalism, as the thesis that cognition is computable, is presented and defended. Some arguments of James H. Fetzer against computationalism are examined and found wanting, and his positive theory of minds as semiotic systems is shown to be consistent with computationalism. An objection is raised to an argument of Selmer Bringsjord against one strand of computationalism, namely, that Turing-Test-passing artifacts are persons; it is argued that, whether or not this objection holds, such artifacts will inevitably be persons
Rellihan, Matthew J. (2009). Fodor's Riddle of abduction. Philosophical Studies 144 (2).   (Google)
Abstract: How can abductive reasoning be physical, feasible, and reliable? This is Fodor’s riddle of abduction, and its apparent intractability is the cause of Fodor’s recent pessimism regarding the prospects for cognitive science. I argue that this riddle can be solved if we augment the computational theory of mind to allow for non-computational mental processes, such as those posited by classical associationists and contemporary connectionists. The resulting hybrid theory appeals to computational mechanisms to explain the semantic coherence of inference and associative mechanisms to explain the efficient retrieval of relevant information from memory. The interaction of these mechanisms explains how abduction can be physical, feasible, and reliable
Rey, Georges (2003). Why Wittgenstein ought to have been a computationalist (and what a computationalist can gain from Wittgenstein). Croatian Journal of Philosophy 3 (9):231-264.   (Cited by 3 | Google)
Rosenberg, A. & Mackintosh, N. J. (1973). On Fodor's distinction between strong and weak equivalence in machine simulation. Philosophy of Science 40 (March):118-120.   (Cited by 1 | Google | More links)
Rosenberg, A. & Mackintosh, N. J. (1974). Strong, weak and functional equivalence in machine simulation. Philosophy of Science 41 (December):412-414.   (Google | More links)
Schiffer, Stephen R. (1986). Comments on Peacocke's explanation in computational psychology. Mind and Language 1:362-371.   (Google)
Scheutz, Matthias (2002). Computationalism: The next generation. In Matthias Scheutz (ed.), Computationalism: New Directions. MIT Press.   (Cited by 3 | Google | More links)
Scheutz, Matthias (2001). Computational vs. causal complexity. Minds And Machines 11 (4):543-566.   (Cited by 5 | Google | More links)
Abstract:   The main claim of this paper is that notions of implementation based on an isomorphic correspondence between physical and computational states are not tenable. Rather, ``implementation'' has to be based on the notion of ``bisimulation'' in order to be able to block unwanted implementation results and incorporate intuitions from computational practice. A formal definition of implementation is suggested, which satisfies theoretical and practical requirements and may also be used to make the functionalist notion of ``physical realization'' precise. The upshot of this new definition of implementation is that implementation cannot distinguish isomorphic bisimilar from non-isomorphic bisimilar systems anymore, thus driving a wedge between the notions of causal and computational complexity. While computationalism does not seem to be affected by this result, the consequences for functionalism are not clear and need further investigations
Schneider, Susan (2009). LOT, CTM, and the elephant in the room. Synthese 170 (2):235-250.   (Google | More links)
Abstract: According to the language of thought (LOT) approach and the related computational theory of mind (CTM), thinking is the processing of symbols in an inner mental language that is distinct from any public language. Herein, I explore a deep problem at the heart of the LOT/CTM program—it has yet to provide a plausible conception of a mental symbol
Schneider, Susan (2009). Mindscan: Transcending and enhancing the human brain. In Susan Schneider (ed.), Science Fiction and Philosophy.   (Google | More links)
Abstract: Suppose it is 2025 and being a technophile, you purchase brain enhancements as they become readily available. First, you add a mobile internet connection to your retina, then, you enhance your working memory by adding neural circuitry. You are now officially a cyborg. Now skip ahead to 2040. Through nanotechnological therapies and enhancements you are able to extend your lifespan, and as the years progress, you continue to accumulate more far-reaching enhancements. By 2060, after several small but cumulatively profound alterations, you are a “posthuman.” To quote philosopher Nick Bostrom, posthumans are possible future beings, “whose basic capacities so radically exceed those of present humans as to be no longer unambiguously human by our current standards” (Bostrom 2003c). At this point, your intelligence is enhanced not just in terms of speed of mental processing; you are now able to make rich connections that you were not able to make before. Unenhanced humans, or “naturals,” seem to you to be intellectually disabled—you have little in common with them—but as a transhumanist, you are supportive of their right to not enhance (Bostrom 2003c; Garreau 2005; Kurzweil 2005)
Scheutz, Matthias (2002). New computationalism. Conceptus Studien 14.   (Cited by 12 | Google | More links)
Scheutz, Matthias & Peschl, Markus F. (2001). Some thoughts on computation and simulation in cognitive science. In Proceedings of the Sixth Congress of the Austrian Philosophical Society.   (Google)
Scheutz, Matthias (2000). The cognitive computational story. Conceptus Studien 14:136-152.   (Google)
Schneider, Susan (2009). The language of thought. In John Symons & Paco Calvo (eds.), Routledge Companion to Philosophy of Psychology. Routledge.   (Google)
Abstract: According to the language of thought (or
Schneider, Susan (forthcoming). The nature of primitive symbols in the language of thought. Mind and Language.   (Google | More links)
Abstract: This paper provides a theory of the nature of symbols in the language of thought (LOT). My discussion consists in three parts. In part one, I provide three arguments for the individuation of primitive symbols in terms of total computational role. The first of these arguments claims that Classicism requires that primitive symbols be typed in this manner; no other theory of typing will suffice. The second argument contends that without this manner of symbol individuation, there will be computational processes that fail to supervene on syntax, together with the rules of composition and the computational algorithms. The third argument says that cognitive science needs a natural kind that is typed by total computational role. Otherwise, either cognitive science will be incomplete, or its laws will have counterexamples. Then, part two defends this view from a criticism, offered by both Jerry Fodor and Jesse Prinz, who respond to my view with the charge that because the types themselves are individuated
Schneider, Susan, Yes, it does: A diatribe on Jerry Fodor's The Mind Doesn't Work That Way.   (Google)
Abstract: The Mind Doesn’t Work That Way is an exposé of certain theoretical problems in cognitive science, and in particular, problems that concern the Classical Computational Theory of Mind (CTM). The problems that Fodor worries plague CTM divide into two kinds, and both purport to show that the success of cognitive science will likely be limited to the modules. The first sort of problem concerns what Fodor has called “global properties”; features that a mental sentence has which depend on how the sentence interacts with a larger plan (i.e., set of sentences), rather than the type identity of the sentence alone. The second problem concerns what many have called “The Relevance Problem”: the problem of whether and how humans determine what is relevant in a computational manner. However, I argue that the problem that Fodor believes global properties pose for CTM is a non-problem, and that further, while the relevance problem is a serious research issue, it does not justify the grim view that cognitive science, and CTM in particular, will likely fail to explain cognition
Scott, Dana S. (1990). The computational conception of mind in acting and reflecting: The interdisciplinary turn. In Philosophy. Norwell: Kluwer.   (Google)
Searle, John R. (1990). Is the brain a digital computer? Proceedings and Addresses of the American Philosophical Association 64 (November):21-37.   (Cited by 86 | Annotation | Google | More links)
Abstract: There are different ways to present a Presidential Address to the APA; the one I have chosen is simply to report on work that I am doing right now, on work in progress. I am going to present some of my further explorations into the computational model of the mind.
Shapiro, Stuart C. (1995). Computationalism. Minds and Machines 5 (4):467-87.   (Cited by 11 | Google | More links)
Abstract:   Computationalism, the notion that cognition is computation, is a working hypothesis of many AI researchers and Cognitive Scientists. Although it has not been proved, neither has it been disproved. In this paper, I give some refutations to some well-known alleged refutations of computationalism. My arguments have two themes: people are more limited than is often recognized in these debates; computer systems are more complicated than is often recognized in these debates. To underline the latter point, I sketch the design and abilities of a possible embodied computer system
Shapiro, Stuart C. & Rapaport, William J. (1991). Models and minds. In Robert E. Cummins & John L. Pollock (eds.), Philosophy and AI. Cambridge: MIT Press.   (Cited by 41 | Google)
Shagrir, Oron (2006). Why we view the brain as a computer. Synthese 153 (3):393-416.   (Cited by 6 | Google | More links)
Abstract: The view that the brain is a sort of computer has functioned as a theoretical guideline both in cognitive science and, more recently, in neuroscience. But since we can view every physical system as a computer, it has been less than clear what this view amounts to. By considering in some detail a seminal study in computational neuroscience, I first suggest that neuroscientists invoke the computational outlook to explain regularities that are formulated in terms of the information content of electrical signals. I then indicate why computational theories have explanatory force with respect to these regularities: in a nutshell, they underscore correspondence relations between formal/mathematical properties of the electrical signals and formal/mathematical properties of the represented objects. I finally link my proposal to the philosophical thesis that content plays an essential role in computational taxonomy.
Sloman, Aaron (2002). Architecture-based conceptions of mind. In Peter Gardenfors, Katarzyna Kijania-Placek & Jan Wolenski (eds.), In the Scope of Logic, Methodology, and Philosophy of Science (Vol II). Kluwer.   (Cited by 27 | Google | More links)
Sloman, Aaron (1996). What sort of architecture is required for a human-like agent? In Ramakrishna K. Rao (ed.), Foundations of Rational Agency. Kluwer Academic Publishers.   (Cited by 95 | Google | More links)
Abstract: This paper is about how to give human-like powers to complete agents. For this the most important design choice concerns the overall architecture. Questions regarding detailed mechanisms, forms of representations, inference capabilities, knowledge etc. are best addressed in the context of a global architecture in which different design decisions need to be linked. Such a design would assemble various kinds of functionality into a complete coherent working system, in which there are many concurrent, partly independent, partly mutually supportive, partly potentially incompatible processes, addressing a multitude of issues on different time scales, including asynchronous, concurrent motive generators. Designing human-like agents is part of the more general problem of understanding design space, niche space and their interrelations, for, in the abstract, there is no one optimal design, as biological diversity on earth shows.
Soames, Scott (1986). Comments on Peacocke's explanation in computational psychology. Mind and Language 1:372-387.   (Google)
Sterelny, Kim (1989). Computational functional psychology: Problems and prospects. In Peter Slezak (ed.), Computers, Brains and Minds. Kluwer.   (Cited by 4 | Annotation | Google)
Sutherland, N. S. (1974). Computer simulation of brain function. In Philosophy Of Psychology. Macmillan.   (Cited by 2 | Google)
Szymanik, Jakub (2009). Quantifiers in TIME and SPACE. Computational Complexity of Generalized Quantifiers in Natural Language. Dissertation, University of Amsterdam   (Google)
Abstract: In the dissertation we study the complexity of generalized quantifiers in natural language. Our perspective is interdisciplinary: we combine philosophical insights with theoretical computer science, experimental cognitive science and linguistic theories. In Chapter 1 we argue for identifying a part of meaning, the so-called referential meaning (model-checking), with algorithms. Moreover, we discuss the influence of computational complexity theory on cognitive tasks. We give some arguments to treat as cognitively tractable only those problems which can be computed in polynomial time. Additionally, we suggest that plausible semantic theories of the everyday fragment of natural language can be formulated in the existential fragment of second-order logic. In Chapter 2 we give an overview of the basic notions of generalized quantifier theory, computability theory, and descriptive complexity theory. In Chapter 3 we prove that PTIME quantifiers are closed under iteration, cumulation and resumption. Next, we discuss the NP-completeness of branching quantifiers. Finally, we show that some Ramsey quantifiers define NP-complete classes of finite models while others stay in PTIME. We also give a sufficient condition for a Ramsey quantifier to be computable in polynomial time. In Chapter 4 we investigate the computational complexity of polyadic lifts expressing various readings of reciprocal sentences with quantified antecedents. We show a dichotomy between these readings: the strong reciprocal reading can create NP-complete constructions, while the weak and the intermediate reciprocal readings do not. Additionally, we argue that this difference should be acknowledged in the Strong Meaning hypothesis. In Chapter 5 we study the definability and complexity of the type-shifting approach to collective quantification in natural language. 
We show that under reasonable complexity assumptions it is not general enough to cover the semantics of all collective quantifiers in natural language. The type-shifting approach cannot lead outside second-order logic and arguably some collective quantifiers are not expressible in second-order logic. As a result, we argue that algebraic (many-sorted) formalisms dealing with collectivity are more plausible than the type-shifting approach. Moreover, we suggest that some collective quantifiers might not be realized in everyday language due to their high computational complexity. Additionally, we introduce the so-called second-order generalized quantifiers to the study of collective semantics. In Chapter 6 we study the statement known as Hintikka's thesis: that the semantics of sentences like ``Most boys and most girls hate each other'' is not expressible by linear formulae and one needs to use branching quantification. We discuss possible readings of such sentences and come to the conclusion that they are expressible by linear formulae, as opposed to what Hintikka states. Next, we propose empirical evidence confirming our theoretical predictions that these sentences are sometimes interpreted by people as having the conjunctional reading. In Chapter 7 we discuss a computational semantics for monadic quantifiers in natural language. We recall that it can be expressed in terms of finite-state and push-down automata. Then we present and criticize the neurological research building on this model. The discussion leads to a new experimental set-up which provides empirical evidence confirming the complexity predictions of the computational model. We show that the differences in reaction time needed for comprehension of sentences with monadic quantifiers are consistent with the complexity differences predicted by the model. 
In Chapter 8 we discuss some general open questions and possible directions for future research, e.g., using different measures of complexity, involving game-theory and so on. In general, our research explores, from different perspectives, the advantages of identifying meaning with algorithms and applying computational complexity analysis to semantic issues. It shows the fruitfulness of such an abstract computational approach for linguistics and cognitive science.
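The automata-theoretic semantics for monadic quantifiers mentioned in the abstract (Chapter 7) can be illustrated with a small sketch. This is a hypothetical example, not code from the dissertation: it treats a finite model as a sequence of flags over the restrictor set and shows why "every" and "some" need only finite memory (a finite-state automaton) while "most" needs one unbounded counter (a push-down automaton).

```python
# Model-checking monadic quantifiers on a finite model, in the spirit of
# the automata-theoretic semantics. The model of "Q As are B" is a list
# of booleans: one entry per element of A, True iff that element is in B.

def every(flags):
    # Finite automaton with two states: reject as soon as one element
    # of A falls outside B.
    return all(flags)

def some(flags):
    # Finite automaton: accept as soon as one A-element is in B.
    return any(flags)

def most(flags):
    # One-counter (push-down) automaton: +1 for each element in A and B,
    # -1 otherwise; accept iff the counter ends positive, i.e.
    # |A intersect B| > |A minus B|.
    counter = 0
    for in_b in flags:
        counter += 1 if in_b else -1
    return counter > 0

# "Most boys sleep" in a model with 5 boys, 3 of whom sleep:
flags = [True, True, True, False, False]
print(every(flags), some(flags), most(flags))  # False True True
```

The counter in `most` is what pushes the quantifier beyond finite-state recognizability, which is the complexity difference the reaction-time experiments described in Chapter 7 are designed to detect.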
Tibbetts, Paul E. (1996). Residual dualism in computational theories of mind. Dialectica 50 (1):37-52.   (Google | More links)
van Gelder, Tim (1998). Computers and computation in cognitive science. In T.M. Michalewicz (ed.), Advances in Computational Life Sciences Vol.2: Humans to Proteins. Melbourne: CSIRO Publishing.   (Cited by 2 | Google)
Abstract: Digital computers play a special role in cognitive science—they may actually be instances of the phenomenon they are being used to model. This paper surveys some of the main issues involved in understanding the relationship between digital computers and cognition. It sketches the role of digital computers within orthodox computational cognitive science, in the light of a recently emerging alternative approach based around dynamical systems.
Wagman, Morton (1991). Cognitive Science and Concepts of Mind: Toward a General Theory of Human and Artificial Intelligence. New York: Praeger.   (Cited by 7 | Google)
Wing, Jeannette M. (2006). Computational thinking. Communications of the ACM 49 (3):33-35.   (Cited by 12 | Google | More links)
Abstract: Computational thinking builds on the power and limits of computing processes, whether they are executed by a human or by a machine. Computational methods and models give us the courage to solve problems and design systems that no one of us would be capable of tackling alone. Computational thinking confronts the riddle of machine intelligence: What can humans do better than computers? and What can computers do better than humans? Most fundamentally it addresses the question: What is computable? Today, we know only parts of the answers to such questions. Computational thinking is a fundamental skill for everyone, not just for computer scientists. To reading, writing, and arithmetic, we should add computational thinking to every child’s analytical ability. Just as the printing press facilitated the spread of the three Rs, what is appropriately incestuous about this vision is that computing and computers facilitate the spread of computational thinking. Stating the difficulty of a problem accounts for the underlying power of the machine—the computing device that will run the solution. We must consider the machine’s instruction set, its resource constraints, and its operating environment. In solving a problem efficiently, we might further ask whether an approximate solution is good enough, whether we can use randomization to our advantage, and whether false positives or false negatives are allowed. Computational thinking is reformulating a seemingly difficult problem into one we know how to solve, perhaps by reduction, embedding, transformation, or simulation. Computational thinking is thinking recursively. It is parallel processing. It is interpreting code as data and data as code. It is type checking as the generalization of dimensional analysis. It is recognizing both the virtues and the dangers of aliasing, or giving someone or something more than one name. It is recognizing both the cost and power of indirect addressing and procedure call. It is judging a program not just for correctness and efficiency but for aesthetics, and a system’s design for simplicity and elegance.
Wrathall, Mark & Kelly, Sean (1996). Existential phenomenology and cognitive science. Electronic Journal of Analytic Philosophy (4).   (Google | More links)
Abstract: [1] In _What Computers Can't Do_ (1972), Hubert Dreyfus identified several basic assumptions about the nature of human knowledge which grounded contemporary research in cognitive science. Contemporary artificial intelligence, he argued, relied on an unjustified belief that the mind functions like a digital computer using symbolic manipulations ("the psychological assumption") (Dreyfus 1992: 163ff), or at least that computer programs could be understood as formalizing human thought ("the epistemological assumption") (Dreyfus 1992: 189). In addition, the project depended upon an assumption about the data about the human world which we employ in thought - namely, that it consists of discrete, determinate, and explicit pieces which can be processed heuristically ("the ontological assumption") (Dreyfus 1992: 206).
Zaitchik, Alan (1980). Intentionalism and computational psychology. Grazer Philosophische Studien 10:149-166.   (Google)
Zaitchik, Alan (1981). Intentionalism and physical reductionism in computational psychology. Philosophy and Phenomenological Research 42 (September):23-41.   (Google | More links)