Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.

6.2. Computation and Representation

6.2a Symbols and Symbol Systems

Boyle, C. Franklin (2001). Transduction and degree of grounding. Psycoloquy 12 (36).   (Cited by 2 | Google | More links)
Abstract: While I agree in general with Stevan Harnad's symbol grounding proposal, I do not believe "transduction" (or "analog process") PER SE is useful in distinguishing between what might best be described as different "degrees" of grounding and, hence, for determining whether a particular system might be capable of cognition. By 'degrees of grounding' I mean whether the effects of grounding go "all the way through" or not. Why is transduction limited in this regard? Because transduction is a physical process which does not speak to the issue of representation, and, therefore, does not explain HOW the informational aspects of signals impinging on sensory surfaces become embodied as symbols or HOW those symbols subsequently cause behavior, both of which, I believe, are important to grounding and to a system's cognitive capacity. Immunity to Searle's Chinese Room (CR) argument does not ensure that a particular system is cognitive, and whether or not a particular degree of groundedness enables a system to pass the Total Turing Test (TTT) may never be determined
Bringsjord, Selmer (online). People are infinitary symbol systems: No sensorimotor capacity necessary.   (Cited by 2 | Google | More links)
Abstract: Stevan Harnad and I seem to be thinking about many of the same issues. Sometimes we agree, sometimes we don't; but I always find his reasoning refreshing, his positions sensible, and the problems with which he's concerned to be of central importance to cognitive science. His "Grounding Symbols in the Analog World with Neural Nets" (= GS) is no exception. And GS not only exemplifies Harnad's virtues, it also provides a springboard for diving into Harnad- Bringsjord terrain
Clark, Andy (2006). Material symbols. Philosophical Psychology 19 (3):291-307.   (Cited by 4 | Google | More links)
Abstract: What is the relation between the material, conventional symbol structures that we encounter in the spoken and written word, and human thought? A common assumption, that structures a wide variety of otherwise competing views, is that the way in which these material, conventional symbol-structures do their work is by being translated into some kind of content-matching inner code. One alternative to this view is the tempting but thoroughly elusive idea that we somehow think in some natural language (such as English). In the present treatment I explore a third option, which I shall call the "complementarity" view of language. According to this third view the actual symbol structures of a given language add cognitive value by complementing (without being replicated by) the more basic modes of operation and representation endemic to the biological brain. The "cognitive bonus" that language brings is, on this model, not to be cashed out either via the ultimately mysterious notion of "thinking in a given natural language" or via some process of exhaustive translation into another inner code. Instead, we should try to think in terms of a kind of coordination dynamics in which the forms and structures of a language qua material symbol system play a key and irreducible role. Understanding language as a complementary cognitive resource is, I argue, an important part of the much larger project (sometimes glossed in terms of the "extended mind") of understanding human cognition as essentially and multiply hybrid: as involving a complex interplay between internal biological resources and external non-biological resources
Cummins, Robert E. (1996). Why there is no symbol grounding problem? In Representations, Targets, and Attitudes. MIT Press.   (Google)
Harnad, Stevan (1992). Connecting object to symbol in modeling cognition. In A. Clark & Ronald Lutz (eds.), Connectionism in Context. Springer-Verlag.   (Cited by 61 | Annotation | Google | More links)
Abstract: Connectionism and computationalism are currently vying for hegemony in cognitive modeling. At first glance the opposition seems incoherent, because connectionism is itself computational, but the form of computationalism that has been the prime candidate for encoding the "language of thought" has been symbolic computationalism (Dietrich 1990, Fodor 1975, Harnad 1990c; Newell 1980; Pylyshyn 1984), whereas connectionism is nonsymbolic (Fodor & Pylyshyn 1988, or, as some have hopefully dubbed it, "subsymbolic" Smolensky 1988). This paper will examine what is and is not a symbol system. A hybrid nonsymbolic/symbolic system will be sketched in which the meanings of the symbols are grounded bottom-up in the system's capacity to discriminate and identify the objects they refer to. Neural nets are one possible mechanism for learning the invariants in the analog sensory projection on which successful categorization is based. "Categorical perception" (Harnad 1987a), in which similarity space is "warped" in the service of categorization, turns out to be exhibited by both people and nets, and may mediate the constraints exerted by the analog world of objects on the formal world of symbols
Harnad, Stevan (2002). Symbol grounding and the origin of language. In Matthias Scheutz (ed.), Computationalism: New Directions. MIT Press.   (Cited by 12 | Google | More links)
Abstract: What language allows us to do is to "steal" categories quickly and effortlessly through hearsay instead of having to earn them the hard way, through risky and time-consuming sensorimotor "toil" (trial-and-error learning, guided by corrective feedback from the consequences of miscategorisation). To make such linguistic "theft" possible, however, some, at least, of the denoting symbols of language must first be grounded in categories that have been earned through sensorimotor toil (or else in categories that have already been "prepared" for us through Darwinian theft by the genes of our ancestors); it cannot be linguistic theft all the way down. The symbols that denote categories must be grounded in the capacity to sort, label and interact with the proximal sensorimotor projections of their distal category-members in a way that coheres systematically with their semantic interpretations, both for individual symbols, and for symbols strung together to express truth-value-bearing propositions
Harnad, Stevan (ms). Symbol grounding is an empirical problem: Neural nets are just a candidate component.   (Cited by 27 | Google | More links)
Abstract: "Symbol Grounding" is beginning to mean too many things to too many people. My own construal has always been simple: Cognition cannot be just computation, because computation is just the systematically interpretable manipulation of meaningless symbols, whereas the meanings of my thoughts don't depend on their interpretability or interpretation by someone else. On pain of infinite regress, then, symbol meanings must be grounded in something other than just their interpretability if they are to be candidates for what is going on in our heads. Neural nets may be one way to ground the names of concrete objects and events in the capacity to categorize them (by learning the invariants in their sensorimotor projections). These grounded elementary symbols could then be combined into symbol strings expressing propositions about more abstract categories. Grounding does not equal meaning, however, and does not solve any philosophical problems
Harnad, Stevan (1990). The symbol grounding problem. Physica D 42:335-346.   (Cited by 1265 | Annotation | Google | More links)
Abstract: There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the symbol grounding problem: How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) iconic representations, which are analogs of the proximal sensory projections of distal objects and events, and (2) categorical representations, which are learned and innate feature-detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) symbolic representations, grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g., An X is a Y that is Z). Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling. Such a hybrid model would not have an autonomous symbolic module, however; the symbolic functions would emerge as an intrinsically dedicated symbol system as a consequence of the bottom-up grounding of categories' names in their sensory representations. Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded
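The hybrid architecture sketched in this abstract (nonsymbolic categorizers grounding the names that a symbolic layer then composes) can be illustrated with a toy program. The sketch below is only a schematic reading of that idea, not code from the paper; the nearest-prototype classifier stands in for the neural-net component, and all feature vectors and category names are invented.

```python
# Toy sketch of a hybrid nonsymbolic/symbolic model in the spirit of the
# abstract above. The "categorical representation" is a nearest-prototype
# classifier (a stand-in for a trained neural net); elementary symbols are
# the names it assigns; a higher-order symbol is a string over those names.
# All feature vectors and category names are invented for illustration.

import math

# Learned prototypes: invariant features of sensory projections (toy values).
PROTOTYPES = {
    "horse":   [1.0, 0.2],   # e.g. [body-length, stripiness]
    "stripes": [0.1, 1.0],
}

def categorize(projection):
    """Map an analog sensory projection to an elementary symbol (a name)."""
    def dist(name):
        return math.dist(projection, PROTOTYPES[name])
    return min(PROTOTYPES, key=dist)

# Higher-order symbolic representation: "An X is a Y that is Z".
DEFINITIONS = {"zebra": ("horse", "stripes")}

def identify(projections):
    """Ground a composite category bottom-up from its members' projections."""
    names = {categorize(p) for p in projections}
    for new_symbol, parts in DEFINITIONS.items():
        if set(parts) <= names:
            return new_symbol
    return names

print(identify([[0.9, 0.3], [0.2, 0.9]]))  # -> 'zebra'
```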
Kosslyn, Stephen M. & Hatfield, Gary (1984). Representation without symbol systems. Social Research 51:1019-1045.   (Cited by 15 | Google)
Lumsden, David (2005). How can a symbol system come into being? Dialogue 44 (1):87-96.   (Google)
Abstract: One holistic thesis about symbols is that a symbol cannot exist singly, but only as a part of a symbol system. There is also the plausible view that symbol systems emerge gradually in an individual, in a group, and in a species. The problem is that symbol holism makes it hard to see how a symbol system can emerge gradually, at least if we are considering the emergence of a first symbol system. The only way it seems possible is if being a symbol can be a matter of degree, which is initially problematic. This article explains how being a cognitive symbol can be a matter of degree after all. The contrary intuition arises from the way a process of interpretation forces an all-or-nothing character on symbols, leaving room for underlying material to realize symbols to different degrees in a way that Daniel Dennett’s work can help illuminate. Holism applies to symbols as interpreted, while gradualism applies to how the underlying material realizes symbols.
MacDorman, Karl F. (1997). How to ground symbols adaptively. In S. O'Nuillain, Paul McKevitt & E. MacAogain (eds.), Two Sciences of Mind. John Benjamins.   (Cited by 1 | Google)
Newell, Allen & Simon, Herbert A. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the Association for Computing Machinery 19:113-26.   (Cited by 758 | Annotation | Google | More links)
Newell, Allen (1980). Physical symbol systems. Cognitive Science 4:135-83.   (Cited by 469 | Google | More links)
Pinker, Steven (2004). Why nature & nurture won't go away. Daedalus.   (Cited by 7 | Google | More links)
Robinson, William S. (1995). Brain symbols and computationalist explanation. Minds and Machines 5 (1):25-44.   (Cited by 4 | Google | More links)
Abstract: Computationalist theories of mind require brain symbols, that is, neural events that represent kinds or instances of kinds. Standard models of computation require multiple inscriptions of symbols with the same representational content. The satisfaction of two conditions makes it easy to see how this requirement is met in computers, but we have no reason to think that these conditions are satisfied in the brain. Thus, if we wish to give computationalist explanations of human cognition, without committing ourselves a priori to a strong and unsupported claim in neuroscience, we must first either explain how we can provide multiple brain symbols with the same content, or explain how we can abandon standard models of computation. It is argued that both of these alternatives require us to explain the execution of complex tasks that have a cognition-like structure. Circularity or regress are thus threatened, unless noncomputationalist principles can provide the required explanations. But in the latter case, we do not know that noncomputationalist principles might not bear most of the weight of explaining cognition. Four possible types of computationalist theory are discussed; none appears to provide a promising solution to the problem. Thus, despite known difficulties in noncomputationalist investigations, we have every reason to pursue the search for noncomputationalist principles in cognitive theory
Roitblat, Herbert L. (2001). Computational grounding. Psycoloquy 12 (58).   (Cited by 1 | Google | More links)
Abstract: Harnad defines computation to mean the manipulation of physical symbol tokens on the basis of syntactic rules defined over the shapes of the symbols, independent of what, if anything, those symbols represent. He is, of course, free to define terms in any way that he chooses, and he is very clear about what he means by computation, but I am uncomfortable with this definition. It excludes, at least at a functional level of description, much of what a computer is actually used for, and much of what the brain/mind does. When I toss a Frisbee to the neighbor's dog, the dog does not, I think, engage in a symbolic soliloquy about the trajectory of the disc, the wind's effects on it, and formulas for including lift and the acceleration due to gravity. There are symbolic formulas for each of these relations, but the dog, insofar as I can tell, does not use any of these formulas. Nevertheless, it computes these factors in order to intercept the disc in the air. I argue that determining the solution to a differential equation is at least as much computation as is processing symbols. The disagreement is over what counts as computation; I think that Harnad and I both agree that the dog solves the trajectory problem implicitly. This definition is important because, although Harnad offers a technical definition for what he means by computation, the folk-definition of the term is probably interpreted differently, and I believe this leads to trouble
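Roitblat's claim that determining the solution to a differential equation is itself computation can be made concrete with a small numerical example. The sketch below is an illustration only (gravity-only dynamics, invented numbers), not anything from the paper: the landing point is computed by stepping the equations of motion forward, with no symbolic formula for the trajectory ever manipulated.

```python
# Computing where a thrown disc comes down by numerically integrating the
# equations of motion, rather than by manipulating a closed-form symbolic
# solution. Gravity only; the drag/lift terms mentioned in the abstract are
# omitted. All numbers are illustrative.

def landing_point(x, y, vx, vy, g=9.81, dt=0.001):
    """Step the state forward until the disc returns to ground height."""
    while y >= 0.0:
        x += vx * dt
        y += vy * dt
        vy -= g * dt          # update velocity from acceleration
    return x                   # horizontal distance at touchdown

# Thrown from 1.5 m up at 12 m/s horizontally, 4 m/s upward.
print(round(landing_point(0.0, 1.5, 12.0, 4.0), 2))
```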
Schneider, Susan (2009). LOT, CTM, and the elephant in the room. Synthese 170 (2):235-250.   (Google | More links)
Abstract: According to the language of thought (LOT) approach and the related computational theory of mind (CTM), thinking is the processing of symbols in an inner mental language that is distinct from any public language. Herein, I explore a deep problem at the heart of the LOT/CTM program—it has yet to provide a plausible conception of a mental symbol
Schneider, Susan (forthcoming). The nature of primitive symbols in the language of thought. Mind and Language.   (Google | More links)
Abstract: This paper provides a theory of the nature of symbols in the language of thought (LOT). My discussion consists in three parts. In part one, I provide three arguments for the individuation of primitive symbols in terms of total computational role. The first of these arguments claims that Classicism requires that primitive symbols be typed in this manner; no other theory of typing will suffice. The second argument contends that without this manner of symbol individuation, there will be computational processes that fail to supervene on syntax, together with the rules of composition and the computational algorithms. The third argument says that cognitive science needs a natural kind that is typed by total computational role. Otherwise, either cognitive science will be incomplete, or its laws will have counterexamples. Then, part two defends this view from a criticism, offered by both Jerry Fodor and Jesse Prinz, who respond to my view with the charge that because the types themselves are individuated
Sun, Ron (2000). Symbol grounding: A new look at an old idea. Philosophical Psychology 13 (2):149-172.   (Cited by 39 | Google | More links)
Abstract: Symbols should be grounded, as has been argued before. But we insist that they should be grounded not only in subsymbolic activities, but also in the interaction between the agent and the world. The point is that concepts are not formed in isolation (from the world), in abstraction, or "objectively." They are formed in relation to the experience of agents, through their perceptual/motor apparatuses, in their world and linked to their goals and actions. This paper takes a detailed look at this relatively old issue, with a new perspective, aided by our work of computational cognitive model development. To further our understanding, we also go back in time to link up with earlier philosophical theories related to this issue. The result is an account that extends from computational mechanisms to philosophical abstractions
Taddeo, Mariarosaria & Floridi, Luciano (2008). A praxical solution of the symbol grounding problem. Minds and Machines.   (Google | More links)
Abstract: This article is the second step in our research into the Symbol Grounding Problem (SGP). In a previous work, we defined the main condition that must be satisfied by any strategy in order to provide a valid solution to the SGP, namely the zero semantic commitment condition (Z condition). We then showed that all the main strategies proposed so far fail to satisfy the Z condition, although they provide several important lessons to be followed by any new proposal. Here, we develop a new solution of the SGP. It is called praxical in order to stress the key role played by the interactions between the agents and their environment. It is based on a new theory of meaning—Action-based Semantics (AbS)—and on a new kind of artificial agents, called two-machine artificial agents (AM²). Thanks to their architecture, AM² agents implement AbS, and this allows them to ground their symbols semantically and to develop some fairly advanced semantic abilities, including the development of semantically grounded communication and the elaboration of representations, while still respecting the Z condition
Thompson, Evan (1997). Symbol grounding: A bridge from artificial life to artificial intelligence. Brain and Cognition 34 (1):48-71.   (Cited by 8 | Google | More links)
Abstract: This paper develops a bridge from AL issues about the symbol–matter relation to AI issues about symbol-grounding by focusing on the concepts of formality and syntactic interpretability. Using the DNA triplet-amino acid specification relation as a paradigm, it is argued that syntactic properties can be grounded as high-level features of the non-syntactic interactions in a physical dynamical system. This argument provides the basis for a rebuttal of John Searle’s recent assertion that syntax is observer-relative (1990, 1992). But the argument as developed also challenges the classic symbol-processing theory of mind against which Searle is arguing, as well as the strong AL thesis that life is realizable in a purely computational medium. Finally, it provides a new line of support for the autonomous systems approach in AL and AI (Varela & Bourgine 1992a, 1992b)

6.2b Computational Semantics

Akman, Varol (1998). Situations and artificial intelligence. Minds and Machines 8 (4):475-477.   (Google)
Blackburn, Patrick & Bos, Johan (2003). Computational semantics. Theoria: Revista de Teoría, Historia y Fundamentos de la Ciencia 18 (1):27-45.   (Google)
Abstract: In this article we discuss what constitutes a good choice of semantic representation, compare different approaches of constructing semantic representations for fragments of natural language, and give an overview of recent methods for employing inference engines for natural language understanding tasks
Blackburn, Patrick & Kohlhase, Michael (2004). Inference and computational semantics. Journal of Logic, Language and Information 13 (2).   (Google)
Blackburn, Patrick (2005). Representation and Inference for Natural Language: A First Course in Computational Semantics. Center for the Study of Language and Information.   (Google)
Abstract: How can computers distinguish the coherent from the unintelligible, recognize new information in a sentence, or draw inferences from a natural language passage? Computational semantics is an exciting new field that seeks answers to these questions, and this volume is the first textbook wholly devoted to this growing subdiscipline. The book explains the underlying theoretical issues and fundamental techniques for computing semantic representations for fragments of natural language. This volume will be an essential text for computer scientists, linguists, and anyone interested in the development of computational semantics
Bogdan, Radu J. (1994). By way of means and ends. In Radu J. Bogdan (ed.), Grounds for Cognition. Lawrence Erlbaum.   (Google)
Abstract: This chapter provides the teleological foundations for our analysis of guidance to goal. Its objective is to ground goal-directedness genetically. The basic suggestion is this. Organisms are small things, with few energy resources and puny physical means, battling a ruthless physical and biological nature. How do they manage to survive and multiply? CLEVERLY, BY ORGANIZING
Bos, Johan (2004). Computational semantics in discourse: Underspecification, resolution, and inference. Journal of Logic, Language and Information 13 (2).   (Google)
Abstract: In this paper I introduce a formalism for natural language understanding based on a computational implementation of Discourse Representation Theory. The formalism covers a wide variety of semantic phenomena (including scope and lexical ambiguities, anaphora and presupposition), is computationally attractive, and has a genuine inference component. It combines a well-established linguistic formalism (DRT) with advanced techniques to deal with ambiguity (underspecification), and is innovative in the use of first-order theorem proving techniques. The architecture of the formalism for natural language understanding that I advocate consists of three levels of processing: underspecification, resolution, and inference. Each of these levels has a distinct function and therefore employs a different kind of semantic representation. The mappings between these different representations define the interfaces between the levels
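The underspecification/resolution/inference pipeline described here can be gestured at with a toy Discourse Representation Structure. The sketch below is a deliberately minimal illustration with an invented representation and a naive pronoun-resolution rule; it is not the DRT implementation the paper presents.

```python
# Toy Discourse Representation Structures for "A man walks. He whistles."
# An unresolved pronoun is an underspecified condition that the resolution
# step binds to an earlier discourse referent. Invented mini-representation,
# for illustration only.

from dataclasses import dataclass, field

@dataclass
class DRS:
    referents: list = field(default_factory=list)     # discourse referents
    conditions: list = field(default_factory=list)    # predications over them

def merge(a, b):
    return DRS(a.referents + b.referents, a.conditions + b.conditions)

def resolve(drs):
    """Bind each ('pronoun', var) condition to an accessible referent."""
    bindings = {}
    for pred, arg in drs.conditions:
        if pred == "pronoun":
            bindings[arg] = drs.referents[-2]   # naive: pick an earlier referent
    conds = [(p, bindings.get(a, a)) for p, a in drs.conditions if p != "pronoun"]
    return DRS(drs.referents, conds)

s1 = DRS(["x"], [("man", "x"), ("walk", "x")])
s2 = DRS(["y"], [("pronoun", "y"), ("whistle", "y")])
print(resolve(merge(s1, s2)).conditions)
# -> [('man', 'x'), ('walk', 'x'), ('whistle', 'x')]
```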
Charniak, Eugene & Wilks, Yorick (eds.) (1976). Computational Semantics: An Introduction to Artificial Intelligence and Natural Language Comprehension. Distributors for the U.S.A. And Canada, Elsevier/North Holland.   (Google)
Szymanik, Jakub & Zajenkowski, Marcin (2009). Comprehension of Simple Quantifiers: Empirical Evaluation of a Computational Model. Cognitive Science: A Multidisciplinary Journal 34 (3):521-532.   (Google)
Abstract: We examine the verification of simple quantifiers in natural language from a computational model perspective. We refer to previous neuropsychological investigations of the same problem and suggest extending their experimental setting. Moreover, we give some direct empirical evidence linking computational complexity predictions with cognitive reality. In the empirical study we compare the time needed for understanding different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite automata and push-down automata is psychologically relevant. Our research improves upon the hypotheses and explanatory power of recent neuroimaging studies, as well as providing evidence for the claim that human linguistic abilities are constrained by computational complexity.
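The automata-theoretic distinction the authors test (quantifiers verifiable by a finite automaton, such as "at least three", versus those requiring a push-down automaton's counter, such as "most") can be sketched as follows. The 0/1 encoding of a scene and both verifiers are illustrative assumptions, not the authors' experimental materials.

```python
# Verifying quantified sentences over a scene encoded as a 0/1 string
# (1 = object satisfies the predicate). "At least three" needs only a
# bounded number of states (finite automaton); "most" needs an unbounded
# counter, the extra power of a push-down automaton.

def at_least_three(scene):
    state = 0                        # states 0, 1, 2, 3 -- finite
    for bit in scene:
        if bit == "1" and state < 3:
            state += 1
    return state == 3

def most(scene):
    counter = 0                      # unbounded counter (stack stand-in)
    for bit in scene:
        counter += 1 if bit == "1" else -1
    return counter > 0

print(at_least_three("0110101"))     # True: four objects satisfy the predicate
print(most("0110101"))               # True: 4 of 7
print(most("0110001"))               # False: 3 of 7
```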
Dennett, Daniel C. (2003). The Baldwin Effect: A Crane, Not a Skyhook. In Bruce H. Weber & D.J. Depew (eds.), Evolution and Learning: The Baldwin Effect Reconsidered. MIT Press.   (Cited by 6 | Google | More links)
Abstract: In 1991, I included a brief discussion of the Baldwin effect in my account of the evolution of human consciousness, thinking I was introducing to non-specialist readers a little-appreciated, but no longer controversial, wrinkle in orthodox neo-Darwinism. I had thought that Hinton and Nowlan (1987) and Maynard Smith (1987) had shown clearly and succinctly how and why it worked, and restored the neglected concept to grace. Here is how I put it then
Fodor, Jerry A. (1979). In reply to Philip Johnson-Laird's What's Wrong with Grandma's Guide to Procedural Semantics: A Reply to Jerry Fodor. Cognition 7 (March):93-95.   (Google)
Fodor, Jerry A. (1978). Tom swift and his procedural grandmother. Cognition 6 (September):229-47.   (Cited by 24 | Annotation | Google)
Hadley, Robert F. (1990). Truth conditions and procedural semantics. In Philip P. Hanson (ed.), Information, Language and Cognition. University of British Columbia Press.   (Cited by 2 | Google)
Harnad, Stevan (2002). Darwin, Skinner, Turing and the mind. Magyar Pszichologiai Szemle 57 (4):521-528.   (Google | More links)
Abstract: Darwin differs from Newton and Einstein in that his ideas do not require a complicated or deep mind to understand them, and perhaps did not even require such a mind in order to generate them in the first place. It can be explained to any school-child (as Newtonian mechanics and Einsteinian relativity cannot) that living creatures are just Darwinian survival/reproduction machines. They have whatever structure they have through a combination of chance and its consequences: Chance causes changes in the genetic blueprint from which organisms' bodies are built, and if those changes are more successful in helping their owners survive and reproduce than their predecessors or their rivals, then, by definition, those changes are reproduced, and thereby become more prevalent in succeeding generations: Whatever survives/reproduces better survives/reproduces better. That is the tautological force that shaped us
Johnson-Laird, Philip N. (1977). Procedural semantics. Cognition 5:189-214.   (Cited by 37 | Google)
Johnson-Laird, Philip N. (1978). What's wrong with grandma's guide to procedural semantics: A reply to Jerry Fodor. Cognition 9 (September):249-61.   (Cited by 1 | Google)
McDermott, Drew (1978). Tarskian semantics, or no notation without denotation. Cognitive Science 2:277-82.   (Cited by 33 | Annotation | Google | More links)
Papineau, David (2006). The cultural origins of cognitive adaptations. Royal Institute of Philosophy Supplement.   (Google | More links)
Abstract: According to an influential view in contemporary cognitive science, many human cognitive capacities are innate. The primary support for this view comes from ‘poverty of stimulus’ arguments. In general outline, such arguments contrast the meagre informational input to cognitive development with its rich informational output. Consider the ease with which humans acquire languages, become facile at attributing psychological states (‘folk psychology’), gain knowledge of biological kinds (‘folk biology’), or come to understand basic physical processes (‘folk physics’). In all these cases, the evidence available to a growing child is far too thin and noisy for it to be plausible that the underlying principles involved are derived from general learning mechanisms. The only alternative hypothesis seems to be that the child’s grasp of these principles is innate. (Cf. Laurence and Margolis, 2001.)
Perlis, Donald R. (1991). Putting one's foot in one's head -- part 1: Why. Noûs 25 (September):435-55.   (Cited by 12 | Google | More links)
Perlis, Donald R. (1994). Putting one's foot in one's head -- part 2: How. In Eric Dietrich (ed.), Thinking Computers and Virtual Persons. Academic Press.   (Google)
Rapaport, William J. (1988). Syntactic semantics: Foundations of computational natural language understanding. In James H. Fetzer (ed.), Aspects of AI. Kluwer.   (Cited by 44 | Google)
Rapaport, William J. (1995). Understanding understanding: Syntactic semantics and computational cognition. Philosophical Perspectives 9:49-88.   (Cited by 22 | Google | More links)
Smith, B. (1988). On the semantics of clocks. In James H. Fetzer (ed.), Aspects of AI. Kluwer.   (Cited by 7 | Google)
Smith, B. (1987). The correspondence continuum. CSLI 87.   (Cited by 34 | Google)
Szymanik, Jakub & Zajenkowski, Marcin (2009). Understanding Quantifiers in Language. In N. A. Taatgen & H. van Rijn (eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society.   (Google)
Abstract: We compare the time needed for understanding different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite automata and pushdown automata is psychologically relevant. Our research improves upon the hypotheses and explanatory power of recent neuroimaging studies, as well as providing evidence for the claim that human linguistic abilities are constrained by computational complexity.
Tin, Erkan & Akman, Varol (1994). Computational situation theory. ACM SIGART Bulletin 5 (4):4-17.   (Cited by 15 | Google | More links)
Abstract: Situation theory has been developed over the last decade and various versions of the theory have been applied to a number of linguistic issues. However, not much work has been done in regard to its computational aspects. In this paper, we review the existing approaches towards 'computational situation theory' with considerable emphasis on our own research
Wilks, Y. (1990). Form and content in semantics. Synthese 82 (3):329-51.   (Cited by 10 | Annotation | Google | More links)
Abstract:   This paper continues a strain of intellectual complaint against the presumptions of certain kinds of formal semantics (the qualification is important) and their bad effects on those areas of artificial intelligence concerned with machine understanding of human language. After some discussion of the use of the term epistemology in artificial intelligence, the paper takes as a case study the various positions held by McDermott on these issues and concludes, reluctantly, that, although he has reversed himself on the issue, there was no time at which he was right
Wilks, Y. (1982). Some thoughts on procedural semantics. In W. Lehnert (ed.), Strategies for Natural Language Processing. Lawrence Erlbaum.   (Cited by 12 | Google)
Winograd, Terry (1985). Moving the semantic fulcrum. Linguistics and Philosophy 8 (February):91-104.   (Cited by 16 | Google | More links)
Woods, W. (1986). Problems in procedural semantics. In Zenon W. Pylyshyn & W. Demopolous (eds.), Meaning and Cognitive Structure. Ablex.   (Cited by 2 | Annotation | Google)
Woods, W. (1981). Procedural semantics as a theory of meaning. In A. Joshi, Bruce H. Weber & Ivan A. Sag (eds.), Elements of Discourse Understanding. Cambridge University Press.   (Cited by 33 | Google)

6.2c Implicit/Explicit Rules and Representations

Bechtel, William P. (forthcoming). Explanation: Mechanism, modularity, and situated cognition. In P. Robbins & M. Aydede (eds.), Cambridge Handbook of Situated Cognition. Cambridge University Press.   (Google)
Abstract: The situated cognition movement has emerged in recent decades (although it has roots in psychologists working earlier in the 20th century including Vygotsky, Bartlett, and Dewey) largely in reaction to an approach to explaining cognition that tended to ignore the context in which cognitive activities typically occur. Fodor’s (1980) account of the research strategy of methodological solipsism, according to which only representational states within the mind are viewed as playing causal roles in producing cognitive activity, is an extreme characterization of this approach. (As Keith Gunderson memorably commented when Fodor first presented this characterization, it amounts to reversing behaviorism by construing the mind as a white box in a black world). Critics as far back as the 1970s and 1980s objected to many experimental paradigms in cognitive psychology as not being ecologically valid; that is, they maintained that the findings only applied to the artificial circumstances created in the laboratory and did not generalize to real world settings (Neisser, 1976; 1987). The situated cognition movement, however, goes much further than demanding ecologically valid experiments—it insists that an agent’s cognitive activities are inherently embedded and supported by dynamic interactions with the agent’s body and features of its environment
Clark, Andy (1991). In defense of explicit rules. In William Ramsey, Stephen P. Stich & D. Rumelhart (eds.), Philosophy and Connectionist Theory. Lawrence Erlbaum.   (Cited by 11 | Annotation | Google)
Cummins, Robert E. (1986). Inexplicit information. In Myles Brand & Robert M. Harnish (eds.), The Representation of Knowledge and Belief. University of Arizona Press.   (Cited by 13 | Annotation | Google)
Davies, Martin (1995). Two notions of implicit rules. Philosophical Perspectives 9:153-83.   (Cited by 14 | Google | More links)
Dennett, Daniel C. (1993). Review of F. Varela, E. Thompson and E. Rosch, The Embodied Mind. American Journal of Psychology 106:121-126.   (Google | More links)
Abstract: Cognitive science, as an interdisciplinary school of thought, may have recently moved beyond the bandwagon stage onto the throne of orthodoxy, but it does not make a favorable first impression on many people. Familiar reactions on first encounters range from revulsion to condescending dismissal--very few faces in the crowd light up with the sense of "Aha! So that's how the mind works! Of course!" Cognitive science leaves something out, it seems; moreover, what it apparently leaves out is important, even precious. Boiled down to its essence, cognitive science proclaims that in one way or another our minds are computers, and this seems so mechanistic, reductionistic, intellectualistic, dry, philistine, unbiological. It leaves out emotion, or what philosophers call qualia, or value, or mattering, or . . . the soul. It doesn't explain what minds are so much as attempt to explain minds away
Fulda, Joseph S. (2000). The logic of “improper cross”. Artificial Intelligence and Law 8 (4):337-341.   (Google)
G., Nagarjuna (2009). Collaborative creation of teaching-learning sequences and an Atlas of knowledge. Mathematics Teaching-Research Journal Online 3 (N3):23-40.   (Google | More links)
Abstract: Our focus in the article is to introduce a simple methodology of generating teaching-learning sequences using the semantic network technique, followed by the emergent properties of such a network and their implications for the teaching-learning process (didactics), with marginal notes on epistemological implications. A collaborative portal for teachers, which publishes a network of prerequisites for teaching/learning any concept or an activity, is introduced. The article ends with an appeal to the global community to contribute prerequisites of any subject to complete the global roadmap for an atlas being built along similar lines to Wikipedia. The portal is launched and waiting for community participation at http://www.gnowledge.org.
Hadley, Robert F. (1993). Connectionism, explicit rules, and symbolic manipulation. Minds and Machines 3 (2):183-200.   (Cited by 13 | Google | More links)
Abstract: At present, the prevailing Connectionist methodology for representing rules is to implicitly embody rules in neurally-wired networks. That is, the methodology adopts the stance that rules must either be hard-wired or trained into neural structures, rather than represented via explicit symbolic structures. Even recent attempts to implement production systems within connectionist networks have assumed that condition-action rules (or rule schema) are to be embodied in the structure of individual networks. Such networks must be grown or trained over a significant span of time. However, arguments are presented herein that humans sometimes follow rules which are very rapidly assigned explicit internal representations, and that humans possess general mechanisms capable of interpreting and following such rules. In particular, arguments are presented that the speed with which humans are able to follow rules of novel structure demonstrates the existence of general-purpose rule following mechanisms. It is further argued that the existence of general-purpose rule following mechanisms strongly indicates that explicit rule following is not an isolated phenomenon, but may well be a common and important aspect of cognition. The relationship of the foregoing conclusions to Smolensky's view of explicit rule following is also explored. The arguments presented here are pragmatic in nature, and are contrasted with the kind of arguments developed by Fodor and Pylyshyn in their recent, influential paper
Hadley, Robert F. (1990). Connectionism, rule-following, and symbolic manipulation. Proceedings of AAAI-90.   (Cited by 10 | Annotation | Google)
Hadley, Robert F. (1995). The 'explicit-implicit' distinction. Minds and Machines 5 (2):219-42.   (Cited by 25 | Google | More links)
Abstract: Much of traditional AI exemplifies the explicit representation paradigm, and during the late 1980's a heated debate arose between the classical and connectionist camps as to whether beliefs and rules receive an explicit or implicit representation in human cognition. In a recent paper, Kirsh (1990) questions the coherence of the fundamental distinction underlying this debate. He argues that our basic intuitions concerning explicit and implicit representations are not only confused but inconsistent. Ultimately, Kirsh proposes a new formulation of the distinction, based upon the criterion of constant time processing. The present paper examines Kirsh's claims. It is argued that Kirsh fails to demonstrate that our usage of explicit and implicit is seriously confused or inconsistent. Furthermore, it is argued that Kirsh's new formulation of the explicit-implicit distinction is excessively stringent, in that it banishes virtually all sentences of natural language from the realm of explicit representation. By contrast, the present paper proposes definitions for explicit and implicit which preserve most of our strong intuitions concerning straightforward uses of these terms. It is also argued that the distinction delineated here sustains the meaningfulness of the abovementioned debate between classicists and connectionists
Kirsh, David (1990). When is information explicitly represented? In Philip P. Hanson (ed.), Information, Language and Cognition. University of British Columbia Press.   (Cited by 62 | Google)
Martínez, Fernando & Ezquerro Martínez, Jesús (1998). Explicitness with psychological ground. Minds and Machines 8 (3):353-374.   (Cited by 1 | Google | More links)
Abstract:   Explicitness has usually been approached from two points of view, labelled by Kirsh the structural and the process view, that hold opposite assumptions to determine when information is explicit. In this paper, we offer an intermediate view that retains intuitions from both of them. We establish three conditions for explicit information that preserve a structural requirement, and a notion of explicitness as a continuous dimension. A problem with the former accounts was their disconnection with psychological work on the issue. We review studies by Karmiloff-Smith, and Shanks and St. John to show that the proposed conditions have psychological grounds. Finally, we examine the problem of explicit rules in connectionist systems in the light of our framework
Shapiro, Lawrence A. (ms). The embodied cognition research program.   (Cited by 1 | Google | More links)
Abstract: Unifying traditional cognitive science is the idea that thinking is a process of symbol manipulation, where symbols lead both a syntactic and a semantic life. The syntax of a symbol comprises those properties in virtue of which the symbol undergoes rule-dictated transformations. The semantics of a symbol constitute the symbol's meaning or representational content. Thought consists in the syntactically determined manipulation of symbols, but in a way that respects their semantics. Thus, for instance, a calculating computer sensitive only to the shape of symbols might produce the symbol '5' in response to the inputs '2', '+', and '3'. As far as the computer is concerned, these symbols have no meaning, but because of its program it will produce outputs that, to the user, "make sense" given the meanings the user attributes to the symbols
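Shapiro's calculator example (a machine sensitive only to the shapes '2', '+', '3', whose output '5' is meaningful only to the user) can be mimicked with a purely table-driven sketch; this is an illustration of the point, not code from the paper, and the lookup table is invented.

```python
# A "calculator" that knows nothing of numbers: it matches input shapes
# against a lookup table and emits the associated output shape. The
# semantics lives entirely in the user's interpretation of the symbols.

SHAPE_TABLE = {
    ("2", "+", "3"): "5",
    ("3", "+", "2"): "5",
    ("1", "+", "1"): "2",
}

def manipulate(tokens):
    """Purely syntactic: return the shape paired with this shape sequence."""
    return SHAPE_TABLE.get(tuple(tokens), "?")

print(manipulate(["2", "+", "3"]))   # -> '5'
```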
Skokowski, Paul G. (1994). Can computers carry content "inexplicitly"? Minds and Machines 4 (3):333-44.   (Cited by 2 | Annotation | Google | More links)
Abstract: I examine whether it is possible for content relevant to a computer's behavior to be carried without an explicit internal representation. I consider three approaches. First, an example of a chess playing computer carrying emergent content is offered from Dennett. Next I examine Cummins' response to this example. Cummins says Dennett's computer executes a rule which is inexplicitly represented. Cummins describes a process wherein a computer interprets explicit rules in its program, implements them to form a chess-playing device, then this device executes the rules in a way that exhibits them inexplicitly. Though this approach is intriguing, I argue that the chess-playing device cannot exist as imagined. The processes of interpretation and implementation produce explicit representations of the content claimed to be inexplicit. Finally, the Chinese Room argument is examined and shown not to save the notion of inexplicit information. This means that the strategy of attributing inexplicit content to a computer which is executing a rule fails
Slezak, Peter (1999). Situated cognition. Perspectives on Cognitive Science.   (Cited by 22 | Google)
Abstract: The self-advertising, at least, suggests that 'situated cognition' involves the most fundamental conceptual re-organization in AI and cognitive science, even appearing to deny that cognition is to be explained by mental representations. In their defence of the orthodox symbolic representational theory, A. Vera and H. Simon (1993) have rebutted many of these claims, but they overlook an important reading of situated arguments which may, after all, involve a revolutionary insight. I show that the whole debate turns on puzzles familiar from the history of philosophy and psychology and these may serve to clarify the current disputes
Sutton, John (2000). The body and the brain. In S. Gaukroger, J. Schuster & J. Sutton (eds.), Descartes' Natural Philosophy. Routledge.   (Google)
Abstract: Does self-knowledge help? A rationalist, presumably, thinks that it does: both that self-knowledge is possible and that, if gained through appropriate channels, it is desirable. Descartes notoriously claimed that, with appropriate methods of enquiry, each of his readers could become an expert on herself or himself. As well as the direct, first-person knowledge of self to which we are led in the Meditationes, we can also seek knowledge of our own bodies, and of the union of our minds and our bodies: the latter forms of self-knowledge are inevitably imperfect, but are no less important in guiding our conduct in the search after truth
van Gelder, Tim (1998). Review: Being There: Body and World Together Again, by Andy Clark. Philosophical Review 107 (4):647-650.   (Google)
Abstract: Are any nonhuman animals rational? What issues are we raising when we ask this question? Are there different kinds or levels of rationality, some of which fall short of full human rationality? Should any behaviour by nonhuman animals be regarded as rational? What kinds of tasks can animals successfully perform? What kinds of processes control their performance at these tasks, and do they count as rational processes? Is it useful or theoretically justified to raise questions about the rationality of animals at all? Should we be interested in whether they are rational? Why does it matter?

6.2d AI without Representation?

Andrews, Kristin (web). Critter psychology: On the possibility of nonhuman animal folk psychology. In Daniel D. Hutto & Matthew Ratcliffe (eds.), Folk Psychology Re-Assessed. Kluwer/Springer Press.   (Google | More links)
Abstract: Humans have a folk psychology, without question. Paul Churchland used the term to describe “our commonsense conception of psychological phenomena” (Churchland 1981, p. 67), whatever that may be. When we ask the question whether animals have their own folk psychology, we’re asking whether any other species has a commonsense conception of psychological phenomena as well. Different versions of this question have been discussed over the past 25 years, but no clear answer has emerged. Perhaps one reason for this lack of progress is that we don’t clearly understand the question. In asking whether animals have folk psychology, I hope to help clarify the concept of folk psychology itself, and in the process, to gain a greater understanding of the role of belief and desire attribution in human social interaction
Bechtel, William P. (1996). Yet another revolution? Defusing the dynamical system theorists' attack on mental representations. Presidential Address to Society of Philosophy and Psychology.   (Cited by 1 | Google)
Brooks, Rodney (1991). Intelligence without representation. Artificial Intelligence 47:139-159.   (Cited by 2501 | Annotation | Google | More links)
Abstract: Artificial intelligence research has foundered on the issue of representation. When intelligence is approached in an incremental manner, with strict reliance on interfacing to the real world through perception and action, reliance on representation disappears. In this paper we outline our approach to incrementally building complete intelligent Creatures. The fundamental decomposition of the intelligent system is not into independent information processing units which must interface with each other via representations. Instead, the intelligent system is decomposed into independent and parallel activity producers which all interface directly to the world through perception and action, rather than interface to each other particularly much. The notions of central and peripheral systems evaporate; everything is both central and peripheral. Based on these principles we have built a very successful series of mobile robots which operate without supervision as Creatures in standard office environments
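Brooks' decomposition into independent, parallel activity producers is often illustrated with layered behaviours in which a higher layer suppresses the one below it. The sketch below is a generic toy of that idea, with invented sensor fields and behaviours; it is not the controller of Brooks' robots.

```python
# Toy subsumption-style controller: parallel behaviour layers, each mapping
# raw sensing to an action; a higher layer, when active, suppresses the one
# below it. No central world model or shared representation is consulted.
# Sensor fields and actions are invented for illustration.

def avoid(sensors):
    """Highest priority: steer away from nearby obstacles."""
    if sensors["obstacle_distance"] < 0.5:
        return "turn_left"
    return None                      # inactive -> defer to lower layers

def wander(sensors):
    """Lowest priority: keep moving."""
    return "go_forward"

LAYERS = [avoid, wander]             # ordered from most to least dominant

def act(sensors):
    for behaviour in LAYERS:         # first active layer suppresses the rest
        action = behaviour(sensors)
        if action is not None:
            return action

print(act({"obstacle_distance": 2.0}))   # -> 'go_forward'
print(act({"obstacle_distance": 0.3}))   # -> 'turn_left'
```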
Clark, Andy & Toribio, Josefa (1994). Doing without representing. Synthese 101 (3):401-31.   (Cited by 97 | Annotation | Google | More links)
Abstract:   Connectionism and classicism, it generally appears, have at least this much in common: both place some notion of internal representation at the heart of a scientific study of mind. In recent years, however, a much more radical view has gained increasing popularity. This view calls into question the commitment to internal representation itself. More strikingly still, this new wave of anti-representationalism is rooted not in armchair theorizing but in practical attempts to model and understand intelligent, adaptive behavior. In this paper we first present, and then critically assess, a variety of recent anti-representationalist treatments. We suggest that so far, at least, the sceptical rhetoric outpaces both evidence and argument. Some probable causes of this premature scepticism are isolated. Nonetheless, the anti-representationalist challenge is shown to be both important and progressive insofar as it forces us to see beyond the bare representational/non-representational dichotomy and to recognize instead a rich continuum of degrees and types of representationality
Dennett, Daniel C. (1989). Cognitive ethology. In Goals, No-Goals and Own Goals. Unwin Hyman.   (Cited by 15 | Google)
Abstract: The field of Artificial Intelligence has produced so many new concepts--or at least vivid and more structured versions of old concepts--that it would be surprising if none of them turned out to be of value to students of animal behavior. Which will be most valuable? I will resist the temptation to engage in either prophecy or salesmanship; instead of attempting to answer the question: "How might Artificial Intelligence inform the study of animal behavior?" I will concentrate on the obverse: "How might the study of animal behavior inform research in Artificial Intelligence?"
Millikan, Ruth G. (online). On reading signs.   (Cited by 1 | Google | More links)
Abstract: On Reading Signs; Some Differences between Us and The Others If there are certain kinds of signs that an animal cannot learn to interpret, that might be for any of a number of reasons. It might be, first, because the animal cannot discriminate the signs from one another. For example, although human babies learn to discriminate human speech sounds according to the phonological structures of their native languages very easily, it may be that few if any other animals are capable of fully grasping the phonological structures of human languages. If an animal cannot learn to interpret certain signs it might be, second, because the decoding is too difficult for it. It could be, for example, that some animals are incapable of decoding signs that exhibit syntactic embedding, or signs that are spread out over time as opposed to over space. Problems of these various kinds might be solved by using another sign system, say, gestures rather than noises, or visual icons laid out in spatial order, or by separating out embedded propositions and presenting each separately. But a more interesting reason that an animal might be incapable of understanding a sign would be that it lacked mental representations of the necessary kind. It might be incapable of representing mentally what the sign conveys. When discussing what signs animals can understand or
Keijzer, Fred A. (1998). Doing without representations which specify what to do. Philosophical Psychology 11 (3):269-302.   (Cited by 15 | Google)
Abstract: A discussion is going on in cognitive science about the use of representations to explain how intelligent behavior is generated. In the traditional view, an organism is thought to incorporate representations. These provide an internal model that is used by the organism to instruct the motor apparatus so that the adaptive and anticipatory characteristics of behavior come about. So-called interactionists claim that this representational specification of behavior raises more problems than it solves. In their view, the notion of internal representational models is to be dispensed with. Instead, behavior is to be explained as the intricate interaction between an embodied organism and the specific make up of an environment. The problem with a non-representational interactive account is that it has severe difficulties with anticipatory, future oriented behavior. The present paper extends the interactionist conceptual framework by drawing on ideas derived from the study of morphogenesis. This extended interactionist framework is based on an analysis of anticipatory behavior as a process which involves multiple spatio-temporal scales of neural, bodily and environmental dynamics. This extended conceptual framework provides the outlines for an explanation of anticipatory behavior without involving a representational specification of future goal states
Kirsh, David (1991). Today the earwig, tomorrow man? Artificial Intelligence 47:161-184.   (Cited by 111 | Google | More links)
Abstract: A startling amount of intelligent activity can be controlled without reasoning or thought. By tuning the perceptual system to task relevant properties a creature can cope with relatively sophisticated environments without concepts. There is a limit, however, to how far a creature without concepts can go. Rod Brooks, like many ecologically oriented scientists, argues that the vast majority of intelligent behaviour is concept-free. To evaluate this position I consider what special benefits accrue to concept-using creatures. Concepts are either necessary for certain types of perception, learning, and control, or they make those processes computationally simpler. Once a creature has concepts its capacities are vastly multiplied.
Müller, Vincent C. (2007). Is there a future for AI without representation? Minds and Machines 17 (1).   (Google | More links)
Abstract: This paper investigates the prospects of Rodney Brooks’ proposal for AI without representation. It turns out that the supposedly characteristic features of “new AI” (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: “New AI” is just like old AI. Brooks’ proposal boils down to the architectural rejection of central control in intelligent agents—which, however, turns out to be crucial. Some more recent cognitive science suggests that we might do well to dispose of the image of intelligent agents as central representation processors. If this paradigm shift is achieved, Brooks’ proposal for cognition without representation appears promising for full-blown intelligent agents—though not for conscious agents
van Gelder, Tim (1995). What might cognition be if not computation? Journal of Philosophy 92 (7):345-81.   (Cited by 266 | Annotation | Google | More links)
Wallis, Peter (2004). Intention without representation. Philosophical Psychology 17 (2):209-223.   (Cited by 3 | Google | More links)
Abstract: A mechanism for planning ahead would appear to be essential to any creature with more than insect level intelligence. In this paper it is shown how planning, using full means-ends analysis, can be had while avoiding the so called symbol grounding problem. The key role of knowledge representation in intelligence has been acknowledged since at least the enlightenment, but the advent of the computer has made it possible to explore the limits of alternate schemes, and to explore the nature of our everyday understanding of the world around us. In particular, artificial intelligence (AI) and robotics has forced a close examination, by people other than philosophers, of what it means to say for instance that "snow is white." One interpretation of the "new AI" is that it is questioning the need for representation altogether. Brooks and others have shown how a range of intelligent behaviors can be had without representation, and this paper goes one step further showing how intending to do things can be achieved without symbolic representation. The paper gives a concrete example of a mechanism in terms of robots that play soccer. It describes a belief, desire and intention (BDI) architecture that plans in terms of activities. The result is a situated agent that plans to do things with no more ontological commitment than the reactive systems Brooks described in his seminal paper, "Intelligence without Representation."
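Wallis's BDI agent plans in terms of activities rather than represented world states. The loop below is a schematic reading of that structure (beliefs as raw sensed values, a desire naming a plan of activities, the intention being the activity currently committed to); the activity names, sensing, and completion tests are all invented for illustration, and this is not the architecture from the paper.

```python
# Schematic BDI-style loop in which plans are sequences of activities rather
# than descriptions of world states. Activity names, sensing, and the
# completion tests are invented for illustration.

DESIRE = "score_goal"
PLANS = {
    # desire -> ordered activities; each activity knows when it is finished
    "score_goal": ["get_behind_ball", "dribble_to_goal", "kick"],
}

def finished(activity, sensed):
    """Toy completion tests keyed on raw sensed quantities."""
    return {
        "get_behind_ball": sensed["behind_ball"],
        "dribble_to_goal": sensed["near_goal"],
        "kick": sensed["ball_moving_away"],
    }[activity]

def step(intention_stack, sensed):
    """Drop completed activities, then return the activity to perform now."""
    while intention_stack and finished(intention_stack[0], sensed):
        intention_stack.pop(0)
    return intention_stack[0] if intention_stack else None

intentions = list(PLANS[DESIRE])          # commit to the plan for the desire
sensed = {"behind_ball": True, "near_goal": False, "ball_moving_away": False}
print(step(intentions, sensed))           # -> 'dribble_to_goal'
```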
Webber, Jonathan (2002). Doing without representation: Coping with Dreyfus. Philosophical Explorations 5 (1):82-88.   (Google | More links)
Abstract: Hubert Dreyfus argues that the traditional and currently dominant conception of an action, as an event initiated or governed by a mental representation of a possible state of affairs that the agent is trying to realise, is inadequate. If Dreyfus is right, then we need a new conception of action. I argue, however, that the considerations that Dreyfus adduces show only that an action need not be initiated or governed by a conceptual representation, but since a representation need not be conceptually structured, do not show that we need a conception of action that does not involve representation

6.2e Computation and Representation, Misc

Akman, Varol & ten Hagen, Paul J. W. (1989). The power of physical representations. AI Magazine 10 (3):49-65.   (Cited by 10 | Google | More links)
Bailey, Andrew R. (1994). Representations versus regularities: Does computation require representation? Eidos 12 (1):47-58.   (Google)
Chalmers, David J.; French, Robert M. & Hofstadter, Douglas R. (1992). High-level perception, representation, and analogy: A critique of artificial intelligence methodology. Journal of Experimental and Theoretical Artificial Intelligence 4 (3):185-211.   (Cited by 123 | Google | More links)
Abstract: High-level perception, the process of making sense of complex data at an abstract, conceptual level, is fundamental to human cognition. Through high-level perception, chaotic environmental stimuli are organized into the mental representations that are used throughout cognitive processing. Much work in traditional artificial intelligence has ignored the process of high-level perception, by starting with hand-coded representations. In this paper, we argue that this dismissal of perceptual processes leads to distorted models of human cognition. We examine some existing artificial-intelligence models, notably BACON, a model of scientific discovery, and the Structure-Mapping Engine, a model of analogical thought, and argue that these are flawed precisely because they downplay the role of high-level perception. Further, we argue that perceptual processes cannot be separated from other cognitive processes even in principle, and therefore that traditional artificial-intelligence models cannot be defended by supposing the existence of a "representation module" that supplies representations ready-made. Finally, we describe a model of high-level perception and analogical thought in which perceptual processing is integrated with analogical mapping, leading to the flexible build-up of representations appropriate to a given context
Dartnall, Terry (2000). Reverse psychologism, cognition and content. Minds and Machines 10 (1):31-52.   (Cited by 32 | Google | More links)
Abstract:   The confusion between cognitive states and the content of cognitive states that gives rise to psychologism also gives rise to reverse psychologism. Weak reverse psychologism says that we can study cognitive states by studying content – for instance, that we can study the mind by studying linguistics or logic. This attitude is endemic in cognitive science and linguistic theory. Strong reverse psychologism says that we can generate cognitive states by giving computers representations that express the content of cognitive states and that play a role in causing appropriate behaviour. This gives us strong representational, classical AI (REPSCAI), and I argue that it cannot succeed. This is not, as Searle claims in his Chinese Room Argument, because syntactic manipulation cannot generate content. Syntactic manipulation can generate content, and this is abundantly clear in the Chinese Room scenario. REPSCAI cannot succeed because inner content is not sufficient for cognition, even when the representations that carry the content play a role in generating appropriate behaviour
Dietrich, Eric (1988). Computers, intentionality, and the new dualism. Computers and Philosophy Newsletter.   (Google)
Dreyfus, Hubert L. (1979). A framework for misrepresenting knowledge. In Martin Ringle (ed.), Philosophical Perspectives in Artificial Intelligence. Humanities Press.   (Cited by 7 | Annotation | Google)
Echavarria, Ricardo Restrepo (2009). Russell's structuralism and the supposed death of computational cognitive science. Minds and Machines 19 (2).   (Google)
Abstract: John Searle believes that computational properties are purely formal and that consequently, computational properties are not intrinsic, empirically discoverable, nor causal; and therefore, that an entity’s having certain computational properties could not be sufficient for its having certain mental properties. To make his case, Searle employs an argument that had been used before him by Max Newman against Russell’s structuralism; one that Russell himself considered fatal to his own position. This paper formulates a not-so-explored version of Searle’s problem with computational cognitive science, and refutes it by suggesting how our understanding of computation is far from implying the structuralism Searle vitally attributes to it. On the way, I formulate and argue for a thesis that strengthens Newman’s case against Russell’s structuralism, and thus raises the apparent risk for computational cognitive science too
Fields, Christopher A. (1994). Real machines and virtual intentionality: An experimentalist takes on the problem of representational content. In Eric Dietrich (ed.), Thinking Computers and Virtual Persons. Academic Press.   (Google)
Franklin, James, The representation of context: Ideas from artificial intelligence.   (Google)
Abstract: To move beyond vague platitudes about the importance of context in legal reasoning or natural language understanding, one must take account of ideas from artificial intelligence on how to represent context formally. Work on topics like prior probabilities, the theory-ladenness of observation, encyclopedic knowledge for disambiguation in language translation and pathology test diagnosis has produced a body of knowledge on how to represent context in artificial intelligence applications
Fulda, Joseph S. (2000). The logic of “improper cross”. Artificial Intelligence and Law 8 (4):337-341.   (Google)
Garzon, Francisco Calvo & Rodriguez, Angel Garcia (2009). Where is cognitive science heading? Minds and Machines.   (Google)
Abstract: According to Ramsey (Representation reconsidered, Cambridge University Press, New York, 2007), only classical cognitive science, with the related notions of input–output and structural representations, meets the job description challenge (the challenge to show that a certain structure or process serves a representational role at the subpersonal level). By contrast, connectionism and other nonclassical models, insofar as they exploit receptor and tacit notions of representation, are not genuinely representational. As a result, Ramsey submits, cognitive science is taking a U-turn from representationalism back to behaviourism, thus presupposing that (1) the emergence of cognitivism capitalized on the concept of representation, and that (2) the materialization of nonclassical cognitive science involves a return to some form of pre-cognitivist behaviourism. We argue against both (1) and (2), by questioning Ramsey’s divide between classical and representational, versus nonclassical and nonrepresentational, cognitive models. For, firstly, connectionist and other nonclassical accounts have the resources to exploit the notion of a structural isomorphism, like classical accounts (the beefing-up strategy); and, secondly, insofar as input–output and structural representations refer to a cognitive agent, classical explanations fail to meet the job description challenge (the deflationary strategy). Both strategies work independently of each other: if the deflationary strategy succeeds, contra (1), cognitivism has failed to capitalize on the relevant concept of representation; if the beefing-up strategy is sound, contra (2), the return to a pre-cognitivist era cancels out.
Guvenir, Halil A. & Akman, Varol (1992). Problem representation for refinement. Minds and Machines 2 (3):267-282.   (Google | More links)
Abstract:   In this paper we attempt to develop a problem representation technique which enables the decomposition of a problem into subproblems such that their solution in sequence constitutes a strategy for solving the problem. An important issue here is that the subproblems generated should be easier than the main problem. We propose to represent a set of problem states by a statement which is true for all the members of the set. A statement itself is just a set of atomic statements which are binary predicates on state variables. Then, the statement representing the set of goal states can be partitioned into its subsets each of which becomes a subgoal of the resulting strategy. The techniques involved in partitioning a goal into its subgoals are presented with examples
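The partitioning idea can be pictured with a small sketch: a goal is a set of atomic statements (binary predicates on state variables), and splitting that set yields subgoals whose solution in sequence constitutes a strategy. The example predicates, the naive one-atom-per-subgoal partition, and the stand-in "solver" below are hypothetical illustrations, not the refinement technique developed in the paper.

```python
# Hypothetical illustration: a goal is represented as a set of atomic
# statements, each a binary predicate on a state variable, and is split
# into subgoals that are tackled in sequence. Predicates and the naive
# partitioning rule are assumptions for illustration only.

from typing import FrozenSet, List, Tuple

Atom = Tuple[str, str, str]          # (predicate, variable, value)
Statement = FrozenSet[Atom]          # a set of atomic statements

goal: Statement = frozenset({
    ("eq", "door", "open"),
    ("eq", "robot_room", "kitchen"),
    ("eq", "gripper", "empty"),
})

def partition(goal: Statement, chunk: int = 1) -> List[Statement]:
    """Split the goal statement into subgoal statements of `chunk` atoms."""
    atoms = sorted(goal)             # deterministic order for the example
    return [frozenset(atoms[i:i + chunk])
            for i in range(0, len(atoms), chunk)]

def satisfied(statement: Statement, state: dict) -> bool:
    return all(state.get(var) == val for _, var, val in statement)

# Solving the subgoals in sequence constitutes a strategy for the goal.
state = {"door": "closed", "robot_room": "hall", "gripper": "empty"}
for subgoal in partition(goal):
    if not satisfied(subgoal, state):
        # A real system would now search for operators achieving `subgoal`.
        for _, var, val in subgoal:
            state[var] = val          # stand-in for actually solving it
print(satisfied(goal, state))         # True once all subgoals are achieved
```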
Haugeland, John (1981). Semantic engines: An introduction to mind design. In John Haugeland (ed.), Mind Design. MIT Press.   (Cited by 92 | Google)
Marsh, Leslie (2005). Review Essay: Andy Clark's Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. Cognitive Systems Research 6:405-409.   (Google)
Abstract: The notion of the cyborg has exercised the popular imagination for almost two hundred years. In very general terms the idea that a living entity can be a hybrid of both organic matter and mechanical parts, and for all intents and purposes be seamlessly functional and self-regulating, was prefigured in literary works such as Shelley's Frankenstein (1816/18) and Samuel Butler's Erewhon (1872). This notion of hybridism has been a staple theme of 20th century science fiction writing, television programmes and the cinema. For the most part, these works trade on a deep sense of unease we have about our personal identity – how could some non-organic matter to which I have so little conscious access count as a bona fide part of me? Cognitive scientist and philosopher Andy Clark picks up this general theme and presents an empirical and philosophical case for the following inextricably linked theses.
Prem, Erich (2000). Changes of representational AI concepts induced by embodied autonomy. Communication and Cognition-Artificial Intelligence 17 (3-4):189-208.   (Cited by 4 | Google)
Robinson, William S. (1995). Direct representation. Philosophical Studies 80 (3):305-22.   (Cited by 3 | Annotation | Google | More links)
Shani, Itay (2005). Computation and intentionality: A recipe for epistemic impasse. Minds and Machines 15 (2):207-228.   (Cited by 1 | Google | More links)
Abstract: Searle’s celebrated Chinese room thought experiment was devised as an attempted refutation of the view that appropriately programmed digital computers literally are the possessors of genuine mental states. A standard reply to Searle, known as the “robot reply” (which, I argue, reflects the dominant approach to the problem of content in contemporary philosophy of mind), consists of the claim that the problem he raises can be solved by supplementing the computational device with some “appropriate” environmental hookups. I argue that not only does Searle himself cast doubt on the adequacy of this idea by applying to it a slightly revised version of his original argument, but that the weakness of this encoding-based approach to the problem of intentionality can also be exposed from a somewhat different angle. Capitalizing on the work of several authors and, in particular, on that of psychologist Mark Bickhard, I argue that the existence of symbol-world correspondence is not a property that the cognitive system itself can appreciate, from its own perspective, by interacting with the symbol and therefore, not a property that can constitute intrinsic content. The foundational crisis to which Searle alluded is, I conclude, very much alive
Stanley, Jason (2005). Review of Robyn Carston, Thoughts and Utterances. Mind and Language 20 (3).   (Google)
Abstract: Relevance Theory is the influential theory of linguistic interpretation first championed by Dan Sperber and Deirdre Wilson. Relevance theorists have made important contributions to our understanding of a wide range of constructions, especially constructions that tend to receive less attention in semantics and philosophy of language. But advocates of Relevance Theory also have had a tendency to form a rather closed community, with an unwillingness to translate their own special vocabulary and distinctions into more neutral vernacular. Since Robyn Carston has long been the advocate of Relevance Theory most able to communicate with a broader philosophical and linguistic audience, it is with particular interest that the emergence of her long-awaited volume, Thoughts and Utterances, has been greeted. The volume exhibits many of the strengths, but also some of the weaknesses, of this well-known program
Thornton, Chris (1997). Brave mobots use representation: Emergence of representation in fight-or-flight learning. Minds and Machines 7 (4):475-494.   (Cited by 10 | Google | More links)
Abstract:   The paper uses ideas from Machine Learning, Artificial Intelligence and Genetic Algorithms to provide a model of the development of a fight-or-flight response in a simulated agent. The modelled development process involves (simulated) processes of evolution, learning and representation development. The main value of the model is that it provides an illustration of how simple learning processes may lead to the formation of structures which can be given a representational interpretation. It also shows how these may form the infrastructure for closely-coupled agent/environment interaction
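As an illustration of the kind of model Thornton describes, in which evolution and simple learning give rise to structures that admit a representational reading, here is a heavily simplified sketch: a population of threshold agents evolves a mapping from a sensed threat level to a fight-or-flight response. The encoding, fitness function, and parameters are assumptions for exposition, not the paper's actual simulation.

```python
# Hypothetical, heavily simplified sketch of evolving a fight-or-flight
# response: each agent is a single threshold on a sensed threat level.
# Agents that flee from strong threats and stand their ground against
# weak ones score higher. Everything here is an illustrative assumption.

import random

random.seed(0)

def fitness(threshold, trials=50):
    score = 0
    for _ in range(trials):
        threat = random.random()              # sensed threat in [0, 1]
        flee = threat > threshold             # agent's response
        correct = threat > 0.6                # fleeing pays off for big threats
        score += 1 if flee == correct else 0
    return score / trials

def evolve(pop_size=30, generations=40, mutation=0.05):
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]     # truncation selection
        population = [
            min(1.0, max(0.0, random.choice(parents)
                         + random.gauss(0, mutation)))
            for _ in range(pop_size)
        ]
    return max(population, key=fitness)

best = evolve()
# The evolved threshold can be read, with the usual caveats, as a minimal
# internal structure standing in for "how threatening the world is".
print(f"evolved threshold ~ {best:.2f}")
```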