Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.

6.2a. Symbols and Symbol Systems (Symbols and Symbol Systems on PhilPapers)

Boyle, C. Franklin (2001). Transduction and degree of grounding. Psycoloquy 12 (36).   (Cited by 2 | Google | More links)
Abstract: While I agree in general with Stevan Harnad's symbol grounding proposal, I do not believe "transduction" (or "analog process") PER SE is useful in distinguishing between what might best be described as different "degrees" of grounding and, hence, for determining whether a particular system might be capable of cognition. By 'degrees of grounding' I mean whether the effects of grounding go "all the way through" or not. Why is transduction limited in this regard? Because transduction is a physical process which does not speak to the issue of representation, and, therefore, does not explain HOW the informational aspects of signals impinging on sensory surfaces become embodied as symbols or HOW those symbols subsequently cause behavior, both of which, I believe, are important to grounding and to a system's cognitive capacity. Immunity to Searle's Chinese Room (CR) argument does not ensure that a particular system is cognitive, and whether or not a particular degree of groundedness enables a system to pass the Total Turing Test (TTT) may never be determined
Bringsjord, Selmer (online). People are infinitary symbol systems: No sensorimotor capacity necessary.   (Cited by 2 | Google | More links)
Abstract: Stevan Harnad and I seem to be thinking about many of the same issues. Sometimes we agree, sometimes we don't; but I always find his reasoning refreshing, his positions sensible, and the problems with which he's concerned to be of central importance to cognitive science. His "Grounding Symbols in the Analog World with Neural Nets" (= GS) is no exception. And GS not only exemplifies Harnad's virtues, it also provides a springboard for diving into Harnad-Bringsjord terrain
Clark, Andy (2006). Material symbols. Philosophical Psychology 19 (3):291-307.   (Cited by 4 | Google | More links)
Abstract: What is the relation between the material, conventional symbol structures that we encounter in the spoken and written word, and human thought? A common assumption, that structures a wide variety of otherwise competing views, is that the way in which these material, conventional symbol-structures do their work is by being translated into some kind of content-matching inner code. One alternative to this view is the tempting but thoroughly elusive idea that we somehow think in some natural language (such as English). In the present treatment I explore a third option, which I shall call the "complementarity" view of language. According to this third view the actual symbol structures of a given language add cognitive value by complementing (without being replicated by) the more basic modes of operation and representation endemic to the biological brain. The "cognitive bonus" that language brings is, on this model, not to be cashed out either via the ultimately mysterious notion of "thinking in a given natural language" or via some process of exhaustive translation into another inner code. Instead, we should try to think in terms of a kind of coordination dynamics in which the forms and structures of a language qua material symbol system play a key and irreducible role. Understanding language as a complementary cognitive resource is, I argue, an important part of the much larger project (sometimes glossed in terms of the "extended mind") of understanding human cognition as essentially and multiply hybrid: as involving a complex interplay between internal biological resources and external non-biological resources
Cummins, Robert E. (1996). Why there is no symbol grounding problem. In Representations, Targets, and Attitudes. MIT Press.   (Google)
Harnad, Stevan (1992). Connecting object to symbol in modeling cognition. In A. Clark & Ronald Lutz (eds.), Connectionism in Context. Springer-Verlag.   (Cited by 61 | Annotation | Google | More links)
Abstract: Connectionism and computationalism are currently vying for hegemony in cognitive modeling. At first glance the opposition seems incoherent, because connectionism is itself computational, but the form of computationalism that has been the prime candidate for encoding the "language of thought" has been symbolic computationalism (Dietrich 1990; Fodor 1975; Harnad 1990c; Newell 1980; Pylyshyn 1984), whereas connectionism is nonsymbolic (Fodor & Pylyshyn 1988) or, as some have hopefully dubbed it, "subsymbolic" (Smolensky 1988). This paper will examine what is and is not a symbol system. A hybrid nonsymbolic/symbolic system will be sketched in which the meanings of the symbols are grounded bottom-up in the system's capacity to discriminate and identify the objects they refer to. Neural nets are one possible mechanism for learning the invariants in the analog sensory projection on which successful categorization is based. "Categorical perception" (Harnad 1987a), in which similarity space is "warped" in the service of categorization, turns out to be exhibited by both people and nets, and may mediate the constraints exerted by the analog world of objects on the formal world of symbols
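The "warping" of similarity space mentioned in this abstract can be shown with a toy calculation. This is not Harnad's model: the stimuli, the category boundary, and the attention weights below are invented purely to illustrate how stretching the category-relevant dimension compresses within-category differences and expands between-category ones.

```python
import numpy as np

# Toy illustration (not Harnad's model): "categorical perception" as a warping of
# similarity space. Stimuli vary on two dimensions, but only dimension 0 matters
# for the category boundary (A: x0 < 0.5, B: x0 >= 0.5). The stimuli and the
# attention weights are invented for illustration.
stimuli = {
    "a1": np.array([0.40, 0.20]),  # category A
    "a2": np.array([0.45, 0.80]),  # category A
    "b1": np.array([0.55, 0.20]),  # category B
}

def dist(x, y, w):
    """Weighted Euclidean distance; the weights model attention to each dimension."""
    return float(np.sqrt(np.sum(w * (x - y) ** 2)))

before = np.array([1.0, 1.0])    # before category learning: equal attention
after  = np.array([10.0, 0.1])   # after learning: the relevant dimension is stretched

for label, w in [("before learning", before), ("after learning", after)]:
    within  = dist(stimuli["a1"], stimuli["a2"], w)  # same category
    between = dist(stimuli["a1"], stimuli["b1"], w)  # across the boundary
    print(f"{label}: within-category {within:.2f}, between-category {between:.2f}")

# Before learning the two A's look *less* alike than a1 and b1; after the warping,
# within-category distance shrinks and between-category distance grows, which is
# the signature of categorical perception.
```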
Harnad, Stevan (2002). Symbol grounding and the origin of language. In Matthias Scheutz (ed.), Computationalism: New Directions. MIT Press.   (Cited by 12 | Google | More links)
Abstract: What language allows us to do is to "steal" categories quickly and effortlessly through hearsay instead of having to earn them the hard way, through risky and time-consuming sensorimotor "toil" (trial-and-error learning, guided by corrective feedback from the consequences of miscategorisation). To make such linguistic "theft" possible, however, some, at least, of the denoting symbols of language must first be grounded in categories that have been earned through sensorimotor toil (or else in categories that have already been "prepared" for us through Darwinian theft by the genes of our ancestors); it cannot be linguistic theft all the way down. The symbols that denote categories must be grounded in the capacity to sort, label and interact with the proximal sensorimotor projections of their distal category-members in a way that coheres systematically with their semantic interpretations, both for individual symbols, and for symbols strung together to express truth-value-bearing propositions
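The contrast between sensorimotor "toil" and linguistic "theft" can be sketched in a few lines of code. The features, prototypes, and matching rule below are invented for illustration and are not from Harnad's paper: two symbols are grounded in labelled examples, and a third category is then acquired purely by composing those grounded symbols in a definition, with no new examples.

```python
import numpy as np

# Toy sketch (not Harnad's model): ground two categories by sensorimotor "toil"
# (supervised examples), then acquire a third by linguistic "theft" (a definition
# built from the already-grounded symbols). Features are invented:
# [has_mane, has_stripes].
toil_examples = {
    "horse":   [np.array([0.9, 0.1]), np.array([0.8, 0.2])],
    "striped": [np.array([0.2, 0.9]), np.array([0.1, 0.8])],
}

# "Toil": ground each symbol in a prototype learned from its examples.
prototypes = {name: np.mean(xs, axis=0) for name, xs in toil_examples.items()}

def matches(symbol, stimulus, threshold=0.5):
    """A symbol applies to a stimulus if the stimulus is high enough on the
    dimension that the symbol's prototype emphasises (crude, illustrative rule)."""
    proto = prototypes[symbol]
    dim = int(np.argmax(proto))          # the dimension this category cares about
    return stimulus[dim] >= threshold

# "Theft": 'zebra' is never shown; it is defined from grounded symbols by hearsay.
definitions = {"zebra": lambda s: matches("horse", s) and matches("striped", s)}

novel_animal = np.array([0.85, 0.9])     # maned *and* striped: never seen before
print("zebra?", definitions["zebra"](novel_animal))   # True, with zero new examples
```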
Harnad, Stevan (ms). Symbol grounding is an empirical problem: Neural nets are just a candidate component.   (Cited by 27 | Google | More links)
Abstract: "Symbol Grounding" is beginning to mean too many things to too many people. My own construal has always been simple: Cognition cannot be just computation, because computation is just the systematically interpretable manipulation of meaningless symbols, whereas the meanings of my thoughts don't depend on their interpretability or interpretation by someone else. On pain of infinite regress, then, symbol meanings must be grounded in something other than just their interpretability if they are to be candidates for what is going on in our heads. Neural nets may be one way to ground the names of concrete objects and events in the capacity to categorize them (by learning the invariants in their sensorimotor projections). These grounded elementary symbols could then be combined into symbol strings expressing propositions about more abstract categories. Grounding does not equal meaning, however, and does not solve any philosophical problems
Harnad, Stevan (1990). The symbol grounding problem. Physica D 42:335-346.   (Cited by 1265 | Annotation | Google | More links)
Abstract: There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the symbol grounding problem: How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) iconic representations, which are analogs of the proximal sensory projections of distal objects and events, and (2) categorical representations, which are learned and innate feature-detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) symbolic representations, grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g., An X is a Y that is Z). Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling. Such a hybrid model would not have an autonomous symbolic module, however; the symbolic functions would emerge as an intrinsically dedicated symbol system as a consequence of the bottom-up grounding of categories' names in their sensory representations. Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded
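The three levels of representation described in this abstract can be made concrete with a small sketch. Everything in it is an illustrative stand-in rather than Harnad's implementation: raw vectors play the role of iconic representations, a nearest-prototype rule stands in for the learned (e.g., connectionist) invariant detectors that yield categorical representations, and the assigned names are the elementary symbols that can then enter higher-order symbol strings.

```python
import numpy as np

# Illustrative sketch of the three levels in Harnad's proposal (all details are
# invented stand-ins, not his implementation):
#   (1) iconic representations: analog sensory projections (here, raw vectors);
#   (2) categorical representations: invariant detectors learned from the icons
#       (here, a nearest-prototype rule stands in for a trained neural net);
#   (3) elementary symbols: names assigned on the basis of (2), which can then
#       enter higher-order symbol strings of the form "an X is a Y that is Z".

# (1) iconic level: labelled sensory projections of training instances
training_icons = {
    "horse":   [np.array([0.9, 0.1, 0.2]), np.array([0.8, 0.2, 0.3])],
    "striped": [np.array([0.1, 0.9, 0.8]), np.array([0.2, 0.8, 0.9])],
}

# (2) categorical level: one invariant summary (prototype) per category
prototypes = {name: np.mean(icons, axis=0) for name, icons in training_icons.items()}

def name_of(icon):
    """Assign an elementary symbol (a category name) to a new sensory projection."""
    return min(prototypes, key=lambda name: np.linalg.norm(icon - prototypes[name]))

# (3) symbolic level: grounded names combined into a higher-order description;
# "zebra" is grounded indirectly, through the grounded symbols it is defined from.
proposition = "a zebra is a horse that is striped"

new_icon = np.array([0.85, 0.15, 0.25])
print("name assigned to the new icon:", name_of(new_icon))   # -> horse
print("higher-order symbol string:", proposition)
```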
Kosslyn, Stephen M. & Hatfield, Gary (1984). Representation without symbol systems. Social Research 51:1019-1045.   (Cited by 15 | Google)
Lumsden, David (2005). How can a symbol system come into being? Dialogue 44 (1):87-96.   (Google)
Abstract: One holistic thesis about symbols is that a symbol cannot exist singly, but only as a part of a symbol system. There is also the plausible view that symbol systems emerge gradually in an individual, in a group, and in a species. The problem is that symbol holism makes it hard to see how a symbol system can emerge gradually, at least if we are considering the emergence of a first symbol system. The only way it seems possible is if being a symbol can be a matter of degree, which is initially problematic. This article explains how being a cognitive symbol can be a matter of degree after all. The contrary intuition arises from the way a process of interpretation forces an all-or-nothing character on symbols, leaving room for underlying material to realize symbols to different degrees in a way that Daniel Dennett’s work can help illuminate. Holism applies to symbols as interpreted, while gradualism applies to how the underlying material realizes symbols
MacDorman, Karl F. (1997). How to ground symbols adaptively. In S. O'Nuillain, Paul McKevitt & E. MacAogain (eds.), Two Sciences of Mind. John Benjamins.   (Cited by 1 | Google)
Newell, Allen & Simon, Herbert A. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the Association for Computing Machinery 19:113-26.   (Cited by 758 | Annotation | Google | More links)
Newell, Allen (1980). Physical symbol systems. Cognitive Science 4:135-83.   (Cited by 469 | Google | More links)
Pinker, Steven (2004). Why nature & nurture won't go away. Daedalus.   (Cited by 7 | Google | More links)
Robinson, William S. (1995). Brain symbols and computationalist explanation. Minds and Machines 5 (1):25-44.   (Cited by 4 | Google | More links)
Abstract: Computationalist theories of mind require brain symbols, that is, neural events that represent kinds or instances of kinds. Standard models of computation require multiple inscriptions of symbols with the same representational content. The satisfaction of two conditions makes it easy to see how this requirement is met in computers, but we have no reason to think that these conditions are satisfied in the brain. Thus, if we wish to give computationalist explanations of human cognition, without committing ourselves a priori to a strong and unsupported claim in neuroscience, we must first either explain how we can provide multiple brain symbols with the same content, or explain how we can abandon standard models of computation. It is argued that both of these alternatives require us to explain the execution of complex tasks that have a cognition-like structure. Circularity or regress is thus threatened, unless noncomputationalist principles can provide the required explanations. But in the latter case, we do not know that noncomputationalist principles might not bear most of the weight of explaining cognition. Four possible types of computationalist theory are discussed; none appears to provide a promising solution to the problem. Thus, despite known difficulties in noncomputationalist investigations, we have every reason to pursue the search for noncomputationalist principles in cognitive theory
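The premise about standard models of computation is easy to exhibit: in a conventional computer the same symbol type is routinely inscribed at many distinct physical locations, and the machine treats all inscriptions alike because type identity is settled by content rather than location. The toy snippet below illustrates only this general point, not Robinson's two specific conditions.

```python
# Minimal illustration of "multiple inscriptions of symbols with the same
# representational content" in a conventional computer. It illustrates only the
# general point about standard computation, not Robinson's two conditions.
token_a = bytearray(b"CAT")   # one inscription of the symbol type 'CAT'
token_b = bytearray(b"CAT")   # a second, physically distinct inscription

# The two inscriptions occupy different storage locations...
print("distinct inscriptions:", id(token_a) != id(token_b))   # True

# ...yet the machine treats them as tokens of the same symbol type, because type
# identity is settled by their content ("shape"), not by where they are stored.
print("same symbol type:", token_a == token_b)                # True
```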
Roitblat, Herbert L. (2001). Computational grounding. Psycoloquy 12 (58).   (Cited by 1 | Google | More links)
Abstract: Harnad defines computation to mean the manipulation of physical symbol tokens on the basis of syntactic rules defined over the shapes of the symbols, independent of what, if anything, those symbols represent. He is, of course, free to define terms in any way that he chooses, and he is very clear about what he means by computation, but I am uncomfortable with this definition. It excludes, at least at a functional level of description, much of what a computer is actually used for, and much of what the brain/mind does. When I toss a Frisbee to the neighbor's dog, the dog does not, I think, engage in a symbolic soliloquy about the trajectory of the disc, the wind's effects on it, and formulas including lift and the acceleration due to gravity. There are symbolic formulas for each of these relations, but the dog, insofar as I can tell, does not use any of these formulas. Nevertheless, it computes these factors in order to intercept the disc in the air. I argue that determining the solution to a differential equation is at least as much computation as is processing symbols. The disagreement is over what counts as computation; I think that Harnad and I both agree that the dog solves the trajectory problem implicitly. This definition is important because, although Harnad offers a technical definition for what he means by computation, the folk definition of the term is probably interpreted differently, and I believe this leads to trouble
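Roitblat's claim that solving a differential equation is as much computation as symbol processing can be illustrated with a toy numerical integration. The speed, angle, and step size below are invented, and the dynamics are bare ballistics with no lift or drag; the point is only that the trajectory is computed by iterating a state update rather than by manipulating a closed-form symbolic formula.

```python
import math

# Toy illustration: compute a trajectory by iterating a state update rather than
# by evaluating a closed-form symbolic formula. The speed, angle, and step size
# are invented; the dynamics are bare ballistics (no lift or drag).
def landing_distance(speed, angle_deg, dt=0.001, g=9.81):
    """Numerically integrate projectile motion until the object returns to y = 0."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while y >= 0.0 or vy > 0.0:        # keep stepping until it is back down
        x += vx * dt
        vy -= g * dt
        y += vy * dt
    return x

# The iterated answer agrees with the closed-form range v^2*sin(2a)/g, even though
# no symbolic range formula is ever manipulated during the computation.
print(f"{landing_distance(10.0, 45.0):.2f} m")   # ~10.2 m, matching 10^2/9.81
```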
Schneider, Susan (2009). LOT, CTM, and the elephant in the room. Synthese 170 (2):235-250.   (Google | More links)
Abstract: According to the language of thought (LOT) approach and the related computational theory of mind (CTM), thinking is the processing of symbols in an inner mental language that is distinct from any public language. Herein, I explore a deep problem at the heart of the LOT/CTM program—it has yet to provide a plausible conception of a mental symbol
Schneider, Susan (forthcoming). The nature of primitive symbols in the language of thought. Mind and Language.   (Google | More links)
Abstract: This paper provides a theory of the nature of symbols in the language of thought (LOT). My discussion consists in three parts. In part one, I provide three arguments for the individuation of primitive symbols in terms of total computational role. The first of these arguments claims that Classicism requires that primitive symbols be typed in this manner; no other theory of typing will suffice. The second argument contends that without this manner of symbol individuation, there will be computational processes that fail to supervene on syntax, together with the rules of composition and the computational algorithms. The third argument says that cognitive science needs a natural kind that is typed by total computational role. Otherwise, either cognitive science will be incomplete, or its laws will have counterexamples. Then, part two defends this view from a criticism, offered by both Jerry Fodor and Jesse Prinz, who respond to my view with the charge that because the types themselves are individuated
Sun, Ron (2000). Symbol grounding: A new look at an old idea. Philosophical Psychology 13 (2):149-172.   (Cited by 39 | Google | More links)
Abstract: Symbols should be grounded, as has been argued before. But we insist that they should be grounded not only in subsymbolic activities, but also in the interaction between the agent and the world. The point is that concepts are not formed in isolation (from the world), in abstraction, or "objectively." They are formed in relation to the experience of agents, through their perceptual/motor apparatuses, in their world and linked to their goals and actions. This paper takes a detailed look at this relatively old issue, with a new perspective, aided by our work on computational cognitive model development. To further our understanding, we also go back in time to link up with earlier philosophical theories related to this issue. The result is an account that extends from computational mechanisms to philosophical abstractions
Taddeo, Mariarosaria & Floridi, Luciano (2008). A praxical solution of the symbol grounding problem. Minds and Machines.   (Google | More links)
Abstract: This article is the second step in our research into the Symbol Grounding Problem (SGP). In a previous work, we defined the main condition that must be satisfied by any strategy in order to provide a valid solution to the SGP, namely the zero semantic commitment condition (Z condition). We then showed that all the main strategies proposed so far fail to satisfy the Z condition, although they provide several important lessons to be followed by any new proposal. Here, we develop a new solution of the SGP. It is called praxical in order to stress the key role played by the interactions between the agents and their environment. It is based on a new theory of meaning—Action-based Semantics (AbS)—and on a new kind of artificial agents, called two-machine artificial agents (AM²). Thanks to their architecture, AM²s implement AbS, and this allows them to ground their symbols semantically and to develop some fairly advanced semantic abilities, including the development of semantically grounded communication and the elaboration of representations, while still respecting the Z condition
Thompson, Evan (1997). Symbol grounding: A bridge from artificial life to artificial intelligence. Brain and Cognition 34 (1):48-71.   (Cited by 8 | Google | More links)
Abstract: This paper develops a bridge from AL issues about the symbol–matter relation to AI issues about symbol-grounding by focusing on the concepts of formality and syntactic interpretability. Using the DNA triplet-amino acid specification relation as a paradigm, it is argued that syntactic properties can be grounded as high-level features of the non-syntactic interactions in a physical dynamical system. This argument provides the basis for a rebuttal of John Searle’s recent assertion that syntax is observer-relative (1990, 1992). But the argument as developed also challenges the classic symbol-processing theory of mind against which Searle is arguing, as well as the strong AL thesis that life is realizable in a purely computational medium. Finally, it provides a new line of support for the autonomous systems approach in AL and AI (Varela & Bourgine 1992a, 1992b).
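Thompson's paradigm case, the triplet-to-amino-acid specification relation, has a purely formal face that can be written down as a lookup table, even though in the cell the mapping is realized by non-syntactic chemical interactions. The fragment of the standard genetic code below (a few codons only) is offered just to display that formal face.

```python
# A fragment of the standard genetic code, shown as a formal (syntactic) lookup
# table. In the cell this mapping is realised by non-syntactic chemical
# interactions (tRNA charging, ribosomal binding); the syntactic description is
# a high-level feature of those interactions.
CODON_TABLE = {
    "AUG": "Met",   # also the usual start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "GCU": "Ala",
    "UAA": "STOP",
    "UAG": "STOP",
}

def translate(mrna):
    """Translate an mRNA string codon by codon until a stop codon (or the end)."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGGCUAA"))   # ['Met', 'Phe', 'Gly']
```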