This mode searches for entries containing all the entered words in their title, author, date, comment field, or in any of the many other fields shown on OPC pages.
This mode searches for entries whose author field contains the text string you entered. Note that the database does not have first names for all authors, so it is preferable to search by surname only. If you search for a full name or a name with an initial, enter it in the format used internally, namely "Lastname, Firstname" or "Lastname, F."
This mode differs from the all-fields mode in two respects. First, it searches some information not publicly available on the site, e.g., abstracts and excerpts gathered by the crawler, which are not always accurate but can help broaden a search. Second, you may prefix any term with '+' or '-' to narrow the search to entries containing it or not containing it, respectively. Terms not prefixed by '+' are not mandatory; instead, they are weighted according to their frequency in order to rank the search results. You may also search for a literal string composed of several words by enclosing it in double quotation marks (").
Note that short and/or common words are ignored by the search engine.
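As a quick illustration of the syntax described above, here are some hypothetical queries (the search terms are examples chosen for this illustration, not terms guaranteed to match anything in the database):

```
+grounding +symbol -connectionism    must contain "grounding" and "symbol"; must not contain "connectionism"
"symbol grounding problem"           must contain this exact phrase
harnad categorization                neither term is mandatory; results are ranked by weighted term frequency
```

Note that the '+' and '-' prefixes attach directly to a term with no intervening space, and a quoted phrase is treated as a single literal string rather than as separate weighted terms.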
Try PhilPapers to find published items that are available on a subscription basis.
Abstract: While I agree in general with Stevan Harnad's symbol grounding proposal, I do not believe "transduction" (or "analog process") PER SE is useful in distinguishing between what might best be described as different "degrees" of grounding and, hence, for determining whether a particular system might be capable of cognition. By 'degrees of grounding' I mean whether the effects of grounding go "all the way through" or not. Why is transduction limited in this regard? Because transduction is a physical process which does not speak to the issue of representation, and, therefore, does not explain HOW the informational aspects of signals impinging on sensory surfaces become embodied as symbols or HOW those symbols subsequently cause behavior, both of which, I believe, are important to grounding and to a system's cognitive capacity. Immunity to Searle's Chinese Room (CR) argument does not ensure that a particular system is cognitive, and whether or not a particular degree of groundedness enables a system to pass the Total Turing Test (TTT) may never be determined.
Abstract: Stevan Harnad and I seem to be thinking about many of the same issues. Sometimes we agree, sometimes we don't; but I always find his reasoning refreshing, his positions sensible, and the problems with which he's concerned to be of central importance to cognitive science. His "Grounding Symbols in the Analog World with Neural Nets" (= GS) is no exception. And GS not only exemplifies Harnad's virtues, it also provides a springboard for diving into Harnad-Bringsjord terrain.
Abstract: What is the relation between the material, conventional symbol structures that we encounter in the spoken and written word, and human thought? A common assumption, that structures a wide variety of otherwise competing views, is that the way in which these material, conventional symbol-structures do their work is by being translated into some kind of content-matching inner code. One alternative to this view is the tempting but thoroughly elusive idea that we somehow think in some natural language (such as English). In the present treatment I explore a third option, which I shall call the "complementarity" view of language. According to this third view the actual symbol structures of a given language add cognitive value by complementing (without being replicated by) the more basic modes of operation and representation endemic to the biological brain. The "cognitive bonus" that language brings is, on this model, not to be cashed out either via the ultimately mysterious notion of "thinking in a given natural language" or via some process of exhaustive translation into another inner code. Instead, we should try to think in terms of a kind of coordination dynamics in which the forms and structures of a language qua material symbol system play a key and irreducible role. Understanding language as a complementary cognitive resource is, I argue, an important part of the much larger project (sometimes glossed in terms of the "extended mind") of understanding human cognition as essentially and multiply hybrid: as involving a complex interplay between internal biological resources and external non-biological resources.
Abstract: Connectionism and computationalism are currently vying for hegemony in cognitive modeling. At first glance the opposition seems incoherent, because connectionism is itself computational, but the form of computationalism that has been the prime candidate for encoding the "language of thought" has been symbolic computationalism (Dietrich 1990; Fodor 1975; Harnad 1990c; Newell 1980; Pylyshyn 1984), whereas connectionism is nonsymbolic (Fodor & Pylyshyn 1988) or, as some have hopefully dubbed it, "subsymbolic" (Smolensky 1988). This paper will examine what is and is not a symbol system. A hybrid nonsymbolic/symbolic system will be sketched in which the meanings of the symbols are grounded bottom-up in the system's capacity to discriminate and identify the objects they refer to. Neural nets are one possible mechanism for learning the invariants in the analog sensory projection on which successful categorization is based. "Categorical perception" (Harnad 1987a), in which similarity space is "warped" in the service of categorization, turns out to be exhibited by both people and nets, and may mediate the constraints exerted by the analog world of objects on the formal world of symbols.
Abstract: What language allows us to do is to "steal" categories quickly and effortlessly through hearsay instead of having to earn them the hard way, through risky and time-consuming sensorimotor "toil" (trial-and-error learning, guided by corrective feedback from the consequences of miscategorisation). To make such linguistic "theft" possible, however, some, at least, of the denoting symbols of language must first be grounded in categories that have been earned through sensorimotor toil (or else in categories that have already been "prepared" for us through Darwinian theft by the genes of our ancestors); it cannot be linguistic theft all the way down. The symbols that denote categories must be grounded in the capacity to sort, label and interact with the proximal sensorimotor projections of their distal category-members in a way that coheres systematically with their semantic interpretations, both for individual symbols, and for symbols strung together to express truth-value-bearing propositions.
Abstract: "Symbol Grounding" is beginning to mean too many things to too many people. My own construal has always been simple: Cognition cannot be just computation, because computation is just the systematically interpretable manipulation of meaningless symbols, whereas the meanings of my thoughts don't depend on their interpretability or interpretation by someone else. On pain of infinite regress, then, symbol meanings must be grounded in something other than just their interpretability if they are to be candidates for what is going on in our heads. Neural nets may be one way to ground the names of concrete objects and events in the capacity to categorize them (by learning the invariants in their sensorimotor projections). These grounded elementary symbols could then be combined into symbol strings expressing propositions about more abstract categories. Grounding does not equal meaning, however, and does not solve any philosophical problems.
Abstract: There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the symbol grounding problem: How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) iconic representations, which are analogs of the proximal sensory projections of distal objects and events, and (2) categorical representations, which are learned and innate feature-detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) symbolic representations, grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g., An X is a Y that is Z). Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling. 
Such a hybrid model would not have an autonomous symbolic module, however; the symbolic functions would emerge as an intrinsically dedicated symbol system as a consequence of the bottom-up grounding of categories' names in their sensory representations. Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded.
Abstract: Harnad defines computation to mean the manipulation of physical symbol tokens on the basis of syntactic rules defined over the shapes of the symbols, independent of what, if anything, those symbols represent. He is, of course, free to define terms in any way that he chooses, and he is very clear about what he means by computation, but I am uncomfortable with this definition. It excludes, at least at a functional level of description, much of what a computer is actually used for, and much of what the brain/mind does. When I toss a Frisbee to the neighbor's dog, the dog does not, I think, engage in a symbolic soliloquy about the trajectory of the disc, the wind's effects on it, and formulas that include lift and the acceleration due to gravity. There are symbolic formulas for each of these relations, but the dog, insofar as I can tell, does not use any of these formulas. Nevertheless, it computes these factors in order to intercept the disc in the air. I argue that determining the solution to a differential equation is at least as much computation as is processing symbols. The disagreement is over what counts as computation; I think that Harnad and I both agree that the dog solves the trajectory problem implicitly. This definition is important because, although Harnad offers a technical definition for what he means by computation, the folk-definition of the term is probably interpreted differently, and I believe this leads to trouble.
Abstract: According to the language of thought (LOT) approach and the related computational theory of mind (CTM), thinking is the processing of symbols in an inner mental language that is distinct from any public language. Herein, I explore a deep problem at the heart of the LOT/CTM program—it has yet to provide a plausible conception of a mental symbol.
Abstract: This paper provides a theory of the nature of symbols in the language of thought (LOT). My discussion consists of three parts. In part one, I provide three arguments for the individuation of primitive symbols in terms of total computational role. The first of these arguments claims that Classicism requires that primitive symbols be typed in this manner; no other theory of typing will suffice. The second argument contends that without this manner of symbol individuation, there will be computational processes that fail to supervene on syntax, together with the rules of composition and the computational algorithms. The third argument says that cognitive science needs a natural kind that is typed by total computational role. Otherwise, either cognitive science will be incomplete, or its laws will have counterexamples. Then, part two defends this view from a criticism, offered by both Jerry Fodor and Jesse Prinz, who respond to my view with the charge that because the types themselves are individuated
Abstract: Symbols should be grounded, as has been argued before. But we insist that they should be grounded not only in subsymbolic activities, but also in the interaction between the agent and the world. The point is that concepts are not formed in isolation (from the world), in abstraction, or "objectively." They are formed in relation to the experience of agents, through their perceptual/motor apparatuses, in their world and linked to their goals and actions. This paper takes a detailed look at this relatively old issue, with a new perspective, aided by our work on computational cognitive model development. To further our understanding, we also go back in time to link up with earlier philosophical theories related to this issue. The result is an account that extends from computational mechanisms to philosophical abstractions.
Abstract: This article is the second step in our research into the Symbol Grounding Problem (SGP). In a previous work, we defined the main condition that must be satisfied by any strategy in order to provide a valid solution to the SGP, namely the zero semantic commitment condition (Z condition). We then showed that all the main strategies proposed so far fail to satisfy the Z condition, although they provide several important lessons to be followed by any new proposal. Here, we develop a new solution to the SGP. It is called praxical in order to stress the key role played by the interactions between the agents and their environment. It is based on a new theory of meaning—Action-based Semantics (AbS)—and on a new kind of artificial agent, called a two-machine artificial agent (AM²). Thanks to their architecture, AM²s implement AbS, and this allows them to ground their symbols semantically and to develop some fairly advanced semantic abilities, including the development of semantically grounded communication and the elaboration of representations, while still respecting the Z condition.