MindPapers is now part of PhilPapers: online research in philosophy, a new service with many more features.
 
 Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.
 

6.3e. Subsymbolic Computation (Subsymbolic Computation on PhilPapers)

Berkeley, Istvan S. N. (2006). Moving the goal posts: A reply to Dawson and Piercey. Minds and Machines 16 (4):471-478.   (Google | More links)
Abstract: Berkeley [Minds Machines 10 (2000) 1] described a methodology that showed the subsymbolic nature of an artificial neural network system that had been trained on a logic problem, originally described by Bechtel and Abrahamsen [Connectionism and the mind. Blackwells, Cambridge, MA, 1991]. It was also claimed in the conclusion of that paper that the evidence was suggestive that the network might, in fact, count as a symbolic system. Dawson and Piercey [Minds Machines 11 (2001) 197] took issue with this latter claim. They described some lesioning studies that they argued showed that Berkeley's (2000) conclusions were premature. In this paper, these lesioning studies are replicated and it is shown that the effects that Dawson and Piercey rely upon for their argument are merely an artifact of a threshold function they chose to employ. When a threshold function much closer to that deployed in the original studies is used, the significant effects disappear.
Berkeley, Istvan S. N. (2000). What the #$*%! Is a subsymbol? Minds and Machines 10 (1):1-13.   (Cited by 5 | Google | More links)
Abstract: In 1988, Smolensky proposed that connectionist processing systems should be understood as operating at what he termed the 'subsymbolic' level. Subsymbolic systems should be understood by comparing them to symbolic systems, in Smolensky's view. Up until recently, there have been real problems with analyzing and interpreting the operation of connectionist systems which have undergone training. However, recently published work on a network trained on a set of logic problems originally studied by Bechtel and Abrahamsen (1991) seems to offer the potential to provide a detailed, empirically based answer to questions about the nature of subsymbols. In this paper, a network analysis procedure and the results obtained using it are discussed. This provides the basis for an insight into the nature of subsymbols, which is surprising.
Chalmers, David J. (1992). Subsymbolic computation and the Chinese room. In J. Dinsmore (ed.), The Symbolic and Connectionist Paradigms: Closing the Gap. Lawrence Erlbaum.   (Cited by 29 | Annotation | Google | More links)
Abstract: More than a decade ago, philosopher John Searle started a long-running controversy with his paper "Minds, Brains, and Programs" (Searle, 1980a), an attack on the ambitious claims of artificial intelligence (AI). With his now famous _Chinese Room_ argument, Searle claimed to show that despite the best efforts of AI researchers, a computer could never recreate such vital properties of human mentality as intentionality, subjectivity, and understanding. The AI research program is based on the underlying assumption that all important aspects of human cognition may in principle be captured in a computational model. This assumption stems from the belief that beyond a certain level, implementational details are irrelevant to cognition. According to this belief, neurons, and biological wetware in general, have no preferred status as the substrate for a mind. As it happens, the best examples of minds we have at present have arisen from a carbon-based substrate, but this is due to constraints of evolution and possibly historical accidents, rather than to an absolute metaphysical necessity. As a result of this belief, many cognitive scientists have chosen to focus not on the biological substrate of the mind, but instead on the _abstract causal structure_ that the mind embodies (at an appropriate level of abstraction). The view that it is abstract causal structure that is essential to mentality has been an implicit assumption of the AI research program since Turing (1950), but was first articulated explicitly, in various forms, by Putnam (1960), Armstrong (1970) and Lewis (1970), and has become known as _functionalism_. From here, it is a very short step to _computationalism_, the view that computational structure is what is important in capturing the essence of mentality. This step follows from a belief that any abstract causal structure can be captured computationally: a belief made plausible by the Church–Turing Thesis, which articulates the power
Clark, Andy (1993). Superpositional connectionism: A reply to Marinov. Minds and Machines 3 (3):271-81.   (Cited by 2 | Google | More links)
Abstract: Marinov's critique, I argue, is vitiated by its failure to recognize the distinctive role of superposition within the distributed connectionist paradigm. The use of so-called subsymbolic distributed encodings alone is not, I agree, enough to justify treating distributed connectionism as a distinctive approach. It has always been clear that microfeatural decomposition is both possible and actual within the confines of recognizably classical approaches. When such approaches also involve statistically-driven learning algorithms — as in the case of ID3 — the fundamental differences become even harder to spot. To see them, it is necessary to consider not just the nature of an acquired input-output function but the nature of the representational scheme underlying it. Differences between such schemes make themselves best felt outside the domain of immediate problem solving. It is in the more extended contexts of performance DURING learning and cognitive change as a result of SUBSEQUENT training on new tasks (or simultaneous training on several tasks) that the effects of superpositional storage techniques come to the fore. I conclude that subsymbols, distribution and statistically driven learning alone are indeed not of the essence. But connectionism is not just about subsymbols and distribution. It is about the generation of whole subsymbol SYSTEMS in which multiple distributed representations are created and superposed.
Cleeremans, Axel (1998). The other hard problem: How to bridge the gap between subsymbolic and symbolic cognition. Behavioral and Brain Sciences 21 (1):22-23.   (Google | More links)
Abstract: The constructivist notion that features are purely functional is incompatible with the classical computational metaphor of mind. I suggest that the discontent expressed by Schyns, Goldstone and Thibaut about fixed-features theories of categorization reflects the growing impact of connectionism, and show how their perspective is similar to recent research on implicit learning, consciousness, and development. A hard problem remains, however: how to bridge the gap between subsymbolic and symbolic cognition.
Hofstadter, Douglas R. (1983). Artificial intelligence: Subcognition as computation. In Fritz Machlup (ed.), The Study of Information: Interdisciplinary Messages. Wiley.   (Cited by 12 | Annotation | Google)
Marinov, Marin (1993). On the spuriousness of the symbolic/subsymbolic distinction. Minds and Machines 3 (3):253-70.   (Cited by 2 | Annotation | Google | More links)
Abstract: The article criticises the attempt to establish connectionism as an alternative theory of human cognitive architecture through the introduction of the symbolic/subsymbolic distinction (Smolensky, 1988). The reasons for the introduction of this distinction are discussed and found to be unconvincing. It is shown that the brittleness problem has been solved for a large class of symbolic learning systems, e.g. the class of top-down induction of decision-trees (TDIDT) learning systems. Also, the process of articulating expert knowledge in rules seems quite practical for many important domains, including common sense knowledge. The article discusses several experimental comparisons between TDIDT systems and artificial neural networks using the error backpropagation algorithm (ANNs using BP). The properties of one of the TDIDT systems, ID3 (Quinlan, 1986a), are examined in detail. It is argued that the differences in performance between ANNs using BP and TDIDT systems reflect slightly different inductive biases but are not systematic; these differences do not support the view that symbolic and subsymbolic systems are fundamentally incompatible. It is concluded that the symbolic/subsymbolic distinction is spurious. It cannot establish connectionism as an alternative cognitive architecture.
Rosenberg, Jay F. (1990). Treating connectionism properly: Reflections on Smolensky. Psychological Research 52.   (Cited by 5 | Annotation | Google)
Smolensky, Paul (1987). Connectionist AI, symbolic AI, and the brain. AI Review 1:95-109.   (Cited by 18 | Annotation | Google)
Smolensky, Paul (1988). On the proper treatment of connectionism. Behavioral and Brain Sciences 11:1-23.   (Cited by 902 | Annotation | Google)