MindPapers is now part of PhilPapers: online research in philosophy, a new service with many more features.
 
 Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.
 
   

6.3. Philosophy of Connectionism (Philosophy of Connectionism on PhilPapers)

6.3a Connectionism and Compositionality


Aizawa, Kenneth (1997). Explaining systematicity. Mind and Language 12 (2):115-36.   (Cited by 48 | Google | More links)
Aizawa, Kenneth (1997). Exhibiting versus explaining systematicity: A reply to Hadley and Hayward. Minds and Machines 7 (1):39-55.   (Google | More links)
Aizawa, Kenneth (1997). The role of the systematicity argument in classicism and connectionism. In S. O'Nuallain (ed.), Two Sciences of Mind. John Benjamins.   (Cited by 4 | Google)
Aizawa, Kenneth (2003). The Systematicity Arguments. Kluwer.   (Cited by 4 | Google)
Abstract: The Systematicity Arguments is the only book-length treatment of the systematicity and productivity arguments.
Antony, Michael V. (1991). Fodor and Pylyshyn on connectionism. Minds and Machines 1 (3):321-41.   (Cited by 3 | Annotation | Google | More links)
Abstract:   Fodor and Pylyshyn (1988) have argued that the cognitive architecture is not Connectionist. Their argument takes the following form: (1) the cognitive architecture is Classical; (2) Classicalism and Connectionism are incompatible; (3) therefore the cognitive architecture is not Connectionist. In this essay I argue that Fodor and Pylyshyn's defenses of (1) and (2) are inadequate. Their argument for (1), based on their claim that Classicalism best explains the systematicity of cognitive capacities, is an invalid instance of inference to the best explanation. And their argument for (2) turns out to be question-begging. The upshot is that, while Fodor and Pylyshyn have presented Connectionists with the important empirical challenge of explaining systematicity, they have failed to provide sufficient reason for inferring that the cognitive architecture is Classical and not Connectionist
Aydede, Murat (1997). Language of thought: The connectionist contribution. Minds and Machines 7 (1):57-101.   (Cited by 15 | Google | More links)
Abstract:   Fodor and Pylyshyn's critique of connectionism has posed a challenge to connectionists: Adequately explain such nomological regularities as systematicity and productivity without postulating a "language of thought" (LOT). Some connectionists like Smolensky took the challenge very seriously, and attempted to meet it by developing models that were supposed to be non-classical. At the core of these attempts lies the claim that connectionist models can provide a representational system with a combinatorial syntax and processes sensitive to syntactic structure. They are not implementation models because, it is claimed, the way they obtain syntax and structure sensitivity is not "concatenative," hence "radically different" from the way classicists handle them. In this paper, I offer an analysis of what it is to physically satisfy/realize a formal system. In this context, I examine the minimal truth-conditions of LOT Hypothesis. From my analysis it will follow that concatenative realization of formal systems is irrelevant to LOTH since the very notion of LOT is indifferent to such an implementation level issue as concatenation. I will conclude that to the extent to which they can explain the law-like cognitive regularities, a certain class of connectionist models proposed as radical alternatives to the classical LOT paradigm will in fact turn out to be LOT models, even though new and potentially very exciting ones
Butler, Keith (1993). Connectionism, classical cognitivism, and the relation between cognitive and implementational levels of analysis. Philosophical Psychology 6 (3):321-33.   (Cited by 6 | Annotation | Google)
Abstract: This paper discusses the relation between cognitive and implementational levels of analysis. Chalmers (1990, 1993) argues that a connectionist implementation of a classical cognitive architecture possesses a compositional semantics, and therefore undercuts Fodor and Pylyshyn's (1988) argument that connectionist networks cannot possess a compositional semantics. I argue that Chalmers' argument misconstrues the relation between cognitive and implementational levels of analysis. This paper clarifies the distinction, and shows that while Fodor and Pylyshyn's argument survives Chalmers' critique, it cannot be used to establish the irrelevance of neurophysiological implementation to cognitive modeling; some aspects of Chater and Oaksford's (1990) response to Fodor and Pylyshyn, though not all, are therefore cogent
Butler, Keith (1995). Compositionality in cognitive models: The real issue. Philosophical Studies 78 (2):153-62.   (Cited by 3 | Google | More links)
Butler, Keith (1993). On Clark on systematicity and connectionism. British Journal for the Philosophy of Science 44 (1):37-44.   (Cited by 1 | Annotation | Google | More links)
Butler, Keith (1991). Towards a connectionist cognitive architecture. Mind and Language 6 (3):252-72.   (Cited by 12 | Annotation | Google | More links)
Chater, Nick & Oaksford, Mike (1990). Autonomy, implementation and cognitive architecture: A reply to Fodor and Pylyshyn. Cognition 34:93-107.   (Cited by 63 | Annotation | Google)
Chalmers, David J. (1993). Connectionism and compositionality: Why Fodor and Pylyshyn were wrong. Philosophical Psychology 6 (3):305-319.   (Annotation | Google)
Abstract: This paper offers both a theoretical and an experimental perspective on the relationship between connectionist and Classical (symbol-processing) models. Firstly, a serious flaw in Fodor and Pylyshyn’s argument against connectionism is pointed out: if, in fact, a part of their argument is valid, then it establishes a conclusion quite different from that which they intend, a conclusion which is demonstrably false. The source of this flaw is traced to an underestimation of the differences between localist and distributed representation. It has been claimed that distributed representations cannot support systematic operations, or that if they can, then they will be mere implementations of traditional ideas. This paper presents experimental evidence against this conclusion: distributed representations can be used to support direct structure-sensitive operations, in a manner quite unlike the Classical approach. Finally, it is argued that even if Fodor and Pylyshyn’s argument that connectionist models of compositionality must be mere implementations were correct, then this would still not be a serious argument against connectionism as a theory of mind
Chalmers, David J. (online). Deep systematicity and connectionist representation.   (Google | More links)
Abstract: 1. I think that by emphasizing theoretical spaces of representations, Andy has put his finger on an issue that is key to connectionism's success, and whose investigation will be a key determinant of the field's further progress. I also think that if we look at representational spaces in the right way, we can see that they are deeply related to the classical phenomenon of systematicity in representation. I want to argue that the key to understanding representational spaces, and in particular their ability to capture the deep organization underlying various problems, lies in the idea of what I will call
Chalmers, David J. (1990). Syntactic transformations on distributed representations. Connection Science 2:53-62.   (Cited by 180 | Annotation | Google | More links)
Abstract: There has been much interest in the possibility of connectionist models whose representations can be endowed with compositional structure, and a variety of such models have been proposed. These models typically use distributed representations that arise from the functional composition of constituent parts. Functional composition and decomposition alone, however, yield only an implementation of classical symbolic theories. This paper explores the possibility of moving beyond implementation by exploiting holistic structure-sensitive operations on distributed representations. An experiment is performed using Pollack’s Recursive Auto-Associative Memory. RAAM is used to construct distributed representations of syntactically structured sentences. A feed-forward network is then trained to operate directly on these representations, modeling syntactic transformations of the represented sentences. Successful training and generalization is obtained, demonstrating that the implicit structure present in these representations can be used for a kind of structure-sensitive processing unique to the connectionist domain
Christiansen, M. H. & Chater, Nick (1994). Generalization and connectionist language learning. Mind and Language 9:273-87.   (Cited by 45 | Google | More links)
Cummins, Robert E. (1996). Systematicity. Journal of Philosophy 93 (12):591-614.   (Cited by 14 | Google | More links)
Davis, Wayne A. (2005). On begging the systematicity question. Journal of Philosophical Research 30:399-404.   (Google)
Fetzer, James H. (1992). Connectionism and cognition: Why Fodor and Pylyshyn are wrong. In A. Clark & Ronald Lutz (eds.), Connectionism in Context. Springer-Verlag.   (Cited by 7 | Google)
Fodor, Jerry A. & Pylyshyn, Zenon W. (1988). Connectionism and cognitive architecture. Cognition 28:3-71.   (Cited by 1496 | Annotation | Google | More links)
Abstract: This paper explores the difference between Connectionist proposals for cognitive architecture and the sorts of models that have traditionally been assumed in cognitive science. We claim that the major distinction is that, while both Connectionist and Classical architectures postulate representational mental states, the latter but not the former are committed to a symbol-level of representation, or to a ‘language of thought’: i.e., to representational states that have combinatorial syntactic and semantic structure. Several arguments for combinatorial structure in mental representations are then reviewed. These include arguments based on the ‘systematicity’ of mental representation: i.e., on the fact that cognitive capacities always exhibit certain symmetries, so that the ability to entertain a given thought implies the ability to entertain thoughts with semantically related contents. We claim that such arguments make a powerful case that mind/brain architecture is not Connectionist at the cognitive level. We then consider the possibility that Connectionism may provide an account of the neural (or ‘abstract neurological’) structures in which Classical cognitive architecture is implemented. We survey a number of the standard arguments that have been offered in favor of Connectionism, and conclude that they are coherent only on this interpretation
Fodor, Jerry A. & McLaughlin, Brian P. (1990). Connectionism and the problem of systematicity: Why Smolensky's solution doesn't work. Cognition 35:183-205.   (Cited by 193 | Annotation | Google)
Fodor, Jerry A. (1997). Connectionism and the problem of systematicity (continued): Why Smolensky's solution still doesn't work. Cognition 62:109-19.   (Cited by 25 | Google | More links)
Garfield, Jay L. (1997). Mentalese not spoken here: Computation, cognition, and causation. Philosophical Psychology 10 (4):413-35.   (Cited by 38 | Google)
Abstract: Classical computational modellers of mind urge that the mind is something like a von Neumann computer operating over a system of symbols constituting a language of thought. Such an architecture, they argue, presents us with the best explanation of the compositionality, systematicity and productivity of thought. The language of thought hypothesis is supported by additional independent arguments made popular by Jerry Fodor. Paul Smolensky has developed a connectionist architecture he claims adequately explains compositionality, systematicity and productivity without positing any language of thought, and without positing any operations over a set of symbols. This architecture encodes the information represented in linguistic trees without explicitly representing those trees or their constituents, and indeed without employing any representational vehicles with constituent structure. In a recent article, Fodor (1997; Connectionism and systematicity, Cognition, 62, 109-119) argues that Smolensky's proposal does not work. I defend Smolensky against Fodor's attack, and use this interchange as a vehicle for exploring and criticising the “Language of Thought” hypothesis more generally and the arguments Fodor adduces on its behalf
Garcia-Carpintero, Manuel (1996). Two spurious varieties of compositionality. Minds and Machines 6 (2):159-72.   (Google | More links)
Abstract:   The paper examines an alleged distinction claimed to exist by Van Gelder between two different, but equally acceptable ways of accounting for the systematicity of cognitive output (two varieties of compositionality): concatenative compositionality vs. functional compositionality. The second is supposed to provide an explanation alternative to the Language of Thought Hypothesis. I contend that, if the definition of concatenative compositionality is taken in a different way from the official one given by Van Gelder (but one suggested by some of his formulations) then there is indeed a different sort of compositionality; however, the second variety is not an alternative to the language of thought in that case. On the other hand, if the concept of concatenative compositionality is taken in a different way, along the lines of Van Gelder's explicit definition, then there is no reason to think that there is an alternative way of explaining systematicity
Guarini, Marcello (1996). Tensor products and split-level architecture: Foundational issues in the classicism-connectionism debate. Philosophy of Science 63 (3):S239-S247.   (Google | More links)
Hadley, Robert F. (1997). Cognition, systematicity, and nomic necessity. Mind and Language 12 (2):137-53.   (Cited by 12 | Google | More links)
Hadley, Robert F. (1997). Explaining systematicity: A reply to Kenneth Aizawa. Minds and Machines 7 (4):571-79.   (Cited by 3 | Google | More links)
Abstract:   In his discussion of results which I (with Michael Hayward) recently reported in this journal, Kenneth Aizawa takes issue with two of our conclusions, which are: (a) that our connectionist model provides a basis for explaining systematicity within the realm of sentence comprehension, and subject to a limited range of syntax (b) that the model does not employ structure-sensitive processing, and that this is clearly true in the early stages of the network's training. Ultimately, Aizawa rejects both (a) and (b) for reasons which I think are ill-founded. In what follows, I offer a defense of our position. In particular, I argue (1) that Aizawa adopts a standard of explanation that many accepted scientific explanations could not meet, and (2) that Aizawa misconstrues the relevant meaning of structure-sensitive process
Hadley, Robert F. (1994). Systematicity in connectionist language learning. Mind and Language 9:247-72.   (Cited by 74 | Annotation | Google | More links)
Hadley, Robert F. (1994). Systematicity revisited. Mind and Language 9:431-44.   (Cited by 34 | Google | More links)
Hadley, Robert F. & Hayward, M. B. (1997). Strong semantic systematicity from Hebbian connectionist learning. Minds and Machines 7 (1):1-55.   (Cited by 46 | Google | More links)
Abstract:   Fodor's and Pylyshyn's stand on systematicity in thought and language has been debated and criticized. Van Gelder and Niklasson, among others, have argued that Fodor and Pylyshyn offer no precise definition of systematicity. However, our concern here is with a learning based formulation of that concept. In particular, Hadley has proposed that a network exhibits strong semantic systematicity when, as a result of training, it can assign appropriate meaning representations to novel sentences (both simple and embedded) which contain words in syntactic positions they did not occupy during training. The experience of researchers indicates that strong systematicity in any form is difficult to achieve in connectionist systems. Herein we describe a network which displays strong semantic systematicity in response to Hebbian, connectionist training. During training, two-thirds of all nouns are presented only in a single syntactic position (either as grammatical subject or object). Yet, during testing, the network correctly interprets thousands of sentences containing those nouns in novel positions. In addition, the network generalizes to novel levels of embedding. Successful training requires a corpus of about 1000 sentences, and network training is quite rapid. The architecture and learning algorithms are purely connectionist, but classical insights are discernible in one respect, viz., that complex semantic representations spatially contain their semantic constituents. However, in other important respects, the architecture is distinctly non-classical
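The Hebbian training mentioned in this abstract strengthens each connection in proportion to the co-activation of the units it links (roughly, delta-w = lr * x * y). The sketch below is a generic linear associator trained with that rule, not Hadley and Hayward's actual architecture; the patterns, sizes, and learning rate are illustrative choices:

```python
def train_hebbian(pairs, n_in, n_out, lr=1.0):
    # Weight matrix accumulates outer products of input/output patterns:
    # w[i][j] += lr * x[j] * y[i]  (the Hebbian co-activation rule).
    w = [[0.0] * n_in for _ in range(n_out)]
    for x, y in pairs:
        for i in range(n_out):
            for j in range(n_in):
                w[i][j] += lr * x[j] * y[i]
    return w

def recall(w, x):
    # Linear readout: each output unit sums its weighted inputs.
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

# Two orthogonal input patterns mapped to distinct output patterns.
pairs = [([1, 0, 0, 0], [1, 0]),
         ([0, 1, 0, 0], [0, 1])]
w = train_hebbian(pairs, n_in=4, n_out=2)

out = recall(w, [1, 0, 0, 0])   # recovers [1.0, 0.0] exactly (orthogonal inputs)
```

Because the stored input patterns are orthogonal, recall is exact; with correlated patterns the outputs would interfere, which is one reason achieving systematic generalization with Hebbian learning is considered difficult.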
Haselager, W. F. G. & Van Rappard, J. F. H. (1998). Connectionism, systematicity, and the frame problem. Minds and Machines 8 (2):161-179.   (Cited by 11 | Google | More links)
Abstract:   This paper investigates connectionism's potential to solve the frame problem. The frame problem arises in the context of modelling the human ability to see the relevant consequences of events in a situation. It has been claimed to be unsolvable for classical cognitive science, but easily manageable for connectionism. We will focus on a representational approach to the frame problem which advocates the use of intrinsic representations. We argue that although connectionism's distributed representations may look promising from this perspective, doubts can be raised about the potential of distributed representations to allow large amounts of complexly structured information to be adequately encoded and processed. It is questionable whether connectionist models that are claimed to effectively represent structured information can be scaled up to a realistic extent. We conclude that the frame problem provides a difficulty to connectionism that is no less serious than the obstacle it constitutes for classical cognitive science
Hawthorne, John (1989). On the compatibility of connectionist and classical models. Philosophical Psychology 2 (1):5-16.   (Cited by 9 | Annotation | Google)
Abstract: This paper presents considerations in favour of the view that traditional (classical) architectures can be seen as emergent features of connectionist networks with distributed representation. A recent paper by William Bechtel (1988) which argues for a similar conclusion is unsatisfactory in that it fails to consider whether the compositional syntax and semantics attributed to mental representations by classical models can emerge within a connectionist network. The compatibility of the two paradigms hinges largely, I suggest, on how this question is answered. Focusing on the issue of syntax, I argue that while such structure is lacking in connectionist models with local representation, it can be accommodated within networks where representation is distributed. I discuss an important paper by Smolensky (1988) which attempts to show how connectionists can incorporate the relevant syntactic structure, suggesting that some criticisms levelled against that paper by Fodor & Pylyshyn (1988) are wanting. I then go on to indicate a strategy by which a compositional syntax and semantics can be defined for the sort of network that Smolensky describes. I conclude that since the connectionist can respect the central tenets of classicism, the two approaches are compatible with one another
Horgan, Terence E. & Tienson, John L. (1991). Structured representations in connectionist systems? In S. Davis (ed.), Connectionism: Theory and Practice. Oxford University Press.   (Cited by 9 | Annotation | Google)
Macdonald, Cynthia (1995). Classicism v. connectionism. In C. Macdonald & Graham F. Macdonald (eds.), Connectionism: Debates on Psychological Explanation. Cambridge: Blackwell.   (Google)
Matthews, Robert J. (1997). Can connectionists explain systematicity? Mind and Language 12 (2):154-77.   (Cited by 9 | Google | More links)
Matthews, Robert J. (1994). Three-concept Monte: Explanation, implementation, and systematicity. Synthese 101 (3):347-63.   (Cited by 12 | Annotation | Google | More links)
Abstract:   Fodor and Pylyshyn (1988), Fodor and McLaughlin (1990) and McLaughlin (1993) challenge connectionists to explain systematicity without simply implementing a classical architecture. In this paper I argue that what makes the challenge difficult for connectionists to meet has less to do with what is to be explained than with what is to count as an explanation. Fodor et al. are prepared to admit as explanatory, accounts of a sort that only classical models can provide. If connectionists are to meet the challenge, they are going to have to insist on the propriety of changing what counts as an explanation of systematicity. Once that is done, there would seem to be as yet no reason to suppose that connectionists are unable to explain systematicity
McLaughlin, Brian P. (1992). Systematicity, conceptual truth, and evolution. Philosophy and the Cognitive Sciences 34:217-234.   (Cited by 13 | Annotation | Google)
McLaughlin, Brian P. (1993). The connectionism/classicism battle to win souls. Philosophical Studies 71 (2):163-190.   (Cited by 19 | Annotation | Google | More links)
Niklasson, L. F. & van Gelder, Tim (1994). On being systematically connectionist. Mind and Language 9:288-302.   (Cited by 42 | Google | More links)
Abstract: In 1988 Fodor and Pylyshyn issued a challenge to the newly-popular connectionism: explain the systematicity of cognition without merely implementing a so-called classical architecture. Since that time quite a number of connectionist models have been put forward, either by their designers or by others, as in some measure demonstrating that the challenge can be met (e.g., Pollack, 1988, 1990; Smolensky, 1990; Chalmers, 1990; Niklasson and Sharkey, 1992; Brousse, 1993). Unfortunately, it has generally been unclear whether these models actually do have this implication (see, for instance, the extensive philosophical debate in Smolensky, 1988; Fodor and McLaughlin, 1990; van Gelder, 1990, 1991; McLaughlin, 1993a, 1993b; Clark, 1993). Indeed, we know of no major supporter of classical orthodoxy who has felt compelled, by connectionist models and arguments, to concede in print that connectionists have in fact delivered a non-classical explanation of systematicity
Petersen, Steven E. & Roskies, Adina L. (2001). Visualizing human brain function. In E. Bizzi, P. Calissano & V. Volterra (eds.), Frontiers of Life, Vol III: The Intelligent Systems, Part One: The Brain of Homo Sapiens. Academic Press.   (Google)
Abstract: Several recently developed techniques enable the investigation of the neural basis of cognitive function in the human brain. Two of these, PET and fMRI, yield whole-brain images reflecting regional neural activity associated with the performance of specific tasks. This article explores the spatial and temporal capabilities and limitations of these techniques, and discusses technical, biological, and cognitive issues relevant to understanding the goals and methods of neuroimaging studies. The types of advances in understanding cognitive and brain function made possible with these methods are illustrated with examples from the neuroimaging literature
Phillips, Stephen H. (2002). Does classicism explain universality? Minds and Machines 12 (3):423-434.   (Cited by 1 | Google | More links)
Abstract:   One of the hallmarks of human cognition is the capacity to generalize over arbitrary constituents. Recently, Marcus (1998, 1998a, b; Cognition 66, p. 153; Cognitive Psychology 37, p. 243) argued that this capacity, called universal generalization (universality), is not supported by Connectionist models. Instead, universality is best explained by Classical symbol systems, with Connectionism as its implementation. Here it is argued that universality is also a problem for Classicism in that the syntax-sensitive rules that are supposed to provide causal explanations of mental processes are either too strict, precluding possible generalizations; or too lax, providing no information as to the appropriate alternative. Consequently, universality is not explained by a Classical theory
Plate, Tony A. (2003). Holographic Reduced Representation: Distributed Representation for Cognitive Structures. Center for the Study of Language and Information.   (Cited by 18 | Google)
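Plate's holographic reduced representations keep composite and constituent vectors in the same fixed dimensionality by binding with circular convolution and unbinding, approximately, by convolving with the involution of the binding vector, followed by a clean-up comparison against known vectors. A rough sketch of that scheme in plain Python (the vector size, random seed, and item names are illustrative assumptions):

```python
import math
import random

def cconv(a, b):
    # Circular convolution: c[k] = sum_j a[j] * b[(k - j) mod n].
    n = len(a)
    return [sum(a[j] * b[(k - j) % n] for j in range(n)) for k in range(n)]

def involution(a):
    # a* with a*[k] = a[-k mod n]; convolving with a* approximately unbinds.
    return [a[0]] + a[:0:-1]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def randvec(n):
    # Elements drawn from N(0, 1/n) so vectors have roughly unit length.
    return [random.gauss(0, 1 / math.sqrt(n)) for _ in range(n)]

random.seed(0)
n = 256
role, john, mary = randvec(n), randvec(n), randvec(n)

trace = cconv(role, john)                  # bind "john" to "role"
probe = cconv(involution(role), trace)     # noisy reconstruction of "john"

# Clean-up: the probe is far more similar to "john" than to "mary".
```

The decoded vector is only an approximation of the original, which is why a clean-up memory of known items is part of the architecture; the dimensionality never grows, making the composition functional rather than concatenative.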
Pollack, Jordan B. (1990). Recursive distributed representations. Artificial Intelligence 46:77-105.   (Cited by 539 | Annotation | Google | More links)
Rowlands, Mark (1994). Connectionism and the language of thought. British Journal for the Philosophy of Science 45 (2):485-503.   (Annotation | Google | More links)
Abstract: In an influential critique, Jerry Fodor and Zenon Pylyshyn point to the existence of a potentially devastating dilemma for connectionism (Fodor and Pylyshyn [1988]). Either connectionist models consist in mere associations of unstructured representations, or they consist in processes involving complex representations. If the former, connectionism is mere associationism, and will not be capable of accounting for very much of cognition. If the latter, then connectionist models concern only the implementation of cognitive processes, and are, therefore, not informative at the level of cognition. I shall argue that Fodor and Pylyshyn's argument is based on a crucial misunderstanding, the same misunderstanding which motivates the entire language of thought hypothesis
Schröder, Jürgen (1998). Knowledge of rules, causal systematicity, and the language of thought. Synthese 117 (3):313-330.   (Google | More links)
Abstract:   Martin Davies' criterion for the knowledge of implicit rules, viz. the causal systematicity of cognitive processes, is first exposed. Then the inference from causal systematicity of a process to syntactic properties of the input states is examined. It is argued that Davies' notion of a syntactic property is too weak to bear the conclusion that causal systematicity implies a language of thought as far as the input states are concerned. Next, it is shown that Davies' criterion leads to a counterintuitive consequence: it groups together distributed connectionist systems with look-up tables. To avoid this consequence, a modified construal of causal systematicity is proposed and Davies' argument for the causal systematicity of thought is shown to be question-begging. It is briefly sketched how the modified construal links up with multiple dispositions of the same categorical base. Finally, the question of the causal efficacy of single rules is distinguished from the question of their psychological reality: implicit rules might be psychologically real without being causally efficacious
Smolensky, Paul (1991). Connectionism, constituency and the language of thought. In Barry M. Loewer & Georges Rey (eds.), Meaning in Mind: Fodor and His Critics. Blackwell.   (Cited by 68 | Annotation | Google)
Smolensky, Paul (1995). Constituent structure and explanation in an integrated connectionist/symbolic cognitive architecture. In C. Macdonald (ed.), Connectionism: Debates on Psychological Explanation. Blackwell.   (Cited by 51 | Google)
Smolensky, Paul (1987). The constituent structure of connectionist mental states. Southern Journal of Philosophy Supplement 26:137-60.   (Cited by 2 | Annotation | Google)
Smolensky, Paul (1990). Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial Intelligence 46:159-216.   (Cited by 335 | Annotation | Google | More links)
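Smolensky's tensor product scheme represents a filler/role binding as the outer (tensor) product of a filler vector and a role vector, and a whole structure as the superposition (sum) of its bindings; with orthonormal role vectors, each filler can be recovered exactly by taking inner products with the corresponding role. A minimal sketch in plain Python (the two-dimensional vectors and the "agent"/"patient" roles are toy choices for illustration):

```python
def outer(u, v):
    # Tensor (outer) product: T[i][j] = u[i] * v[j].
    return [[ui * vj for vj in v] for ui in u]

def add(m1, m2):
    # Superposition: element-wise sum of two binding tensors.
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(m1, m2)]

def unbind(tensor, role):
    # Inner product of each filler-indexed slice with the role vector
    # recovers the filler exactly when the roles are orthonormal.
    return [sum(t * r for t, r in zip(row, role)) for row in tensor]

# Orthonormal role vectors (assumed, illustrative).
r_agent, r_patient = [1.0, 0.0], [0.0, 1.0]
# Filler vectors standing in for "John" and "Mary".
f_john, f_mary = [0.9, 0.1], [0.2, 0.8]

# "John loves Mary": superimpose the role/filler bindings into one tensor.
sentence = add(outer(f_john, r_agent), outer(f_mary, r_patient))

agent = unbind(sentence, r_agent)       # recovers f_john: [0.9, 0.1]
patient = unbind(sentence, r_patient)   # recovers f_mary: [0.2, 0.8]
```

With non-orthogonal roles the recovered fillers would be blended, which is part of what the debate over whether such schemes merely implement classical constituent structure turns on.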
van Gelder, Tim (1990). Compositionality: A connectionist variation on a classical theme. Cognitive Science 14:355-84.   (Cited by 187 | Annotation | Google | More links)
van Gelder, Tim (online). Can connectionist models exhibit non-classical structure sensitivity?   (Google | More links)
van Gelder, Tim (1991). Classical questions, radical answers. In Terence E. Horgan & John L. Tienson (eds.), Connectionism and the Philosophy of Mind. Kluwer.   (Cited by 20 | Annotation | Google)
Waskan, Jonathan A. & Bechtel, William P. (1997). Directions in connectionist research: Tractable computations without syntactically structured representations. Metaphilosophy 28 (1-2):31-62.   (Cited by 1 | Google | More links)
Abstract: Figure 1: A prototypical example of a three-layer feedforward network, used by Plunkett and Marchman (1991) to simulate learning the past tense of English verbs. The input units encode representations of the three phonemes of the present tense of the artificial words used in this simulation. The network is trained to produce a representation of the phonemes employed in the past tense form and the suffix (/d/, /ed/, or /t/) used on regular verbs. To run the network, each input unit is assigned an activation value of 0 or 1, depending on whether the feature is present or not. Each input unit is connected to each of the 30 hidden units by a weighted connection and provides an input to each hidden unit equal to the product of the input unit's activation and the weight. Each hidden unit's activation is then determined by summing over the values coming from each input unit to determine a net input, and then applying a non-linear function (e.g., the logistic function 1/(1+e^-netinput)). This whole procedure is
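The computation this caption describes (each unit sums its weighted inputs into a net input, then applies the logistic function) can be sketched directly; the layer sizes and weights below are illustrative toy values, not Plunkett and Marchman's actual network:

```python
import math

def logistic(x):
    # Non-linear squashing function 1 / (1 + e^-x).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each unit forms a net input (weighted sum plus bias), then squashes it.
    return [logistic(sum(w * a for w, a in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(inputs, w_hidden, b_hidden, w_out, b_out):
    hidden = layer(inputs, w_hidden, b_hidden)
    return layer(hidden, w_out, b_out)

# Toy 3-2-1 network with hand-picked weights (purely illustrative).
w_hidden = [[0.5, -0.5, 1.0], [1.0, 1.0, -1.0]]
b_hidden = [0.0, 0.0]
w_out = [[1.0, -1.0]]
b_out = [0.0]

output = forward([1, 0, 1], w_hidden, b_hidden, w_out, b_out)
# output[0] is about 0.579; every activation stays in (0, 1).
```

Training such a network (e.g., by backpropagation, as Plunkett and Marchman did) adjusts the weights so that present-tense input patterns map to the correct past-tense output patterns.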
Young, Robert M. (1970). Mind, Brain and Adaptation.   (Cited by 7 | Google | More links)

6.3b Representation in Connectionism

Bechtel, William P. (1994). Natural deduction in connectionist systems. Synthese 101 (3):433-463.   (Cited by 7 | Google | More links)
Abstract:   The relation between logic and thought has long been controversial, but has recently influenced theorizing about the nature of mental processes in cognitive science. One prominent tradition argues that to explain the systematicity of thought we must posit syntactically structured representations inside the cognitive system which can be operated upon by structure sensitive rules similar to those employed in systems of natural deduction. I have argued elsewhere that the systematicity of human thought might better be explained as resulting from the fact that we have learned natural languages which are themselves syntactically structured. According to this view, symbols of natural language are external to the cognitive processing system and what the cognitive system must learn to do is produce and comprehend such symbols. In this paper I pursue that idea by arguing that ability in natural deduction itself may rely on pattern recognition abilities that enable us to operate on external symbols rather than encodings of rules that might be applied to internal representations. To support this suggestion, I present a series of experiments with connectionist networks that have been trained to construct simple natural deductions in sentential logic. These networks not only succeed in reconstructing the derivations on which they have been trained, but in constructing new derivations that are only similar to the ones on which they have been trained
Butler, Keith (1995). Representation and computation in a deflationary assessment of connectionist cognitive science. Synthese 104 (1):71-97.   (Google | More links)
Abstract:   Connectionism provides hope for unifying work in neuroscience, computer science, and cognitive psychology. This promise has met with some resistance from Classical Computionalists, which may have inspired Connectionists to retaliate with bold, inflationary claims on behalf of Connectionist models. This paper demonstrates, by examining three intimately connected issues, that these inflationary claims made on behalf of Connectionism are wrong. This should not be construed as an attack on Connectionism, however, since the inflated claims made on its behalf have the look of cures for which there are no ailments. There is nothing wrong with Connectionism for its failure to solve illusory problems
Calvo Garzón, Francisco (2000). A connectionist defence of the inscrutability thesis. Mind and Language 15 (5):465-480.   (Google)
Calvo Garzón, Francisco (2003). Connectionist semantics and the collateral information challenge. Mind and Language 18 (1):77-94.   (Google)
Calvo Garzón, Francisco (2000). State space semantics and conceptual similarity: Reply to Churchland. Philosophical Psychology 13 (1):77-95.   (Google)
Abstract: Jerry Fodor and Ernest Lepore [(1992) Holism: a shopper's guide, Oxford: Blackwell; (1996) in R. McCauley (Ed.) The Churchlands and their critics, Cambridge: Blackwell] have launched a powerful attack against Paul Churchland's connectionist theory of semantics--also known as state space semantics. In one part of their attack, Fodor and Lepore argue that the architectural and functional idiosyncrasies of connectionist networks preclude us from articulating a notion of conceptual similarity applicable to state space semantics. Aarre Laakso and Gary Cottrell [(1998) in M. A. Gernsbacher & S. Derry (Eds) Proceedings of the 20th Annual Conference of the Cognitive Science Society, Mahwah, NJ: Erlbaum; Philosophical Psychology, 13, 47-76] have recently run a number of simulations on simple feedforward networks and applied a mathematical technique for measuring conceptual similarity in the representational spaces of those networks. Laakso and Cottrell contend that their results decisively refute Fodor and Lepore's criticisms. Paul Churchland [(1998) Journal of Philosophy, 95, 5-32] goes further. He uses Laakso and Cottrell's neurosimulations to argue that connectionism does furnish us with all we need to construct a robust theory of semantics and a robust theory of translation. In this paper I shall argue that whereas Laakso and Cottrell's neurocomputational results may provide us with a rebuttal of Fodor and Lepore's argument, Churchland's conclusion is far too optimistic. In particular, I shall try to show that connectionist modelling does not provide any objective criterion for achieving a one-to-one accurate translational mapping across networks
Cilliers, F. P. (1991). Rules and relations: Some connectionist implications for cognitive science and language. South African Journal of Philosophy 49 (May):49-55.   (Cited by 1 | Google)
Clark, Andy (1993). Associative Engines: Connectionism, Concepts, and Representational Change. MIT Press.   (Cited by 222 | Google | More links)
Abstract: As Ruben notes, the macrostrategy can allow that the distinction may also be drawn at some micro level, but it insists that descent to the micro level is ...
Clark, Andy (ms). Connectionism, nonconceptual content, and representational redescription.   (Annotation | Google)
Clark, Andy & Karmiloff-Smith, Annette (1994). The cognizer's innards: A psychological and philosophical perspective on the development of thought. Mind and Language 8 (4):487-519.   (Cited by 196 | Annotation | Google | More links)
Cummins, Robert E. (1991). The role of representation in connectionist explanation of cognitive capacities. In William Ramsey, Stephen P. Stich & D. Rumelhart (eds.), Philosophy and Connectionist Theory. Lawrence Erlbaum.   (Cited by 8 | Annotation | Google)
Cussins, Adrian (1990). The connectionist construction of concepts. In Margaret A. Boden (ed.), The Philosophy of AI. Oxford University Press.   (Cited by 107 | Annotation | Google)
Abstract:   The character of computational modelling of cognition depends on an underlying theory of representation. Classical cognitive science has exploited the syntax/semantics theory of representation that derives from logic. But this has had the consequence that the kind of psychological explanation supported by classical cognitive science is _conceptualist_: psychological phenomena are modelled in terms of relations that hold between concepts, and between the sensors/effectors and concepts. This kind of explanation is inappropriate for the Proper Treatment of Connectionism (Smolensky 1988)
Eliasmith, Chris (online). Structure without symbols: Providing a distributed account of high-level cognition.   (Google)
Abstract: There has been a long-standing debate between symbolicists and connectionists concerning the nature of representation used by human cognizers. In general, symbolicist commitments have allowed them to provide superior models of high-level cognitive function. In contrast, connectionist distributed representations are preferred for providing a description of low-level cognition. The development of Holographic Reduced Representations (HRRs) has opened the possibility of one representational medium unifying both low-level and high-level descriptions of cognition. This paper describes the relative strengths and weaknesses of symbolic and distributed representations. HRRs are shown to capture the important strengths of both types of representation. These properties of HRRs allow a rebuttal of Fodor and McLaughlin's (1990) criticism that distributed representations are not adequately structure sensitive to provide a full account of human cognition
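The representational medium the abstract discusses, Holographic Reduced Representations, binds vectors by circular convolution (following Plate's formulation). A minimal sketch, with an illustrative vector dimension and invented function names:

```python
import numpy as np

def cconv(a, b):
    """Circular convolution: the HRR binding operation."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):
    """Approximate inverse of a vector under circular convolution."""
    return np.concatenate(([a[0]], a[:0:-1]))

def cosine(a, b):
    """Cosine similarity, used to compare noisy decodings to originals."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

n = 512
rng = np.random.default_rng(0)
# Random HRR vectors: i.i.d. normal components with variance 1/n
role, filler = rng.normal(0, 1 / np.sqrt(n), size=(2, n))
trace = cconv(role, filler)               # bind role to filler
decoded = cconv(trace, involution(role))  # noisy reconstruction of filler
```

The decoded vector is only an approximation of the filler, but in high dimensions it is far more similar to the filler than to any unrelated vector, which is what lets HRRs store structured bindings in a single distributed pattern.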
Gauker, Christopher (2007). A critique of the similarity space theory of concepts. Mind and Language 22 (4):317–345.   (Google | More links)
Abstract: A similarity space is a hyperspace in which the dimensions represent various dimensions on which objects may differ. The similarity space theory of concepts is the thesis that concepts are regions of similarity spaces that are somehow realized in the brain. Proponents of such a theory of concepts include Paul Churchland and Peter Gärdenfors. This paper argues that the similarity space theory of concepts is mistaken because regions of similarity spaces cannot serve as the components of judgments. It emerges that although similarity spaces cannot model concepts, they may model a kind of nonconceptual representation
Goschke, T. & Koppelberg, Dirk (1990). Connectionism and the semantic content of internal representation. Review of International Philosophy 44 (172):87-103.   (Google)
Goschke, T. & Koppelberg, Dirk (1991). The concept of representation and the representation of concepts in connectionist models. In William Ramsey, Stephen P. Stich & D. Rumelhart (eds.), Philosophy and Connectionist Theory. Lawrence Erlbaum.   (Cited by 17 | Annotation | Google)
Hadley, Robert F. (2004). On the proper treatment of semantic systematicity. Minds and Machines 14 (2):145-172.   (Cited by 7 | Google | More links)
Abstract:   The past decade has witnessed the emergence of a novel stance on semantic representation, and its relationship to context sensitivity. Connectionist-minded philosophers, including Clark and van Gelder, have espoused the merits of viewing hidden-layer, context-sensitive representations as possessing semantic content, where this content is partially revealed via the representations' position in vector space. In recent work, Bodén and Niklasson have incorporated a variant of this view of semantics within their conception of semantic systematicity. Moreover, Bodén and Niklasson contend that they have produced experimental results which not only satisfy a kind of context-based, semantic systematicity, but which, to the degree that reality permits, effectively deal with challenges posed by Fodor and Pylyshyn (1988) and Hadley (1994a). The latter challenge involved well-defined criteria for strong semantic systematicity. This paper examines the relevant claims and experiments of Bodén and Niklasson. It is argued that their case fatally involves two fallacies of equivocation; one concerning 'semantic content' and the other concerning 'novel test sentences'. In addition, it is argued that their ultimate construal of context-sensitive semantics contains serious confusions. These confusions are also found in certain publications dealing with "latent semantic analysis". Thus, criticisms presented here have relevance beyond the work of Bodén and Niklasson
Haselager, W. F. G. (1999). On the potential of non-classical constituency. Acta Analytica 22 (22):23-42.   (Cited by 4 | Google | More links)
Hatfield, Gary (1991). Representation and rule-instantiation in connectionist systems. In Terence E. Horgan & John L. Tienson (eds.), Connectionism and the Philosophy of Mind. Kluwer.   (Cited by 11 | Annotation | Google)
Hatfield, Gary (1991). Representation in perception and cognition: Connectionist affordances. In William Ramsey, Stephen P. Stich & D. Rumelhart (eds.), Philosophy and Connectionist Theory. Lawrence Erlbaum.   (Cited by 49 | Google)
Haybron, Daniel M. (2000). The causal and explanatory role of information stored in connectionist networks. Minds and Machines 10 (3):361-380.   (Cited by 2 | Google | More links)
Abstract:   In this paper I defend the propriety of explaining the behavior of distributed connectionist networks by appeal to selected data stored therein. In particular, I argue that if there is a problem with such explanations, it is a consequence of the fact that information storage in networks is superpositional, and not because it is distributed. I then develop a 'proto-account' of causation for networks, based on an account of Andy Clark's, that shows even superpositionality does not undermine information-based explanation. Finally, I argue that the resulting explanations are genuinely informative and not vacuous
Laakso, Aarre & Cottrell, Garrison W. (2000). Content and cluster analysis: Assessing representational similarity in neural systems. Philosophical Psychology 13 (1):47-76.   (Cited by 18 | Google | More links)
Abstract: If connectionism is to be an adequate theory of mind, we must have a theory of representation for neural networks that allows for individual differences in weighting and architecture while preserving sameness, or at least similarity, of content. In this paper we propose a procedure for measuring sameness of content of neural representations. We argue that the correct way to compare neural representations is through analysis of the distances between neural activations, and we present a method for doing so. We then use the technique to demonstrate empirically that different artificial neural networks trained by backpropagation on the same categorization task, even with different representational encodings of the input patterns and different numbers of hidden units, reach states in which representations at the hidden units are similar. We discuss how this work provides a rebuttal to Fodor and Lepore's critique of Paul Churchland's state space semantics
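The comparison the abstract describes, measuring similarity of content through the distances between hidden-layer activations, can be sketched as follows. The Euclidean metric and linear correlation are assumptions for illustration, and the function names are invented:

```python
import numpy as np
from itertools import combinations

def pairwise_distances(activations):
    """Euclidean distances between all pairs of hidden-layer activation
    vectors, one vector per stimulus presented to the network."""
    return np.array([np.linalg.norm(a - b)
                     for a, b in combinations(activations, 2)])

def representational_similarity(acts_net1, acts_net2):
    """Correlate the two networks' inter-point distance profiles.

    Both networks are probed with the same stimuli in the same order.
    A high correlation means the relative layout of the stimuli in the
    two hidden-unit spaces is similar, even when the networks differ in
    weights, input encodings, or number of hidden units.
    """
    d1 = pairwise_distances(acts_net1)
    d2 = pairwise_distances(acts_net2)
    return np.corrcoef(d1, d2)[0, 1]
```

Because only relative distances enter the comparison, two hidden-unit spaces of different dimensionality can still be scored for similarity, which is the point pressed against Fodor and Lepore's critique.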
Lormand, Eric (ms). Connectionist content.   (Google)
Mandik, Pete (2003). Varieties of representation in evolved and embodied neural networks. Biology and Philosophy 18 (1):95-130.   (Cited by 6 | Google | More links)
Abstract:   In this paper I discuss one of the key issues in the philosophy of neuroscience: neurosemantics. The project of neurosemantics involves explaining what it means for states of neurons and neural systems to have representational contents. Neurosemantics thus involves issues of common concern between the philosophy of neuroscience and philosophy of mind. I discuss a problem that arises for accounts of representational content that I call 'the economy problem': the problem of showing that a candidate theory of mental representation can bear the work required within the causal economy of a mind and an organism. My approach in the current paper is to explore this and other key themes in neurosemantics through the use of computer models of neural networks embodied and evolved in virtual organisms. The models allow for the laying bare of the causal economies of entire yet simple artificial organisms, so that the relations between the neural bases of, for instance, representation in perception and memory can be regarded in the context of an entire organism. On the basis of these simulations, I argue for an account of neurosemantics adequate for the solution of the economy problem
Markic, Olga (1995). Finding the right level for connectionist representations (a critical note on Ramsey's paper). Acta Analytica 14 (14):27-35.   (Google)
O'Brien, Gerard (1989). Connectionism, analogicity and mental content. Acta Analytica 22 (22):111-31.   (Google | More links)
Abstract: In Connectionism and the Philosophy of Psychology, Horgan and Tienson (1996) argue that cognitive processes, pace classicism, are not governed by exceptionless, “representation-level” rules; they are instead the work of defeasible cognitive tendencies subserved by the non-linear dynamics of the brain’s neural networks. Many theorists are sympathetic with the dynamical characterisation of connectionism and the general (re)conception of cognition that it affords. But in all the excitement surrounding the connectionist revolution in cognitive science, it has largely gone unnoticed that connectionism adds to the traditional focus on computational processes, a new focus – one on the vehicles of mental representation, on the entities that carry content through the mind. Indeed, if Horgan and Tienson’s dynamical characterisation of connectionism is on the right track, then so intimate is the relationship between computational processes and representational vehicles, that connectionist cognitive science is committed to a resemblance theory of mental content
O'Brien, Gerard & Opie, Jonathan (2004). Notes toward a structuralist theory of mental representation. In Hugh Clapin (ed.), Representation in Mind. Elsevier.   (Google)
Abstract: Any creature that must move around in its environment to find nutrients and mates, in order to survive and reproduce, faces the problem of sensorimotor control. A solution to this problem requires an on-board control mechanism that can shape the creature’s behaviour so as to render it “appropriate” to the conditions that obtain. There are at least three ways in which such a control mechanism can work, and Nature has exploited them all. The first and most basic way is for a creature to bump into the things in its environment, and then, depending on what has been encountered, seek to modify its behaviour accordingly. Such an approach is risky, however, since some things in the environment are distinctly unfriendly. A second and better way, therefore, is for a creature to exploit ambient forms of energy that carry information about the distal structure of the environment. This is an improvement on the first method since it enables the creature to respond to the surroundings without actually bumping into anything. Nonetheless, this second method also has its limitations, one of which is that the information conveyed by such ambient energy is often impoverished, ambiguous and intermittent
Place, Ullin T. (1989). Toward a connectionist version of the causal theory of reference. Acta Analytica 4 (5):71-97.   (Google)
Potrc, Matjaz (1999). Morphological content. Acta Analytica 22 (22):133-149.   (Google)
Prinz, Jesse J. (2006). Empiricism and state-space semantics. In Brian L Keeley (ed.), Paul Churchland. Cambridge: Cambridge University Press.   (Google)
Ramsey, William (1997). Do connectionist representations earn their explanatory keep? Mind and Language 12 (1):34-66.   (Cited by 16 | Annotation | Google | More links)
Ramsey, William (1995). Rethinking distributed representation. Acta Analytica 10 (14):9-25.   (Cited by 1 | Google)
Schopman, Joop & Shawky, A. (1999). Remarks on the impact of connectionism on our thinking about concepts. In Peter Millican & A. Clark (eds.), Connectionism, Concepts and Folk Psychology. Oxford University Press.   (Google)
Shea, Nicholas (2007). Content and its vehicles in connectionist systems. Mind and Language 22 (3):246–269.   (Google | More links)
Abstract: This paper advocates explicitness about the type of entity to be considered as content- bearing in connectionist systems; it makes a positive proposal about how vehicles of content should be individuated; and it deploys that proposal to argue in favour of representation in connectionist systems. The proposal is that the vehicles of content in some connectionist systems are clusters in the state space of a hidden layer. Attributing content to such vehicles is required to vindicate the standard explanation for some classificatory networks’ ability to generalise to novel samples their correct classification of the samples on which they were trained
Stone, Tony & Davies, Martin (2000). Autonomous psychology and the moderate neuron doctrine. Behavioral and Brain Sciences 22 (5):849-850.   (Cited by 4 | Google | More links)
Abstract: Two notions of autonomy are distinguished. The respective denials that psychology is autonomous from neurobiology are neuron doctrines, moderate and radical. According to the moderate neuron doctrine, inter-disciplinary interaction need not aim at reduction. It is proposed that it is more plausible that there is slippage from the moderate to the radical neuron doctrine than that there is confusion between the radical neuron doctrine and the trivial version.
Tiffany, Evan (1999). Semantics San Diego style. Journal of Philosophy 96 (8):416-429.   (Cited by 6 | Google | More links)
Tye, Michael (1987). Representation in pictorialism and connectionism. Southern Journal of Philosophy Supplement 26:163-184.   (Annotation | Google)
van Gelder, Tim (1999). Distributed vs. local representation. In R.A. Wilson & F.C. Keil (eds.), The MIT Encyclopedia of the Cognitive Sciences. MIT Press.   (Cited by 6 | Google)
Abstract: ... been to define various notions of distribution in terms of structures of correspondence between the represented items and the representational resources (e.g., van Gelder 1992). This approach may be misguided; the essence of this alternative category of representation might be some other property entirely. For example, Haugeland (1991) has suggested ... represented by one and the same distributed pattern (Murdock 1979). For example, it is standard in feedforward connectionist networks for one and the same set of synaptic weights to represent many associations between input and output. • Equipotentiality In some cases, an item is represented by
van Gelder, Tim (1990). Why distributed representation is inherently non-symbolic. In G. Dorffner (ed.), Konnektionismus in Artificial Intelligence Und Kognitionsforschung. Berlin: Springer-Verlag.   (Cited by 4 | Google)
Abstract: There are many conflicting views concerning the nature of distributed representation, its compatibility or otherwise with symbolic representation, and its importance in characterizing the nature of connectionist models and their relationship to more traditional symbolic approaches to understanding cognition. Many have simply assumed that distribution is merely an implementation issue, and that symbolic mechanisms can be designed to take advantage of the virtues of distribution if so desired. Others, meanwhile, see the use of distributed representation as marking a fundamental difference between the two approaches. One reason for this diversity of opinion is the fact that the relevant notions - especially that of
van Gelder, Tim (1991). What is the D in PDP? In William Ramsey, Stephen P. Stich & D. Rumelhart (eds.), Philosophy and Connectionist Theory. Lawrence Erlbaum.   (Cited by 65 | Annotation | Google)
Von Eckardt, Barbara (2003). The explanatory need for mental representations in cognitive science. Mind and Language 18 (4):427-439.   (Cited by 1 | Google | More links)
Abstract:   Ramsey (1997) argues that connectionist representations 'do not earn their explanatory keep'. The aim of this paper is to examine the argument Ramsey gives to support that conclusion. In doing so, I identify two kinds of explanatory need—need relative to a possible explanation and need relative to a true explanation and argue that internal representations are not needed for either connectionist or nonconnectionist possible explanations but that it is quite likely that they are needed for true explanations. However, to show that the latter is the case requires more than a consideration of the form of explanation involved
Waskan, Jonathan A. (2001). A critique of connectionist semantics. Connection Science 13 (3):277-292.   (Google | More links)

6.3c Connectionism and Eliminativism

Bickle, John (1993). Connectionism, eliminativism, and the semantic view of theories. Erkenntnis 39 (3):359-382.   (Cited by 5 | Annotation | Google | More links)
Abstract:   Recently some philosophers have urged that connectionist artificial intelligence is (potentially) eliminative for the propositional attitudes of folk psychology. At the same time, however, these philosophers have also insisted that since philosophy of science has failed to provide criteria distinguishing ontologically retentive from eliminative theory changes, the resulting eliminativism is not principled. Application of some resources developed within the semantic view of scientific theories, particularly recent formal work on the theory reduction relation, reveals these philosophers to be wrong in this second contention, yet by and large correct in the first
Botterill, George (1994). Beliefs, functionally discrete states, and connectionist networks. British Journal for the Philosophy of Science 45 (3):899-906.   (Cited by 2 | Annotation | Google | More links)
Chemero, Anthony (2007). Asking what's inside the head: Neurophilosophy meets the extended mind. Minds and Machines 17 (3).   (Google | More links)
Abstract: In their historical overview of cognitive science, Bechtel, Abrahamsen and Graham (1999) describe the field as expanding in focus beginning in the mid-1980s. The field had spent the previous 25 years on internalist, high-level GOFAI ("good old fashioned artificial intelligence" [Haugeland 1985]), and was finally moving "outwards into the environment and downwards into the brain" (Bechtel et al., 1999, p. 75). One important force behind the downward movement was Patricia Churchland's Neurophilosophy (1986). This book began a movement bearing its name, one that truly came of age in 1999 when Kathleen Akins won a million-dollar fellowship to begin the McDonnell Project in Philosophy and the Neurosciences. The McDonnell Project put neurophilosophy at the forefront of philosophy of mind and cognitive science, yielding proliferating articles, conferences, special journal issues and books. In two major new books, neurophilosophers Patricia Churchland (2002) and John Bickle (2003) clearly feel this newfound prominence: Churchland mocks those who do not apply findings in neuroscience to philosophical problems as "no-brainers"; Bickle mocks anyone with traditional philosophical concerns, including "naturalistic philosophers of mind" and other neurophilosophers
Clark, Andy (1989). Beyond eliminativism. Mind and Language 4 (4):251-79.   (Cited by 5 | Annotation | Google | More links)
Clapin, Hugh (1991). Connectionism isn't magic. Minds and Machines 1 (2):167-84.   (Cited by 3 | Annotation | Google | More links)
Abstract:   Ramsey, Stich and Garon's recent paper Connectionism, Eliminativism, and the Future of Folk Psychology claims a certain style of connectionism to be the final nail in the coffin of folk psychology. I argue that their paper fails to show this, and that the style of connectionism they illustrate can in fact supplement, rather than compete with, the claims of a theory of cognition based in folk psychology's ontology. Ramsey, Stich and Garon's argument relies on the lack of easily identifiable symbols inside the connectionist network they discuss, and they suggest that the existence of a system which behaves in a cognitively interesting way, but which cannot be explained by appeal to internal symbol processing, falsifies central assumptions of folk psychology. My claim is that this argument is flawed, and that the theorist need not discard folk psychology in order to accept that the network illustrated exhibits cognitively interesting behaviour, even if it is conceded that symbols cannot be readily identified within the network
Clark, Andy (1990). Connectionist minds. Proceedings of the Aristotelian Society 90:83-102.   (Cited by 10 | Annotation | Google)
Davies, Martin (1991). Concepts, connectionism, and the language of thought. In William Ramsey, Stephen P. Stich & D. Rumelhart (eds.), Philosophy and Connectionist Theory. Lawrence Erlbaum.   (Annotation | Google)
Egan, Frances (1995). Folk psychology and cognitive architecture. Philosophy of Science 62 (2):179-96.   (Cited by 7 | Google | More links)
Forster, M. & Saidel, Eric (1994). Connectionism and the fate of folk psychology. Philosophical Psychology 7 (4):437-52.   (Cited by 6 | Annotation | Google | More links)
Horgan, Terence E. & Tienson, John L. (1995). Connectionism and the commitments of folk psychology. Philosophical Perspectives 9:127-52.   (Cited by 4 | Google | More links)
Macdonald, Cynthia (1995). Connectionism and eliminativism. In C. Macdonald & Graham F. Macdonald (eds.), Connectionism: Debates on Psychological Explanation. Cambridge: Blackwell.   (Google)
O'Brien, Gerard (1991). Is connectionism commonsense? Philosophical Psychology 4 (2):165-78.   (Google | More links)
Abstract: In this paper I critically examine the line of reasoning that has recently appeared in the literature that connects connectionism with eliminativism. This line of reasoning has it that if connectionist models turn out accurately to characterize our cognition, then beliefs, desires and the other intentional entities of commonsense psychology will be eliminated from our theoretical ontology. In complete contrast I argue (1) that not only is this line of reasoning mistaken about the eliminativist tendencies of connectionist models, but (2) that these models have the potential to provide a more robust vindication of commonsense psychology than classical computational models
O'Leary-Hawthorne, John (1994). On the threat of eliminativism. Philosophical Studies 74 (3):325-46.   (Annotation | Google)
Place, Ullin T. (1992). Eliminative connectionism: Its implications for a return to an empiricist/behaviorist linguistics. Behavior and Philosophy 20 (1):21-35.   (Google)
Ramsey, William; Stich, Stephen P. & Garon, J. (1991). Connectionism, eliminativism, and the future of folk psychology. In William Ramsey, Stephen P. Stich & D. Rumelhart (eds.), Philosophy and Connectionist Theory. Lawrence Erlbaum.   (Cited by 85 | Annotation | Google | More links)
Ramsey, William (1994). Distributed representation and causal modularity: A rejoinder to Forster and Saidel. Philosophical Psychology 7 (4):453-61.   (Cited by 2 | Annotation | Google)
Abstract: In “Connectionism and the fate of folk psychology”, Forster and Saidel argue that the central claim of Ramsey, Stich and Garon (1991)—that distributed connectionist models are incompatible with the causal discreteness of folk psychology—is mistaken. To establish their claim, they offer an intriguing model which allegedly shows how distributed representations can function in a causally discrete manner. They also challenge our position regarding the projectibility of folk psychology. In this essay, I offer a response to their account and show how their model fails to demonstrate that our original argument was mistaken. While I will discuss several difficulties with their model, my primary criticism will be that the features of their model that are causally discrete are not truly distributed, while the features that are distributed are not really discrete. Concerning the issue of projectibility, I am more inclined to agree with Forster and Saidel and I offer a revised account of what we should have said originally
Skokowski, Paul G. (ms). Belief in networks.   (Google)
Smolensky, Paul (1995). On the projectable predicates of connectionist psychology: A case for belief. In C. Macdonald & Graham F. Macdonald (eds.), Connectionism: Debates on Psychological Explanation. Blackwell.   (Cited by 4 | Google)
Stich, Stephen P. & Warfield, Ted A. (1995). Reply to Clark and Smolensky: Do connectionist minds have beliefs? In C. Macdonald & Graham F. Macdonald (eds.), Connectionism: Debates on Psychological Explanation. Blackwell.   (Cited by 5 | Google)
Von Eckhardt, Barbara (2004). Connectionism and the propositional attitudes. In Christina E. Erneling & David Martel Johnson (eds.), Mind As a Scientific Object. Oxford University Press.   (Google)

6.3d The Connectionist/Classical Debate

Adams, Frederick R.; Aizawa, Kenneth & Fuller, Gary (1992). Rules in programming languages and networks. In J. Dinsmore (ed.), The Symbolic and Connectionist Paradigms: Closing the Gap. Lawrence Erlbaum.   (Cited by 3 | Annotation | Google | More links)
Aizawa, Kenneth (1994). Representations without rules, connectionism, and the syntactic argument. Synthese 101 (3):465-92.   (Cited by 10 | Google | More links)
Abstract:   Terry Horgan and John Tienson have suggested that connectionism might provide a framework within which to articulate a theory of cognition according to which there are mental representations without rules (RWR) (Horgan and Tienson 1988, 1989, 1991, 1992). In essence, RWR states that cognition involves representations in a language of thought, but that these representations are not manipulated by the sort of rules that have traditionally been posited. In the development of RWR, Horgan and Tienson attempt to forestall a particular line of criticism, the Syntactic Argument, which would show RWR to be inconsistent with connectionism. In essence, the argument claims that the node-level rules of connectionist networks, along with the semantic interpretations assigned to patterns of activation, serve to determine a set of representation-level rules incompatible with the RWR conception of cognition. The present paper argues that the Syntactic Argument can be made to show that RWR is inconsistent with connectionism
Aydede, Murat (1995). Connectionism and the language of thought. CSLI Technical Report.   (Cited by 4 | Google)
Abstract: Fodor and Pylyshyn's (F&P) critique of connectionism has posed a challenge to connectionists: Adequately explain such nomological regularities as systematicity and productivity without postulating a "language of thought'' (LOT). Some connectionists declined to meet the challenge on the basis that the alleged regularities are somehow spurious. Some, like Smolensky, however, took the challenge very seriously, and attempted to meet it by developing models that are supposed to be non-classical
beim Graben, Peter (2004). Incompatible implementations of physical symbol systems. Mind and Matter 2 (2):29-51.   (Google)
Abstract: Classical cognitive science assumes that intelligently behaving systems must be symbol processors that are implemented in physical systems such as brains or digital computers. By contrast, connectionists suppose that symbol manipulating systems could be approximations of neural network dynamics. Both classicists and connectionists argue that symbolic computation and subsymbolic dynamics are incompatible, though on different grounds. While classicists say that connectionist architectures and symbol processors are either incompatible or the former are mere implementations of the latter, connectionists reply that neural networks might be incompatible with symbol processors because the latter cannot be implementations of the former. In this contribution, the notions of 'incompatibility' and 'implementation' will be criticized to show that they must be revised in the context of the dynamical system approach to cognitive science. Examples of implementations of symbol processors that are incompatible with respect to contextual topologies will be discussed
Bringsjord, Selmer (1991). Is the connectionist-logicist debate one of ai's wonderful red herrings? Journal of Theoretical and Experimental Artificial Intelligence 3:319-49.   (Cited by 16 | Annotation | Google | More links)
Broadbent, D. (1985). A question of levels: Comment on McClelland and Rumelhart. Journal of Experimental Psychology 114:189-92.   (Cited by 29 | Annotation | Google)
Chandrasekaran, B.; Goel, A. & Allemang, D. (1988). Connectionism and information-processing abstractions. AI Magazine 24.   (Cited by 15 | Annotation | Google | More links)
Christensen, Wayne D. & Tomassi, Luca (2006). Neuroscience in context: The new flagship of the cognitive sciences. Biological Theory 1 (1):78-83.   (Google | More links)
Corbi, Josep E. (1993). Classical and connectionist models: Levels of description. Synthese 95 (2):141-68.   (Google)
Davies, Martin (1991). Concepts, connectionism, and the language of thought. In W Ramsey, Stephen P. Stich & D. Rumelhart (eds.), Philosophy and Connectionist Theory. Hillsdale, NJ: Lawrence Erlbaum Associates.   (Cited by 40 | Google)
Abstract: The aim of this paper is to demonstrate a _prima facie_ tension between our commonsense conception of ourselves as thinkers and the connectionist programme for modelling cognitive processes. The language of thought hypothesis plays a pivotal role. The connectionist paradigm is opposed to the language of thought; and there is an argument for the language of thought that draws on features of the commonsense scheme of thoughts, concepts, and inference. Most of the paper (Sections 3-7) is taken up with the argument for the language of thought hypothesis. The argument for an opposition between connectionism and the language of thought comes towards the end (Section 8), along with some discussion of the potential eliminativist consequences (Sections 9 and
Dawson, Michael R. W.; Medler, D. A. & Berkeley, Istvan S. N. (1997). PDP networks can provide models that are not mere implementations of classical theories. Philosophical Psychology 10 (1):25-40.   (Cited by 17 | Google)
Abstract: There is widespread belief that connectionist networks are dramatically different from classical or symbolic models. However, connectionists rarely test this belief by interpreting the internal structure of their nets. A new approach to interpreting networks was recently introduced by Berkeley et al. (1995). The current paper examines two implications of applying this method: (1) that the internal structure of a connectionist network can have a very classical appearance, and (2) that this interpretation can provide a cognitive theory that cannot be dismissed as a mere implementation
Dennett, Daniel C. (1991). Mother nature versus the walking encyclopedia. In William Ramsey, Stephen P. Stich & D. Rumelhart (eds.), Philosophy and Connectionist Theory. Lawrence Erlbaum.   (Cited by 23 | Annotation | Google | More links)
Abstract: In 1982, Feldman and Ballard published "Connectionist models and their properties" in Cognitive Science , helping to focus attention on a family of similarly inspired research strategies just then under way, by giving the family a name: "connectionism." Now, seven years later, the connectionist nation has swelled to include such subfamilies as "PDP" and "neural net models." Since the ideological foes of connectionism are keen to wipe it out in one fell swoop aimed at its "essence", it is worth noting the diversity of not only the models but also the aspirations of the modelers. There is no good reason to suppose that they all pledge allegiance to any one principle...
Dennett, Daniel C. (1986). The logical geography of computational approaches: A view from the east pole. In Myles Brand & Robert M. Harnish (eds.), The Representation of Knowledge and Belief. University of Arizona Press.   (Cited by 21 | Annotation | Google)
DeVries, Willem A. (1993). Who sees with equal eye,... Atoms or systems into ruin hurl'd? Philosophical Studies 71 (2):191-200.   (Google | More links)
Dinsmore, J. (ed.) (1992). The Symbolic and Connectionist Paradigms: Closing the Gap. Lawrence Erlbaum.   (Cited by 18 | Google)
Abstract: This book records the thoughts of researchers -- from both computer science and philosophy -- on resolving the debate between the symbolic and connectionist...
Dyer, Michael G. (1991). Connectionism versus symbolism in high-level cognition. In Terence E. Horgan & John L. Tienson (eds.), Connectionism and the Philosophy of Mind. Kluwer.   (Cited by 7 | Google)
Eliasmith, Chris (2000). Is the brain analog or digital? Cognitive Science Quarterly 1 (2):147-170.   (Google | More links)
Abstract: It will always remain a remarkable phenomenon in the history of philosophy, that there was a time, when even mathematicians, who at the same time were philosophers, began to doubt, not of the accuracy of their geometrical propositions so far as they concerned space, but of their objective validity and the applicability of this concept itself, and of all its corollaries, to nature. They showed much concern whether a line in nature might not consist of physical points, and consequently that true space in the object might consist of simple [discrete] parts, while the space which the geometer has in his mind [being continuous] cannot be such
Eliasmith, Chris & Clark, Andy (2002). Philosophical issues in brain theory and connectionism. In M. Arbib (ed.), The Handbook of Brain Theory and Neural Networks. MIT Press.   (Google | More links)
Abstract: In this article, we highlight three questions: (1) Does human cognition rely on structured internal representations? (2) How should theories, models and data relate? (3) In what ways might embodiment, action and dynamics matter for understanding the mind and the brain?
Fodor, Jerry A. & Pylyshyn, Zenon W. (1988). Connectionism and cognitive architecture. Cognition 28:3-71.   (Cited by 1496 | Annotation | Google | More links)
Abstract: This paper explores the difference between Connectionist proposals for cognitive architecture and the sorts of models that have traditionally been assumed in cognitive science. We claim that the major distinction is that, while both Connectionist and Classical architectures postulate representational mental states, the latter but not the former are committed to a symbol-level of representation, or to a ‘language of thought’: i.e., to representational states that have combinatorial syntactic and semantic structure. Several arguments for combinatorial structure in mental representations are then reviewed. These include arguments based on the ‘systematicity’ of mental representation: i.e., on the fact that cognitive capacities always exhibit certain symmetries, so that the ability to entertain a given thought implies the ability to entertain thoughts with semantically related contents. We claim that such arguments make a powerful case that mind/brain architecture is not Connectionist at the cognitive level. We then consider the possibility that Connectionism may provide an account of the neural (or ‘abstract neurological’) structures in which Classical cognitive architecture is implemented. We survey a number of the standard arguments that have been offered in favor of Connectionism, and conclude that they are coherent only on this interpretation
Garson, James W. (1994). Cognition without classical architecture. Synthese 100 (2):291-306.   (Cited by 10 | Google | More links)
Abstract:   Fodor and Pylyshyn (1988) argue that any successful model of cognition must use classical architecture; it must depend upon rule-based processing sensitive to constituent structure. This claim is central to their defense of classical AI against the recent enthusiasm for connectionism. Connectionist nets, they contend, may serve as theories of the implementation of cognition, but never as proper theories of psychology. Connectionist models are doomed to describing the brain at the wrong level, leaving the classical view to account for the mind. This paper considers whether recent results in connectionist research weigh against Fodor and Pylyshyn's thesis. The investigation will force us to develop criteria for determining exactly when a net is capable of systematic processing. Fodor and Pylyshyn clearly intend their thesis to affect the course of research in psychology. I will argue that when systematicity is defined in a way that makes the thesis relevant in this way, the thesis is challenged by recent progress in connectionism
Garson, James W. (1994). No representations without rules: The prospects for a compromise between paradigms in cognitive science. Mind and Language 9 (1):25-37.   (Cited by 7 | Google | More links)
Garson, James W. (1991). What connectionists cannot do: The threat to classical AI. In Terence E. Horgan & John L. Tienson (eds.), Connectionism and the Philosophy of Mind. Kluwer.   (Cited by 1 | Annotation | Google)
Guarini, Marcello (2001). A defence of connectionism against the "syntactic" argument. Synthese 128 (3):287-317.   (Cited by 2 | Google | More links)
Abstract:   In "Representations without Rules, Connectionism and the Syntactic Argument", Kenneth Aizawa argues against the view that connectionist nets can be understood as processing representations without the use of representation-level rules, and he provides a positive characterization of how to interpret connectionist nets as following representation-level rules. He takes Terry Horgan and John Tienson to be the targets of his critique. The present paper marshals functional and methodological considerations, gleaned from the practice of cognitive modelling, to argue against Aizawa's characterization of how connectionist nets may be understood as making use of representation-level rules
Horgan, Terence E. & Tienson, John L. (2006). Cognition needs syntax but not rules. In Robert J. Stainton (ed.), Contemporary Debates in Cognitive Science. Malden MA: Blackwell Publishing.   (Cited by 1 | Google)
Horgan, Terence E. & Tienson, John L. (1994). Representations don't need rules: Reply to James Garson. Mind and Language 9 (1):1-24.   (Cited by 6 | Google)
Horgan, Terence E. & Tienson, John L. (1989). Representation without rules. Philosophical Perspectives 17 (1):147-74.   (Annotation | Google)
Horgan, Terence E. & Tienson, John L. (1987). Settling into a new paradigm. Southern Journal of Philosophy Supplement 26:97-113.   (Annotation | Google)
Lormand, Eric (1991). Classical and Connectionist Models. Dissertation, MIT.   (Google)
Lormand, Eric (ms). Connectionist languages of thought.   (Cited by 1 | Google)
Abstract: Fodor and Pylyshyn (1988) have presented an influential argument to the effect that any viable connectionist account of human cognition must implement a language of thought. Their basic strategy is to argue that connectionist models that do not implement a language of thought fail to account for the systematic relations among propositional attitudes. Several critics of the LOT hypothesis have tried to pinpoint flaws in Fodor and Pylyshyn’s argument (Smolensky 1989; Clark, 1989; Chalmers, 1990; Braddon-Mitchell and Fitzpatrick, 1990). One thing I will try to show is that the argument can be rescued from these criticisms. (Score: LOT 1, Visitors 0.) However, I agree that the argument fails, and I will provide a new account of how it goes wrong. (The score becomes tied.) Of course, the failure of Fodor and Pylyshyn’s argument does not mean that their conclusion is false. Consequently, some connectionist criticisms of Fodor and Pylyshyn’s article take the form of direct counterexamples to their conclusion (Smolensky 1989; van Gelder, 1990; Chalmers, 1990). I will argue, however, that Fodor and Pylyshyn’s conclusion survives confrontation with the alleged counterexamples. Finally, I provide an alternative argument that may succeed where Fodor and Pylyshyn’s fails. (Final Score: LOT 3, Visitors 1.)
Markic, Olga (1999). Connectionism and the language of thought: The cross-context stability of representations. Acta Analytica 22 (22):43-57.   (Cited by 1 | Google)
McClelland, J. L. & Rumelhart, D. E. (1985). Levels indeed! A response to Broadbent. Journal of Experimental Psychology 114:193-7.   (Annotation | Google)
McLaughlin, Brian P. & Warfield, F. (1994). The allure of connectionism reexamined. Synthese 101 (3):365-400.   (Cited by 11 | Annotation | Google | More links)
Abstract:   There is currently a debate over whether cognitive architecture is classical or connectionist in nature. One finds the following three comparisons between classical architecture and connectionist architecture made in the pro-connectionist literature in this debate: (1) connectionist architecture is neurally plausible and classical architecture is not; (2) connectionist architecture is far better suited to model pattern recognition capacities than is classical architecture; and (3) connectionist architecture is far better suited to model the acquisition of pattern recognition capacities by learning than is classical architecture. If true, (1)–(3) would yield a compelling case against the view that cognitive architecture is classical, and would offer some reason to think that cognitive architecture may be connectionist. We first present the case for (1)–(3) in the very words of connectionist enthusiasts. We then argue that the currently available evidence fails to support any of (1)–(3)
Rey, Georges (1991). An explanatory budget for connectionism and eliminativism. In Terence E. Horgan & John L. Tienson (eds.), Connectionism and the Philosophy of Mind. Kluwer.   (Cited by 11 | Annotation | Google)
Schneider, Susan (2009). The language of thought. In John Symons & Paco Calvo (eds.), Routledge Companion to Philosophy of Psychology. Routledge.   (Google)
Abstract: According to the language of thought (or
Schneider, Susan (forthcoming). The nature of primitive symbols in the language of thought. Mind and Language.   (Google | More links)
Abstract: This paper provides a theory of the nature of symbols in the language of thought (LOT). My discussion consists in three parts. In part one, I provide three arguments for the individuation of primitive symbols in terms of total computational role. The first of these arguments claims that Classicism requires that primitive symbols be typed in this manner; no other theory of typing will suffice. The second argument contends that without this manner of symbol individuation, there will be computational processes that fail to supervene on syntax, together with the rules of composition and the computational algorithms. The third argument says that cognitive science needs a natural kind that is typed by total computational role. Otherwise, either cognitive science will be incomplete, or its laws will have counterexamples. Then, part two defends this view from a criticism, offered by both Jerry Fodor and Jesse Prinz, who respond to my view with the charge that because the types themselves are individuated
ter Hark, Michel (1995). Connectionism, behaviourism, and the language of thought. In Cognitive Patterns in Science and Common Sense. Amsterdam: Rodopi.   (Google)

6.3e Subsymbolic Computation

Berkeley, Istvan S. N. (2006). Moving the goal posts: A reply to Dawson and Piercey. Minds and Machines 16 (4):471-478.   (Google | More links)
Abstract: Berkeley [Minds Machines 10 (2000) 1] described a methodology that showed the subsymbolic nature of an artificial neural network system that had been trained on a logic problem, originally described by Bechtel and Abrahamsen [Connectionism and the mind. Blackwells, Cambridge, MA, 1991]. It was also claimed in the conclusion of this paper that the evidence was suggestive that the network might, in fact, count as a symbolic system. Dawson and Piercey [Minds Machines 11 (2001) 197] took issue with this latter claim. They described some lesioning studies that they argued showed that Berkeley’s (2000) conclusions were premature. In this paper, these lesioning studies are replicated and it is shown that the effects that Dawson and Piercey rely upon for their argument are merely an artifact of a threshold function they chose to employ. When a threshold function much closer to that deployed in the original studies is used, the significant effects disappear
Berkeley, Istvan S. N. (2000). What the #$*%! Is a subsymbol? Minds and Machines 10 (1):1-13.   (Cited by 5 | Google | More links)
Abstract:   In 1988, Smolensky proposed that connectionist processing systems should be understood as operating at what he termed the 'subsymbolic' level. Subsymbolic systems should be understood by comparing them to symbolic systems, in Smolensky's view. Up until recently, there have been real problems with analyzing and interpreting the operation of connectionist systems which have undergone training. However, recently published work on a network trained on a set of logic problems originally studied by Bechtel and Abrahamsen (1991) seems to offer the potential to provide a detailed, empirically based answer to questions about the nature of subsymbols. In this paper, a network analysis procedure and the results obtained using it are discussed. This provides the basis for an insight into the nature of subsymbols, which is surprising
Chalmers, David J. (1992). Subsymbolic computation and the chinese room. In J. Dinsmore (ed.), The Symbolic and Connectionist Paradigms: Closing the Gap. Lawrence Erlbaum.   (Cited by 29 | Annotation | Google | More links)
Abstract: More than a decade ago, philosopher John Searle started a long-running controversy with his paper “Minds, Brains, and Programs” (Searle, 1980a), an attack on the ambitious claims of artificial intelligence (AI). With his now famous _Chinese Room_ argument, Searle claimed to show that despite the best efforts of AI researchers, a computer could never recreate such vital properties of human mentality as intentionality, subjectivity, and understanding. The AI research program is based on the underlying assumption that all important aspects of human cognition may in principle be captured in a computational model. This assumption stems from the belief that beyond a certain level, implementational details are irrelevant to cognition. According to this belief, neurons, and biological wetware in general, have no preferred status as the substrate for a mind. As it happens, the best examples of minds we have at present have arisen from a carbon-based substrate, but this is due to constraints of evolution and possibly historical accidents, rather than to an absolute metaphysical necessity. As a result of this belief, many cognitive scientists have chosen to focus not on the biological substrate of the mind, but instead on the abstract causal _structure_ that the mind embodies (at an appropriate level of abstraction). The view that it is abstract causal structure that is essential to mentality has been an implicit assumption of the AI research program since Turing (1950), but was first articulated explicitly, in various forms, by Putnam (1960), Armstrong (1970) and Lewis (1970), and has become known as _functionalism_. From here, it is a very short step to _computationalism_, the view that computational structure is what is important in capturing the essence of mentality. This step follows from a belief that any abstract causal structure can be captured computationally: a belief made plausible by the Church–Turing Thesis, which articulates the power
Clark, Andy (1993). Superpositional connectionism: A reply to Marinov. Minds and Machines 3 (3):271-81.   (Cited by 2 | Google | More links)
Abstract:   Marinov's critique, I argue, is vitiated by its failure to recognize the distinctive role of superposition within the distributed connectionist paradigm. The use of so-called subsymbolic distributed encodings alone is not, I agree, enough to justify treating distributed connectionism as a distinctive approach. It has always been clear that microfeatural decomposition is both possible and actual within the confines of recognizably classical approaches. When such approaches also involve statistically-driven learning algorithms — as in the case of ID3 — the fundamental differences become even harder to spot. To see them, it is necessary to consider not just the nature of an acquired input-output function but the nature of the representational scheme underlying it. Differences between such schemes make themselves best felt outside the domain of immediate problem solving. It is in the more extended contexts of performance DURING learning and cognitive change as a result of SUBSEQUENT training on new tasks (or simultaneous training on several tasks) that the effects of superpositional storage techniques come to the fore. I conclude that subsymbols, distribution and statistically driven learning alone are indeed not of the essence. But connectionism is not just about subsymbols and distribution. It is about the generation of whole subsymbol SYSTEMS in which multiple distributed representations are created and superposed
Cleeremans, Axel (1998). The other hard problem: How to bridge the gap between subsymbolic and symbolic cognition. Behavioral and Brain Sciences 21 (1):22-23.   (Google | More links)
Abstract: The constructivist notion that features are purely functional is incompatible with the classical computational metaphor of mind. I suggest that the discontent expressed by Schyns, Goldstone and Thibaut about fixed-features theories of categorization reflects the growing impact of connectionism, and show how their perspective is similar to recent research on implicit learning, consciousness, and development. A hard problem remains, however: How to bridge the gap between subsymbolic and symbolic cognition
Hofstadter, Douglas R. (1983). Artificial intelligence: Subcognition as computation. In Fritz Machlup (ed.), The Study of Information: Interdisciplinary Messages. Wiley.   (Cited by 12 | Annotation | Google)
Marinov, Marin (1993). On the spuriousness of the symbolic/subsymbolic distinction. Minds and Machines 3 (3):253-70.   (Cited by 2 | Annotation | Google | More links)
Abstract:   The article criticises the attempt to establish connectionism as an alternative theory of human cognitive architecture through the introduction of the symbolic/subsymbolic distinction (Smolensky, 1988). The reasons for the introduction of this distinction are discussed and found to be unconvincing. It is shown that the brittleness problem has been solved for a large class of symbolic learning systems, e.g. the class of top-down induction of decision-trees (TDIDT) learning systems. Also, the process of articulating expert knowledge in rules seems quite practical for many important domains, including common sense knowledge. The article discusses several experimental comparisons between TDIDT systems and artificial neural networks using the error backpropagation algorithm (ANNs using BP). The properties of one of the TDIDT systems, ID3 (Quinlan, 1986a), are examined in detail. It is argued that the differences in performance between ANNs using BP and TDIDT systems reflect slightly different inductive biases but are not systematic; these differences do not support the view that symbolic and subsymbolic systems are fundamentally incompatible. It is concluded that the symbolic/subsymbolic distinction is spurious. It cannot establish connectionism as an alternative cognitive architecture
Rosenberg, Jay F. (1990). Treating connectionism properly: Reflections on Smolensky. Psychological Research 52.   (Cited by 5 | Annotation | Google)
Smolensky, Paul (1987). Connectionist AI, symbolic AI, and the brain. AI Review 1:95-109.   (Cited by 18 | Annotation | Google)
Smolensky, Paul (1988). On the proper treatment of connectionism. Behavioral and Brain Sciences 11:1-23.   (Cited by 902 | Annotation | Google)

6.3f Philosophy of Connectionism, Misc

Abrahamsen, Adele A. (1993). Cognizers' innards and connectionist nets: A holy alliance? Mind and Language 8 (4):520-530.   (Cited by 2 | Google | More links)
Aizawa, Kenneth (1999). Connectionist rules: A rejoinder to Horgan and Tienson's connectionism and the philosophy of psychology. Acta Analytica 22 (22):59-85.   (Cited by 3 | Google)
Bechtel, William P. & Abrahamsen, Adele A. (1990). Beyond the exclusively propositional era. Synthese 82 (2):223-53.   (Cited by 9 | Annotation | Google | More links)
Abstract:   Contemporary epistemology has assumed that knowledge is represented in sentences or propositions. However, a variety of extensions and alternatives to this view have been proposed in other areas of investigation. We review some of these proposals, focusing on (1) Ryle's notion of knowing how and Hanson's and Kuhn's accounts of theory-laden perception in science; (2) extensions of simple propositional representations in cognitive models and artificial intelligence; (3) the debate concerning imagistic versus propositional representations in cognitive psychology; (4) recent treatments of concepts and categorization which reject the notion of necessary and sufficient conditions; and (5) parallel distributed processing (connectionist) models of cognition. This last development is especially promising in providing a flexible, powerful means of representing information nonpropositionally, and carrying out at least simple forms of inference without rules. Central to several of the proposals is the notion that much of human cognition might consist in pattern recognition rather than manipulation of rules and propositions
Bechtel, William P. (1988). Connectionism and rules and representation systems: Are they compatible? Philosophical Psychology 1 (1):5-16.   (Cited by 43 | Annotation | Google)
Abstract: The introduction of connectionist or parallel distributed processing (PDP) systems to model cognitive functions has raised the question of the possible relations between these models and traditional information processing models which employ rules to manipulate representations. After presenting a brief account of PDP models and two ways in which they are commonly interpreted by those seeking to use them to explain cognitive functions, I present two ways one might relate these models to traditional information processing models and so not totally repudiate the tradition of modelling cognition through systems of rules and representations. The proposal that seems most promising is that PDP-type structures might provide the underlying framework in which a rule and representation model might be implemented. To show how one might pursue such a strategy, I discuss recent research by Barsalou on the instability of concepts and show how that might be accounted for in a system whose microstructure had a PDP architecture. I also outline how adopting a multi-leveled view of the mind, where on one level the mind employed a PDP-type system and at another level constituted a rule processing system, would allow researchers to relocate some problems which seemed difficult to explain at one level, such as the capacity for concept learning, to another level where it could be handled in a straightforward manner
Bechtel, William P. (1987). Connectionism and the philosophy of mind. Southern Journal of Philosophy Supplement 26:17-41.   (Cited by 18 | Annotation | Google)
Bechtel, William P. & Abrahamsen, Adele A. (1992). Connectionism and the future of folk psychology. In Robert G. Burton (ed.), Minds: Natural and Artificial. SUNY Press.   (Cited by 3 | Google | More links)
Bechtel, William P. (1985). Contemporary connectionism: Are the new parallel distributed processing models cognitive or associationist? Behaviorism 13:53-61.   (Cited by 8 | Google)
Bechtel, William P. (1993). The case for connectionism. Philosophical Studies 71 (2):119-54.   (Cited by 5 | Google | More links)
Bechtel, William P. (1993). The path beyond first-order connectionism. Mind and Language 8 (4):531-539.   (Cited by 6 | Google | More links)
Bechtel, William P. (1986). What happens to accounts of mind-brain relations if we forgo an architecture of rules and representations? Philosophy of Science Association 1986.   (Annotation | Google)
Bechtel, William P. (1996). What should a connectionist philosophy of science look like? In The Churchlands and Their Critics. OUP.   (Cited by 5 | Google | More links)
Abstract: The reemergence of connectionism2 has profoundly altered the philosophy of mind. Paul Churchland has argued that it should equally transform the philosophy of science. He proposes that connectionism offers radical and useful new ways of understanding theories and explanations
Berkeley, Istvan S. N. (ms). A revisionist history of connectionism.   (Cited by 1 | Google)
Abstract: According to the standard (recent) history of connectionism (see for example the accounts offered by Hecht-Nielsen (1990: pp. 14-19) and Dreyfus and Dreyfus (1988), or Papert's (1988: pp. 3-4) somewhat whimsical description), in the early days of Classical Computational Theory of Mind (CCTM) based AI research, there was also another allegedly distinct approach, one based upon network models. The work on network models seems to fall broadly within the scope of the term 'connectionist' (see Aizawa 1992), although the term had yet to be coined at the time. These two approaches were "two daughter sciences" according to Papert (1988: p. 3). The fundamental difference between these two 'daughters', lay (according to Dreyfus and Dreyfus (1988: p. 16)) in what they took to be the paradigm of intelligence. Whereas the early connectionists took learning to be fundamental, the traditional school concentrated upon problem solving
Berkeley, István S. N. (online). Some myths of connectionism.   (Cited by 1 | Google)
Abstract: Since the emergence of what Fodor and Pylyshyn (1988) call 'new connectionism', there can be little doubt that connectionist research has become a significant topic for discussion in the Philosophy of Cognitive Science and the Philosophy of Mind. In addition to the numerous papers on the topic in philosophical journals, almost every recent book in these areas contain at least a brief reference to, or discussion of, the issues raised by connectionist research (see Sterelny 1990, Searle, 1992, and O Nualláin, 1995, for example). Other texts have focused almost exclusively upon connectionist issues (see Clark, 1993, Bechtel and Abrahamsen, 1991 and Lloyd, 1989, for example). Regrettably the discussions of connectionism found in the philosophical literature suffer from a number of deficiencies. My purpose in this paper is to highlight one particular problem and attempt to take a few steps to remedy the situation
Berkeley, Istvan S. N. (online). What is connectionism?   (Google)
Abstract: Connectionism is a style of modeling based upon networks of interconnected simple processing devices. This style of modeling goes by a number of other names too. Connectionist models are also sometimes referred to as 'Parallel Distributed Processing' (or PDP for short) models or networks. Connectionist systems are also sometimes referred to as 'neural networks' (abbreviated to NNs) or 'artificial neural networks' (abbreviated to ANNs). Although there may be some rhetorical appeal to this neural nomenclature, it is in fact misleading as connectionist networks are commonly significantly dissimilar to neurological systems. For this reason, I will avoid using this terminology, other than in direct quotations. Instead, I will follow the practice I have adopted above and use 'connectionist' as my primary term for systems of this kind.
Bickle, John (1995). Connectionism, reduction, and multiple realizability. Behavior and Philosophy 23 (2):29-39.   (Cited by 3 | Google)
Blackmore, Susan J. (2003). The case of the mysterious mind: Review of Radiant Cool, by Dan Lloyd. New Scientist 13:36-39.   (Cited by 3 | Google | More links)
Bradshaw, Denny E. (1991). Connectionism and the specter of representationalism. In Terence E. Horgan & John L. Tienson (eds.), Connectionism and the Philosophy of Mind. Kluwer.   (Cited by 4 | Annotation | Google)
Christie, Drew (1993). Comments on Bechtel's The Case for Connectionism. Philosophical Studies 71 (2):155-162.   (Cited by 1 | Google | More links)
Churchland, Patricia S. & Sejnowski, Terrence J. (1989). Neural representation and neural computation. In L. Nadel (ed.), Neural Connections, Mental Computations. MIT Press.   (Cited by 78 | Annotation | Google | More links)
Churchland, Paul M. (1989). On the nature of explanation: A PDP approach. In A Neurocomputational Perspective. MIT Press.   (Cited by 9 | Annotation | Google | More links)
Churchland, Paul M. (1989). On the nature of theories: A neurocomputational perspective. Minnesota Studies in the Philosophy of Science 14.   (Cited by 22 | Annotation | Google)
Clark, Andy (1990). Connectionism, competence and explanation. British Journal for the Philosophy of Science 41 (June):195-222.   (Cited by 25 | Annotation | Google | More links)
Abstract: A competence model describes the abstract structure of a solution to some problem, or class of problems, facing the would-be intelligent system. Competence models can be quite detailed, specifying far more than merely the function to be computed. But for all that, they are pitched at some level of abstraction from the details of any particular algorithm or processing strategy which may be said to realize the competence. Indeed, it is the point and virtue of such models to specify some equivalence class of algorithms/processing strategies so that the common properties highlighted by the chosen class may feature in psychologically interesting accounts. A question arises concerning the type of relation a theorist might expect to hold between such a competence model and a psychologically real processing strategy. Classical work in cognitive science expects the actual processing to depend on explicit or tacit knowledge of the competence theory. Connectionist work, for reasons to be explained, represents a departure from this norm. But the precise way in which a connectionist approach may disturb the satisfying classical symmetry of competence and processing has yet to be properly specified. A standard 'Newtonian' connectionist account, due to Paul Smolensky, is discussed and contrasted with a somewhat different 'rogue' account. A standard connectionist understanding has it that a classical competence theory describes an idealized subset of a network's behaviour. But the network's behaviour is not to be explained by its embodying explicit or tacit knowledge of the information laid out in the competence theory. A rogue model, by contrast, posits either two systems, or two aspects of a single system, such that one system does indeed embody the knowledge laid out in the competence theory.
Clark, Andy (1995). Connectionist minds. In Connectionism: Debates on Psychological Explanation. Cambridge: Blackwell.   (Cited by 10 | Google)
Clark, Andy (1989). Microcognition. MIT Press.   (Cited by 300 | Annotation | Google | More links)
Clark, Andy (1989). Microfunctionalism: Connectionism and the Scientific Explanation of Mental States. In A. Clark (ed.), Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing. MIT Press.   (Google | More links)
Abstract: This is an amended version of material that first appeared in A. Clark, Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing (MIT Press, Cambridge, MA, 1989), Ch. 1, 2, and 6. It appears in German translation in Metzinger,T (Ed) DAS LEIB-SEELE-PROBLEM IN DER ZWEITEN HELFTE DES 20 JAHRHUNDERTS (Frankfurt am Main: Suhrkamp. 1999)
Clark, Andy (1991). Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing. Cambridge: MIT Press.   (Cited by 224 | Google | More links)
Clark, Andy & Eliasmith, Chris (2002). Philosophical issues in brain theory and connectionism. In Michael A. Arbib (ed.), The Handbook of Brain Theory and Neural Networks, Second Edition. MIT Press.   (Cited by 7 | Google | More links)
Collier, Mark (1999). Filling the gaps: Hume and connectionism on the continued existence of unperceived objects. Hume Studies 25 (1 and 2):155-170.   (Google)
Copeland, Jack (1996). On Alan Turing's anticipation of connectionism. Synthese 108 (3):361-377.   (Cited by 20 | Google | More links)
Abstract:   It is not widely realised that Turing was probably the first person to consider building computing machines out of simple, neuron-like elements connected together into networks in a largely random manner. Turing called his networks unorganised machines. By the application of what he described as 'appropriate interference, mimicking education', an unorganised machine can be trained to perform any task that a Turing machine can carry out, provided the number of neurons is sufficient. Turing proposed simulating both the behaviour of the network and the training process by means of a computer program. We outline Turing's connectionist project of 1948.
Cummins, Robert E. (1995). Connectionism and the rationale constraint on cognitive explanations. Philosophical Perspectives 9:105-25.   (Cited by 3 | Google | More links)
Cummins, Robert E. & Schwarz, Georg (1991). Connectionism, computation, and cognition. In Terence E. Horgan & John L. Tienson (eds.), Connectionism and the Philosophy of Mind. Kluwer.   (Cited by 55 | Annotation | Google)
Cummins, Robert E. & Schwarz, Georg (1987). Radical connectionism. Southern Journal of Philosophy Supplement 26:43-61.   (Cited by 8 | Annotation | Google)
Davies, Martin (1989). Connectionism, modularity and tacit knowledge. British Journal for the Philosophy of Science 40 (December):541-55.   (Cited by 11 | Annotation | Google | More links)
Abstract: In this paper, I define tacit knowledge as a kind of causal-explanatory structure, mirroring the derivational structure in the theory that is tacitly known. On this definition, tacit knowledge does not have to be explicitly represented. I then take the notion of a modular theory, and project the idea of modularity to several different levels of description: in particular, to the processing level and the neurophysiological level. The fundamental description of a connectionist network lies at a level between the processing level and the physiological level. At this level, connectionism involves a characteristic departure from modularity, and a correlative absence of syntactic structure. This is linked to the fact that tacit knowledge descriptions of networks are only approximately true. A consequence is that strict causal systematicity in cognitive processes poses a problem for the connectionist programme
Duran, Jane & Doell, Ruth (1993). Naturalized epistemology, connectionism and biology. Dialectica 47 (4):327-336.   (Google)
García-Carpintero, Manuel (1995). The philosophical import of connectionism: A critical notice of Andy Clark's associative engines. Mind and Language 10 (4):370-401.   (Cited by 1 | Google)
Globus, Gordon G. (1992). Derrida and connectionism: Differance in neural nets. Philosophical Psychology 5 (2):183-97.   (Cited by 2 | Google)
Abstract: A possible relation between Derrida's deconstruction of metaphysics and connectionism is explored by considering différance in neural net terms. First différance, as the crossing of Saussurian difference and Freudian deferral, is modeled, and then the fuller 'sheaf of différance' is taken up. The metaphysically conceived brain has two versions: in the traditional computational version the brain processes information like a computer and in the connectionist version the brain computes input vector to output vector transformations non-symbolically. The 'deconstructed brain' neither processes information nor computes functions but is spontaneously economical.
Hadley, Robert F. (1999). Connectionism and novel combinations of skills: Implications for cognitive architecture. Minds and Machines 9 (2):197-221.   (Cited by 11 | Google | More links)
Abstract:   In the late 1980s, there were many who heralded the emergence of connectionism as a new paradigm – one which would eventually displace the classically symbolic methods then dominant in AI and Cognitive Science. At present, there remain influential connectionists who continue to defend connectionism as a more realistic paradigm for modeling cognition, at all levels of abstraction, than the classical methods of AI. Not infrequently, one encounters arguments along these lines: given what we know about neurophysiology, it is just not plausible to suppose that our brains are digital computers. Thus, they could not support a classical architecture. I argue here for a middle ground between connectionism and classicism. I assume, for argument's sake, that some form(s) of connectionism can provide reasonably approximate models – at least for lower-level cognitive processes. Given this assumption, I argue on theoretical and empirical grounds that most human mental skills must reside in separate connectionist modules or sub-networks. Ultimately, it is argued that the basic tenets of connectionism, in conjunction with the fact that humans often employ novel combinations of skill modules in rule following and problem solving, lead to the plausible conclusion that, in certain domains, high level cognition requires some form of classical architecture. During the course of argument, it emerges that only an architecture with classical structure could support the novel patterns of information flow and interaction that would exist among the relevant set of modules. Such a classical architecture might very well reside in the abstract levels of a hybrid system whose lower-level modules are purely connectionist
Hatfield, Gary (1990). Gibsonian representations and connectionist symbol-processing: Prospects for unification. Psychological Research 52:243-52.   (Cited by 5 | Annotation | Google)
Horgan, Terence E. & Tienson, John L. (1999). Authors' replies. Acta Analytica 22 (22):275-287.   (Google)
Horgan, Dianne D. & Hacker, Douglas J. (1999). Beginning a theoretician-practitioner dialogue about connectionism. Acta Analytica 22 (22):261-273.   (Google)
Horgan, Terence E. & Tienson, John L. (eds.) (1991). Connectionism and the Philosophy of Mind. Kluwer.   (Cited by 30 | Google)
Abstract: "A third of the papers in this volume originated at the 1987 Spindel Conference ... at Memphis State University"--Pref.
Horgan, Terence E. & Tienson, John L. (1996). Connectionism and the Philosophy of Psychology. MIT Press.   (Cited by 123 | Google)
Abstract: In Connectionism and the Philosophy of Psychology, Horgan and Tienson articulate and defend a new view of cognition.
Horgan, Terence E. (1997). Connectionism and the philosophical foundations of cognitive science. Metaphilosophy 28 (1-2):1-30.   (Cited by 5 | Google | More links)
Horgan, Terence E. (1997). Modelling the noncomputational mind: Reply to Litch. Philosophical Psychology 10 (3):365-371.   (Google)
Abstract: I explain why, within the nonclassical framework for cognitive science we describe in the book, cognitive-state transitions can fail to be tractably computable even if they are subserved by a discrete dynamical system whose mathematical-state transitions are tractably computable. I distinguish two ways that cognitive processing might conform to programmable rules in which all operations that apply to representation-level structure are primitive, and two corresponding constraints on models of cognition. Although Litch is correct in maintaining that classical cognitive science is not committed to the first constraint, it is committed to the second. This fact constitutes an illuminating gloss on our claim that one foundational assumption of classicism is that human cognition conforms to programmable, representation-level, rules
Horgan, Terence E. (1999). Short précis of Connectionism and the Philosophy of Psychology. Acta Analytica 22 (22):9-21.   (Cited by 4 | Google)
Humphreys, Glyn W. (1986). Information-processing systems which embody computational rules: The connectionist approach. Mind and Language 1:201-12.   (Cited by 2 | Google)
Kirsh, David (1992). PDP learnability and innate knowledge of language. In S. Davis (ed.), Connectionism: Theory and Practice (Volume III of The Vancouver Studies in Cognitive Science). Oxford University Press.   (Google)
Abstract: It is sometimes argued that if PDP networks can be trained to make correct judgements of grammaticality we have an existence proof that there is enough information in the stimulus to permit learning grammar by inductive means alone. This seems superficially inconsistent with Gold's theorem, and at a deeper level with the fact that networks are designed on the basis of assumptions about the domain of the function to be learned. To clarify the issue I consider what we should learn from Gold's theorem, then go on to inquire into what it means to say that knowledge is domain specific. I first try sharpening the intuitive notion of domain specific knowledge by reviewing the alleged difference between processing limitations due to shortage of resources vs shortages of knowledge. After rejecting different formulations of this idea, I suggest that a model is language specific if it transparently refers to entities and facts about language as opposed to entities and facts of more general mathematical domains. This is a useful but not necessary condition. I then suggest that a theory is domain specific if it belongs to a model family which is attuned in a law-like way to domain regularities. This leads to a comparison of PDP and parameter setting models of language learning. I conclude with a novel version of the poverty of stimulus argument.
Laakso, Aarre & Cottrell, Garrison W. (2006). Churchland on connectionism. In Brian L. Keeley (ed.), Paul Churchland. Cambridge: Cambridge University Press.   (Google)
Legg, C. R. (1988). Connectionism and physiological psychology: A marriage made in heaven? Philosophical Psychology 1:263-78.   (Google)
Litch, Mary (1997). Computation, connectionism and modelling the mind. Philosophical Psychology 10 (3):357-364.   (Google)
Abstract: Any analysis of the concept of computation as it occurs in the context of a discussion of the computational model of the mind must be consonant with the philosophic burden traditionally carried by that concept as providing a bridge between a physical and a psychological description of an agent. With this analysis in hand, one may ask the question: are connectionist-based systems consistent with the computational model of the mind? The answer depends upon which of several versions of connectionism one presupposes: non-learning connectionist-based systems as simulated on digital computers are consistent with the computational model of the mind, whereas connectionist-based systems (/dynamical systems) qua analog systems are not
Litch, Mary (1999). Learning connectionist networks and the philosophy of psychology. Acta Analytica 22 (22):87-110.   (Google)
Lloyd, Dan (1994). Connectionist hysteria: Reducing a Freudian case study to a network model. Philosophy, Psychiatry, and Psychology 1 (2):69-88.   (Cited by 10 | Google)
Lloyd, Dan (1989). Parallel distributed processing and cognition: Only connect? In Simple Minds. MIT Press.   (Annotation | Google)
Lycan, William G. (1991). Homuncular functionalism meets PDP. In William Ramsey, Stephen P. Stich & D. Rumelhart (eds.), Philosophy and Connectionist Theory. Lawrence Erlbaum.   (Cited by 7 | Annotation | Google)
Macdonald, C. (ed.) (1995). Connectionism: Debates on Psychological Explanation. Blackwell.   (Cited by 36 | Google)
McLaughlin, Brian P. (1987). Tye on connectionism. Southern Journal of Philosophy (Suppl.) 185:185-193.   (Cited by 2 | Google)
Mills, Stephen L. (1993). Wittgenstein and connectionism: A significant complementarity? Philosophy 34:137-157.   (Cited by 4 | Google)
Miscevic, Nenad (1994). Connectionism and epistemic value. Acta Analytica 12 (12):19-37.   (Google)
Nenon, Thomas J. (1994). Connectionism and phenomenology. In Phenomenology of the Cultural Disciplines. Dordrecht: Kluwer.   (Google)
Niklasson, L. F. & van Gelder, Tim (online). Can connectionist models exhibit non-classical structure sensitivity?   (Cited by 30 | Google | More links)
O'Brien, Gerard & Opie, Jonathan (2002). Radical connectionism: Thinking with (not in) language. Language and Communication 22 (3):313-329.   (Cited by 12 | Google | More links)
Abstract: In this paper we defend a position we call radical connectionism. Radical connectionism claims that cognition _never_ implicates an internal symbolic medium, not even when natural language plays a part in our thought processes. On the face of it, such a position renders the human capacity for abstract thought quite mysterious. However, we argue that connectionism is committed to an analog conception of neural computation, and that representation of the abstract is no more problematic for a system of analog vehicles than for a symbol system. Natural language is therefore not required as a representational medium for abstract thought. Since natural language is arguably not a representational medium _at all_, but a conventionally governed scheme of communicative signals, we suggest that the role of internalised (i.e., self-directed) language is best conceived in terms of the coordination and control of cognitive activities within the brain.
Piccinini, Gualtiero (2007). Connectionist computation. In Gualtiero Piccinini (ed.), Proceedings of the 2007 International Joint Conference on Neural Networks.   (Google)
Abstract: The following three theses are inconsistent: (1) (Paradigmatic) connectionist systems perform computations. (2) Performing computations requires executing programs. (3) Connectionist systems do not execute programs. Many authors embrace (2). This leads them to a dilemma: either connectionist systems execute programs or they don't compute. Accordingly, some authors attempt to deny (1), while others attempt to deny (3). But as I will argue, there are compelling reasons to accept both (1) and (3). So, we should replace (2) with a more satisfactory account of computation. Once we do, we can see more clearly what is peculiar to connectionist computation.
Place, Ullin T. (1999). Connectionism and the problem of consciousness. Acta Analytica 22 (22):197-226.   (Google)
Plunkett, Kim (2001). Connectionism today. Synthese 129 (2):185-194.   (Cited by 2 | Google | More links)
Abstract:   Connectionist networks have been used to model a wide range of cognitive phenomena, including developmental, neuropsychological and normal adult behaviours. They have offered radical alternatives to traditional accounts of well-established facts about cognition. The primary source of the success of these models is their sensitivity to statistical regularities in their training environment. This paper provides a brief description of the connectionist toolbox and how this has developed over the past two decades, with particular reference to the problem of reading aloud.
Ramsey, William & Stich, Stephen P. (1990). Connectionism and three levels of nativism. Synthese 82 (2):177-205.   (Cited by 14 | Annotation | Google | More links)
Abstract:   Along with the increasing popularity of connectionist language models has come a number of provocative suggestions about the challenge these models present to Chomsky's arguments for nativism. The aim of this paper is to assess these claims. We begin by reconstructing Chomsky's argument from the poverty of the stimulus and arguing that it is best understood as three related arguments, with increasingly strong conclusions. Next, we provide a brief introduction to connectionism and give a quick survey of recent efforts to develop networks that model various aspects of human linguistic behavior. Finally, we explore the implications of this research for Chomsky's arguments. Our claim is that the relation between connectionism and Chomsky's views on innate knowledge is more complicated than many have assumed, and that even if these models enjoy considerable success the threat they pose for linguistic nativism is small
Ramsey, William; Stich, Stephen P. & Rumelhart, D. M. (eds.) (1991). Philosophy and Connectionist Theory. Lawrence Erlbaum.   (Cited by 46 | Google)
Abstract: The philosophy of cognitive science has recently become one of the most exciting and fastest growing domains of philosophical inquiry and analysis. Until the early 1980s, nearly all of the models developed treated cognitive processes -- like problem solving, language comprehension, memory, and higher visual processing -- as rule-governed symbol manipulation. However, this situation has changed dramatically over the last half dozen years. In that period there has been an enormous shift of attention toward connectionist models of cognition that are inspired by the network-like architecture of the brain. Because of their unique architecture and style of processing, connectionist systems are generally regarded as radically different from the more traditional symbol manipulation models. This collection was designed to provide philosophers who have been working in the area of cognitive science with a forum for expressing their views on these recent developments. Because the symbol-manipulating paradigm has been so important to the work of contemporary philosophers, many have watched the emergence of connectionism with considerable interest. The contributors take very different stands toward connectionism, but all agree that the potential exists for a radical shift in the way many philosophers think of various aspects of cognition. Exploring this potential and other philosophical dimensions of connectionist research is the aim of this volume
Rosenberg, Jay F. (1989). Connectionism and cognition. Bielefeld Report.   (Cited by 7 | Annotation | Google)
Sehon, Scott R. (1998). Connectionism and the causal theory of action explanation. Philosophical Psychology 11 (4):511-532.   (Cited by 2 | Google)
Abstract: It is widely assumed that common sense psychological explanations of human action are a species of causal explanation. I argue against this construal, drawing on Ramsey et al.'s paper, “Connectionism, eliminativism, and the future of folk psychology”. I argue that if certain connectionist models are correct, then mental states cannot be identified with functionally discrete causes of behavior, and I respond to some recent attempts to deny this claim. However, I further contend that our common sense psychological practices are not committed to the falsity of such connectionist models. The paper concludes that common sense psychology is not committed to the identification of mental states with functionally discrete causes of behavior, and hence that common sense psychology is not committed to the causal account of action explanation.
Shanon, Benny (1992). Are connectionist models cognitive? Philosophical Psychology 5 (3):235-255.   (Cited by 5 | Annotation | Google)
Abstract: In their critique of connectionist models Fodor and Pylyshyn (1988) dismiss such models as not being cognitive or psychological. Evaluating Fodor and Pylyshyn's critique requires examining what is required in characterizing models as 'cognitive'. The present discussion examines the various senses of this term. It argues that the answer to the title question varies with these different senses. Indeed, by one sense of the term, neither representationalism nor connectionism is cognitive. General ramifications of such an appraisal are discussed and alternative avenues for cognitive research are suggested.
Smith, Barry (1997). The connectionist mind: A study of Hayekian psychology. In Stephen F. Frowen (ed.), Hayek: Economist and Social Philosopher: A Critical Retrospect. St. Martin's Press.   (Cited by 16 | Google | More links)
Abstract: Introduction I shall begin my remarks with some discussion of recent work in cognitive science, and the participants in this meeting might find it useful to note that I might equally well have chosen as title of my paper something like 'Artificial Intelligence and the Free Market Order'. They might care to note also that I am, as far as the achievements and goals of research in artificial intelligence are concerned, something of a sceptic. My appeal to cognitive science in what follows is designed to serve clarificatory ends, and to raise new questions, of a sort which will become clear as the paper progresses
Stark, Herman E. (1994). Connectionism and the form of rational norms. Acta Analytica 12 (12):39-53.   (Cited by 2 | Google)
Sterelny, Kim (1990). Connectionism. In The Representational Theory of Mind. Blackwell.   (Google)
Thagard, Paul R. (1989). Connectionism and epistemology: Goldman on Winner-take-all networks. Philosophia 19 (2-3):189-196.   (Cited by 1 | Google | More links)
Tienson, John L. (1987). Introduction to connectionism. Southern Journal of Philosophy (Suppl.) 1:1-16.   (Cited by 15 | Google)
van Gelder, Tim (1993). Connectionism and the mind-body problem: Exposing the distinction between mind and cognition. Artificial Intelligence Review 7:355-369.   (Google)

6.3g Philosophy of Connectionism, Foundational Empirical Issues

Aizawa, Kenneth (1992). Connectionism and artificial intelligence: History and philosophical interpretation. Journal for Experimental and Theoretical Artificial Intelligence 4:1992.   (Cited by 1 | Google | More links)
Beaman, C. Philip (2000). Neurons amongst the symbols? Behavioral and Brain Sciences 23 (4):468-470.   (Google)
Abstract: Page's target article presents an argument for the use of localist, connectionist models in future psychological theorising. The “manifesto” marshals a set of arguments in favour of localist connectionism and against distributed connectionism, but in doing so misses a larger argument concerning the level of psychological explanation that is appropriate to a given domain.
Berkeley, Istvan S. N. (ms). Connectionism reconsidered: Minds, machines and models.   (Cited by 1 | Google | More links)
Abstract: In this paper the issue of drawing inferences about biological cognitive systems on the basis of connectionist simulations is addressed. In particular, the justification of inferences based on connectionist models trained using the backpropagation learning algorithm is examined. First it is noted that a justification commonly found in the philosophical literature is inapplicable. Then some general issues are raised about the relationships between models and biological systems. A way of conceiving the role of hidden units in connectionist networks is then introduced. This, in combination with an assumption about the way evolution goes about solving problems, is then used to suggest a means of justifying inferences about biological systems based on connectionist research
Clark, Andy (1994). Representational trajectories in connectionist learning. Minds and Machines 4 (3):317-32.   (Cited by 5 | Annotation | Google | More links)
Abstract:   The paper considers the problems involved in getting neural networks to learn about highly structured task domains. A central problem concerns the tendency of networks to learn only a set of shallow (non-generalizable) representations for the task, i.e., to miss the deep organizing features of the domain. Various solutions are examined, including task specific network configuration and incremental learning. The latter strategy is the more attractive, since it holds out the promise of a task-independent solution to the problem. Once we see exactly how the solution works, however, it becomes clear that it is limited to a special class of cases in which (1) statistically driven undersampling is (luckily) equivalent to task decomposition, and (2) the dangers of unlearning are somehow being minimized. The technique is suggestive nonetheless, for a variety of developmental factors may yield the functional equivalent of both statistical AND informed undersampling in early learning
Clark, Andy & Thornton, S. (1997). Trading spaces: Computation, representation, and the limits of uninformed learning. Behavioral and Brain Sciences 20 (1):57-66.   (Cited by 204 | Google | More links)
Clark, Andy & Thornton, Chris (1997). Relational learning re-examined. Behavioral and Brain Sciences 20 (1):83-90.   (Google)
Cliff, D. (1990). Computational Neuroethology: A Provisional Manifesto. In Jean-Arcady Meyer & Stewart W. Wilson (eds.), From Animals to Animats: Proceedings of The First International Conference on Simulation of Adaptive Behavior (Complex Adaptive Systems). Cambridge University Press.   (Cited by 103 | Annotation | Google | More links)
Dawson, Michael R. W. & Schopflocher, D. P. (1992). Autonomous processing in parallel distributed processing networks. Philosophical Psychology 5 (2):199-219.   (Google)
Abstract: This paper critically examines the claim that parallel distributed processing (PDP) networks are autonomous learning systems. A PDP model of a simple distributed associative memory is considered. It is shown that the 'generic' PDP architecture cannot implement the computations required by this memory system without the aid of external control. In other words, the model is not autonomous. Two specific problems are highlighted: (i) simultaneous learning and recall are not permitted to occur as would be required of an autonomous system; (ii) connections between processing units cannot simultaneously represent current and previous network activation as would be required if learning is to occur. Similar problems exist for more sophisticated networks constructed from the generic PDP architecture. We argue that this is because these models are not adequately constrained by the properties of the functional architecture assumed by PDP modelers. It is also argued that without such constraints, PDP researchers cannot claim to have developed an architecture radically different from that proposed by the Classical approach in cognitive science
Franklin, James (1996). How a neural net grows symbols. Proc 7.   (Cited by 1 | Google)
Abstract: Brains, unlike artificial neural nets, use symbols to summarise and reason about perceptual input. But unlike symbolic AI, they “ground” the symbols in the data: the symbols have meaning in terms of data, not just meaning imposed by the outside user. If neural nets could be made to grow their own symbols in the way that brains do, there would be a good prospect of combining neural networks and symbolic AI, in such a way as to combine the good features of each.
Graham, George (1987). Connectionism in Pavlovian harness. Southern Journal of Philosophy (Suppl.) 73:73-91.   (Cited by 2 | Google)
Hanson, Stephen J. & Burr, D. (1990). What connectionist models learn. Behavioral and Brain Sciences.   (Cited by 81 | Annotation | Google)
Kaplan, S.; Weaver, M. & French, Robert M. (1990). Active symbols and internal models: Towards a cognitive connectionism. AI and Society.   (Cited by 18 | Annotation | Google | More links)
Kirsh, David (1987). Putting a price on cognition. Southern Journal of Philosophy Supplement 26:119-35.   (Cited by 9 | Annotation | Google)
Lachter, J. & Bever, Thomas G. (1988). The relation between linguistic structure and associative theories of language learning. Cognition 28:195-247.   (Cited by 66 | Annotation | Google | More links)
Mills, Stephen L. (1989). Connectionism, the classical theory of cognition, and the hundred step constraint. Acta Analytica 4 (4):5-38.   (Google)
Nelson, Raymond J. (1989). Philosophical issues in Edelman's neural darwinism. Journal of Experimental and Theoretical Artificial Intelligence 1:195-208.   (Cited by 2 | Annotation | Google | More links)
Oaksford, Mike; Chater, Nick & Stenning, Keith (1990). Connectionism, classical cognitive science and experimental psychology. AI and Society.   (Cited by 11 | Annotation | Google | More links)
O'Brien, Gerard (1998). The role of implementation in connectionist explanation. Psycoloquy 9 (6).   (Cited by 8 | Google | More links)
Pinker, Steven & Prince, Alan (1988). On language and connectionism. Cognition 28:73-193.   (Cited by 612 | Annotation | Google | More links)
Potrc, Matjaz (1995). Consciousness and connectionism--the problem of compatibility of type identity theory and of connectionism. Acta Analytica 13 (13):175-190.   (Google)
Ross, Don (1998). Internal recurrence. Dialogue 37 (1):155-161.   (Google)
Roth, Martin (2005). Program execution in connectionist networks. Mind and Language 20 (4):448-467.   (Cited by 1 | Google | More links)
Abstract: Recently, connectionist models have been developed that seem to exhibit structure-sensitive cognitive capacities without executing a program. This paper examines one such model and argues that it does execute a program. The argument proceeds by showing that what is essential to running a program is preserving the functional structure of the program. It has generally been assumed that this can only be done by systems possessing a certain temporal-causal organization. However, counterfactual-preserving functional architecture can be instantiated in other ways, for example geometrically, which are realizable by connectionist networks