Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.

6.3g. Philosophy of Connectionism, Foundational Empirical Issues

Aizawa, Kenneth (1992). Connectionism and artificial intelligence: History and philosophical interpretation. Journal of Experimental and Theoretical Artificial Intelligence 4.   (Cited by 1)
Beaman, C. Philip (2000). Neurons amongst the symbols? Behavioral and Brain Sciences 23 (4):468-470.
Abstract: Page's target article presents an argument for the use of localist, connectionist models in future psychological theorising. The “manifesto” marshals a set of arguments in favour of localist connectionism and against distributed connectionism, but in doing so misses a larger argument concerning the level of psychological explanation that is appropriate to a given domain.
Berkeley, Istvan S. N. (ms). Connectionism reconsidered: Minds, machines and models.   (Cited by 1)
Abstract: In this paper the issue of drawing inferences about biological cognitive systems on the basis of connectionist simulations is addressed. In particular, the justification of inferences based on connectionist models trained using the backpropagation learning algorithm is examined. First it is noted that a justification commonly found in the philosophical literature is inapplicable. Then some general issues are raised about the relationships between models and biological systems. A way of conceiving the role of hidden units in connectionist networks is then introduced. This, in combination with an assumption about the way evolution goes about solving problems, is then used to suggest a means of justifying inferences about biological systems based on connectionist research.
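For readers unfamiliar with the models at issue, here is a minimal sketch of a feedforward network trained by backpropagation, with its hidden-unit activations printed for inspection afterwards. The architecture, task (XOR), and learning rate are illustrative assumptions, not taken from Berkeley's paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# 2 inputs -> 4 hidden units -> 1 output, all sigmoid. Sizes are arbitrary.
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)        # hidden-unit activations
    out = sigmoid(h @ W2 + b2)      # network output
    # Backpropagate the output error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

# The trained hidden-unit activation patterns are the sort of thing
# inferences about "what the network represents" are drawn from.
print(sigmoid(X @ W1 + b1).round(2))
```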
Clark, Andy (1994). Representational trajectories in connectionist learning. Minds and Machines 4 (3):317-32.   (Cited by 5)
Abstract: The paper considers the problems involved in getting neural networks to learn about highly structured task domains. A central problem concerns the tendency of networks to learn only a set of shallow (non-generalizable) representations for the task, i.e., to miss the deep organizing features of the domain. Various solutions are examined, including task-specific network configuration and incremental learning. The latter strategy is the more attractive, since it holds out the promise of a task-independent solution to the problem. Once we see exactly how the solution works, however, it becomes clear that it is limited to a special class of cases in which (1) statistically driven undersampling is (luckily) equivalent to task decomposition, and (2) the dangers of unlearning are somehow being minimized. The technique is suggestive nonetheless, for a variety of developmental factors may yield the functional equivalent of both statistical AND informed undersampling in early learning.
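As a hedged illustration of the incremental strategy the abstract discusses, the sketch below trains a small network first on an undersampled slice of a domain and only then on the whole task. The task (3-bit parity) and the staging schedule are assumptions chosen for illustration, not Clark's own experiments.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(params, X, y, epochs, lr=0.5):
    # Plain full-batch backpropagation on a 1-hidden-layer sigmoid net.
    W1, b1, W2, b2 = params
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

rng = np.random.default_rng(1)
params = (rng.normal(scale=0.5, size=(3, 8)), np.zeros(8),
          rng.normal(scale=0.5, size=(8, 1)), np.zeros(1))

# Whole domain: 3-bit parity, a task with a "deep" organizing regularity.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)],
             dtype=float)
y = X.sum(axis=1, keepdims=True) % 2

# Stage 1: an undersampled slice (patterns with at most one bit set),
# on which the task reduces to a simpler component regularity.
easy = X.sum(axis=1) <= 1
train(params, X[easy], y[easy], epochs=2000)

# Stage 2: the full domain. Whether stage 1 helps, rather than simply
# being unlearned, is exactly the caveat the abstract raises.
train(params, X, y, epochs=20000)
```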
Clark, Andy & Thornton, Chris (1997). Trading spaces: Computation, representation, and the limits of uninformed learning. Behavioral and Brain Sciences 20 (1):57-66.   (Cited by 204)
Clark, Andy & Thornton, Chris (1997). Relational learning re-examined. Behavioral and Brain Sciences 20 (1):83-90.
Cliff, D. (1990). Computational neuroethology: A provisional manifesto. In Jean-Arcady Meyer & Stewart W. Wilson (eds.), From Animals to Animats: Proceedings of the First International Conference on Simulation of Adaptive Behavior (Complex Adaptive Systems). Cambridge University Press.   (Cited by 103)
Dawson, Michael R. W. & Schopflocher, D. P. (1992). Autonomous processing in parallel distributed processing networks. Philosophical Psychology 5 (2):199-219.
Abstract: This paper critically examines the claim that parallel distributed processing (PDP) networks are autonomous learning systems. A PDP model of a simple distributed associative memory is considered. It is shown that the 'generic' PDP architecture cannot implement the computations required by this memory system without the aid of external control. In other words, the model is not autonomous. Two specific problems are highlighted: (i) simultaneous learning and recall are not permitted to occur as would be required of an autonomous system; (ii) connections between processing units cannot simultaneously represent current and previous network activation as would be required if learning is to occur. Similar problems exist for more sophisticated networks constructed from the generic PDP architecture. We argue that this is because these models are not adequately constrained by the properties of the functional architecture assumed by PDP modelers. It is also argued that without such constraints, PDP researchers cannot claim to have developed an architecture radically different from that proposed by the Classical approach in cognitive science.
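A minimal sketch of the sort of distributed associative memory under discussion, assuming Hebbian outer-product storage over bipolar patterns. Note that the surrounding program, not the network itself, sequences the learn and recall phases: this is the external control at issue in the paper.

```python
import numpy as np

def learn(W, cue, target):
    # Hebbian outer-product storage: strengthen connections between
    # co-active units. This modifies the same weights recall reads from.
    return W + np.outer(target, cue)

def recall(W, cue):
    # One-step recall: drive the units through the stored connections.
    return np.sign(W @ cue)

n = 16
rng = np.random.default_rng(2)
a, b = rng.choice([-1.0, 1.0], size=(2, n))
p, q = rng.choice([-1.0, 1.0], size=(2, n))

W = np.zeros((n, n))
# The caller decides when the memory learns and when it recalls:
W = learn(W, a, b)   # phase 1: store a -> b
W = learn(W, p, q)   #          store p -> q
out = recall(W, a)   # phase 2: retrieve with cue a
# Retrieval is exact when the stored cues are sufficiently uncorrelated.
print(np.array_equal(out, b))
```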
Franklin, James (1996). How a neural net grows symbols. Proc 7.   (Cited by 1)
Abstract: Brains, unlike artificial neural nets, use symbols to summarise and reason about perceptual input. But unlike symbolic AI, they “ground” the symbols in the data: the symbols have meaning in terms of data, not just meaning imposed by the outside user. If neural nets could be made to grow their own symbols in the way that brains do, there would be a good prospect of combining neural networks and symbolic AI, in such a way as to combine the good features of each.
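One way to picture the proposal, offered only as a loose illustration under assumed data: discretise a network's continuous hidden activations by clustering them, and treat the cluster indices as rudimentary, data-grounded symbols.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=3):
    # Plain k-means: assign each point to its nearest center, then move
    # each center to the mean of its assigned points.
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        labels = ((points[:, None] - centers) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

# Stand-in "hidden activations": noisy samples around three prototypes,
# as if the network had settled into three activation regimes.
rng = np.random.default_rng(4)
prototypes = rng.random((3, 5))
acts = np.vstack([p + 0.05 * rng.standard_normal((20, 5)) for p in prototypes])

# Each activation vector receives a discrete token: a "symbol" whose
# meaning is fixed by the data that produced it, not by an outside user.
symbols = kmeans(acts, k=3)
print(symbols)
```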
Graham, George (1987). Connectionism in Pavlovian harness. Southern Journal of Philosophy Supplement 26:73-91.   (Cited by 2)
Hanson, Stephen J. & Burr, D. (1990). What connectionist models learn. Behavioral and Brain Sciences.   (Cited by 81)
Kaplan, S.; Weaver, M. & French, Robert M. (1990). Active symbols and internal models: Towards a cognitive connectionism. AI and Society.   (Cited by 18)
Kirsh, David (1987). Putting a price on cognition. Southern Journal of Philosophy Supplement 26:119-35.   (Cited by 9)
Lachter, J. & Bever, Thomas G. (1988). The relation between linguistic structure and associative theories of language learning. Cognition 28:195-247.   (Cited by 66)
Mills, Stephen L. (1989). Connectionism, the classical theory of cognition, and the hundred step constraint. Acta Analytica 4 (4):5-38.
Nelson, Raymond J. (1989). Philosophical issues in Edelman's neural Darwinism. Journal of Experimental and Theoretical Artificial Intelligence 1:195-208.   (Cited by 2)
Oaksford, Mike; Chater, Nick & Stenning, Keith (1990). Connectionism, classical cognitive science and experimental psychology. AI and Society.   (Cited by 11)
O'Brien, Gerard (1998). The role of implementation in connectionist explanation. Psycoloquy 9 (6).   (Cited by 8)
Pinker, Steven & Prince, Alan (1988). On language and connectionism. Cognition 28:73-193.   (Cited by 612)
Potrc, Matjaz (1995). Consciousness and connectionism--the problem of compatibility of type identity theory and of connectionism. Acta Analytica 13 (13):175-190.
Ross, Don (1998). Internal recurrence. Dialogue 37 (1):155-161.
Roth, Martin (2005). Program execution in connectionist networks. Mind and Language 20 (4):448-467.   (Cited by 1)
Abstract: Recently, connectionist models have been developed that seem to exhibit structure-sensitive cognitive capacities without executing a program. This paper examines one such model and argues that it does execute a program. The argument proceeds by showing that what is essential to running a program is preserving the functional structure of the program. It has generally been assumed that this can only be done by systems possessing a certain temporal-causal organization. However, counterfactual-preserving functional architecture can be instantiated in other ways, for example geometrically, which are realizable by connectionist networks.