Papers on AI and Computation (David Chalmers)

The papers here deal with various aspects of the foundations of artificial intelligence. Some deal with relatively abstract matters about the analysis of computation and its relation to cognition, or about the possibility of artificial intelligence. Others deal with more empirical matters concerning various sorts of artificial intelligence, including symbolic AI, connectionism, and artificial life.

The Singularity: A Philosophical Analysis. Journal of Consciousness Studies 17:7-65, 2010.

The Matrix as Metaphysics. In (C. Grau, ed) Philosophers Explore the Matrix. Oxford University Press, 2005.

A Computational Foundation for the Study of Cognition. Journal of Cognitive Science, forthcoming (2012).

This paper addresses some key questions about computation and its role in cognitive science. I give an account of what it takes for a physical system to implement a given computation (in terms of abstract patterns of causal organization), and use this account to defend "strong artificial intelligence" and justify the centrality of computational explanation in cognitive science. The paper was written in 1993 but remained unpublished for many years (though section 2 appeared as "On Implementing a Computation" in Minds and Machines, 1994); it is finally coming out as the subject of a 2012 symposium in the Journal of Cognitive Science.
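
As a rough illustration of the shape of this implementation condition (a minimal sketch, not code from the paper, and ignoring inputs, outputs, and combinatorial state structure): a physical system, modeled as a transition table over physical state labels, implements a simple input-free automaton just in case some mapping from physical states to automaton states sends every physical state transition to the corresponding automaton transition.

```python
from itertools import product

def implements(phys_next, fsa_next, mapping):
    # The mapping respects causal structure iff every physical transition
    # p -> phys_next[p] is sent to the automaton transition
    # mapping[p] -> fsa_next[mapping[p]].
    return all(mapping[phys_next[p]] == fsa_next[mapping[p]] for p in phys_next)

def find_implementation(phys_next, fsa_next):
    # Brute-force search over all candidate mappings (fine for toy state spaces).
    phys_states, fsa_states = list(phys_next), list(fsa_next)
    for choice in product(fsa_states, repeat=len(phys_states)):
        mapping = dict(zip(phys_states, choice))
        if implements(phys_next, fsa_next, mapping):
            return mapping
    return None

# Toy example: a four-state physical cycle implements a two-state
# automaton that alternates between A and B.
phys_next = {"p0": "p1", "p1": "p2", "p2": "p3", "p3": "p0"}
fsa_next = {"A": "B", "B": "A"}
print(find_implementation(phys_next, fsa_next))
```

The account in the paper adds further constraints (inputs, outputs, and combinatorial state structure), but this captures the basic idea of a mapping that mirrors the system's causal organization.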

Does A Rock Implement Every Finite-State Automaton? Synthese 108: 309-33, 1996.

In an appendix to his book Representation and Reality, Hilary Putnam "proves" that every ordinary open system implements every finite automaton, so that computation cannot provide a nonvacuous foundation for the sciences of the mind. I analyze Putnam's argument and find it wanting. The argument can be patched up to some extent, but this only points the way to a better definition of implementation (of combinatorial-state automata) that is invulnerable to such an objection. A couple of open questions remain, however.
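
The flavor of Putnam's construction can be conveyed with a toy sketch (illustrative only, not from the paper): over an interval an open system passes through a sequence of distinct maximal physical states, so for any run of an automaton one can simply stipulate that the i-th physical state maps to whatever state the automaton occupies at step i, and the "implementation" follows by construction.

```python
def fsa_run(fsa_next, start, steps):
    # Unroll an input-free automaton for a fixed number of steps.
    run, state = [start], start
    for _ in range(steps):
        state = fsa_next[state]
        run.append(state)
    return run

def putnam_mapping(physical_trace, fsa_next, start):
    # Map the i-th (distinct) physical state to the automaton state at step i.
    # Since the physical states are all distinct, the mapping is well defined,
    # and the trace "mirrors" the automaton run purely by stipulation.
    run = fsa_run(fsa_next, start, len(physical_trace) - 1)
    return dict(zip(physical_trace, run))

# Any sequence of distinct physical states will do -- even a rock's
# maximal states at successive times.
rock_trace = ["rock@t0", "rock@t1", "rock@t2", "rock@t3"]
print(putnam_mapping(rock_trace, {"A": "B", "B": "A"}, "A"))
```

The combinatorial-state-automaton definition blocks this sort of move by requiring reliable, counterfactual-supporting transitions between states with the right combinatorial structure, which a mapping stipulated after the fact does not supply.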

Minds, Machines, and Mathematics. Psyche 2:11-20, 1995.

This is a commentary on Roger Penrose's book Shadows of the Mind, focusing on his attempt to use Gödel's theorem to demonstrate the noncomputability of thought. I argue that the attempt is ultimately unsuccessful, but that there is a novel argument here that many commentators have overlooked, and that it raises many interesting issues. I also comment on his proposals concerning "the missing science of consciousness". The paper appeared in PSYCHE as part of a symposium on Penrose's book; Penrose replied in "Beyond the Doubting of a Shadow".

High-Level Perception, Analogy, and Representation: A Critique of Artificial Intelligence Methodology.

(Co-authored with Bob French and Doug Hofstadter.) This paper argues that high-level perception is crucially involved in most cognitive processing. We mount a critique of the common approach of using "frozen", hand-coded representations in cognitive modeling (exemplified by Langley and Simon's BACON and Gentner's Structure-Mapping Engine), and argue for a different approach in which representations are constructed and molded "on the fly". This paper appeared in the Journal of Experimental and Theoretical Artificial Intelligence in 1992, and also in Hofstadter's book Fluid Concepts and Creative Analogies. There has since been a response by Forbus, Gentner, Markman, and Ferguson, as well as a commentary by Morrison and Dietrich.

Syntactic Transformations on Distributed Representations.

In this paper I demonstrate that a connectionist network can be used to perform systematic structure-sensitive transformations on compressed distributed representations of compositional structures. Using representations developed by a Recursive Auto-Associative Memory (a model of Jordan Pollack's), a feedforward network learns to map the representation of an active sentence onto that of the corresponding passive sentence, and vice versa. This paper appeared in Connection Science in 1990. This line of research has since been extended by a number of others, e.g. in Lonnie Chrisman's "Learning Recursive Distributed Representations for Holistic Computation".
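
A minimal sketch of the transformation-network setup (illustrative only: the random vectors below merely stand in for genuine RAAM encodings, and none of the architecture details are taken from the paper): a one-hidden-layer feedforward network is trained by backpropagation to map the compressed code for an active sentence onto the code for its passive counterpart.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for RAAM-compressed sentence representations: in the actual
# model these would come from a trained Recursive Auto-Associative Memory,
# not from random vectors.
n_pairs, dim, hidden = 50, 13, 26
active = rng.uniform(-1, 1, (n_pairs, dim))
passive = rng.uniform(-1, 1, (n_pairs, dim))

# One-hidden-layer transformation network trained by plain backprop
# to map each active-voice code onto the corresponding passive-voice code.
W1 = rng.normal(0, 0.3, (dim, hidden))
W2 = rng.normal(0, 0.3, (hidden, dim))
lr = 0.1

for epoch in range(2000):
    h = np.tanh(active @ W1)          # hidden-layer activations
    out = np.tanh(h @ W2)             # predicted passive representation
    err = out - passive
    # Backpropagate the squared error through both layers.
    d_out = err * (1 - out ** 2)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out / n_pairs
    W1 -= lr * active.T @ d_h / n_pairs

print("final MSE:", float(np.mean(err ** 2)))
```

The point of the paper is that such transformations are systematic, carrying over to sentence representations the network has not been trained on; random stand-in vectors cannot show that, so the sketch only illustrates the training setup.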

Connectionism and Compositionality: Why Fodor and Pylyshyn Were Wrong.

I point out some structural problems with Fodor and Pylyshyn's arguments against connectionism, and trace these to an underestimation of the role of distributed representation. I discuss some empirical results (from the paper above) that have some bearing on Fodor and Pylyshyn's argument. This paper was published in Philosophical Psychology in 1993 (an earlier version was in the 1990 Proceedings of the Cognitive Science Society). See Murat Aydede's "Connectionism and the Language of Thought" for some discussion.

The Evolution of Learning: An Experiment in Genetic Connectionism.

I combine genetic algorithms and neural networks to show how learning mechanisms might evolve in a population of organisms that initially have no capacity to learn. The dynamics of a neural network's development over time are specified in a genome, and phenotypes are selected for their ability to learn various tasks across a lifetime. Over many generations, sophisticated learning mechanisms evolve, on occasion including the well-known delta rule. This paper appeared in the Proceedings of the 1990 Connectionist Summer School Workshop.
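
The setup can be sketched roughly as follows (an illustrative toy, not the paper's actual encoding or parameters): a genome encodes a learning rate and the coefficients of a local weight-update rule built from products of the input activity x, the output y, and the training signal t; fitness is how well a one-layer network trained with that rule learns random linearly separable tasks over a "lifetime". The delta rule corresponds to one particular coefficient pattern, delta_w = eta * x * (t - y).

```python
import numpy as np

rng = np.random.default_rng(1)

def lifetime_fitness(genome, n_tasks=5, n_steps=30):
    # Train a one-unit network on random linearly separable tasks using the
    # learning rule encoded in the genome; fitness is accuracy after learning.
    eta, k = genome[0], genome[1:]
    score = 0.0
    for _ in range(n_tasks):
        true_w = rng.normal(size=4)              # defines the task to be learned
        w = np.zeros(4)
        for _ in range(n_steps):
            x = rng.choice([0.0, 1.0], size=4)
            t = 1.0 if true_w @ x > 0 else 0.0   # teacher signal
            y = 1.0 if w @ x > 0 else 0.0        # network's current output
            # Local rule: the weight change depends only on x, y, t
            # and the genetically specified coefficients.
            w += eta * (k[0]*x*y + k[1]*x*t + k[2]*y*t + k[3]*x*y*t)
        # Evaluate on fresh patterns after "lifetime" learning.
        xs = rng.choice([0.0, 1.0], size=(20, 4))
        ts = (xs @ true_w > 0).astype(float)
        ys = (xs @ w > 0).astype(float)
        score += np.mean(ys == ts)
    return score / n_tasks

# Simple evolutionary loop (truncation selection plus mutation) over
# genomes [eta, k1, k2, k3, k4].
pop = rng.normal(0, 1, (40, 5))
for gen in range(30):
    fit = np.array([lifetime_fitness(g) for g in pop])
    parents = pop[np.argsort(-fit)[:10]]
    pop = parents[rng.integers(0, 10, size=40)] + rng.normal(0, 0.1, (40, 5))
    pop[0] = parents[0]                          # keep the best genome (elitism)
print("best genome:", pop[0])
```

With this encoding, a genome whose x*t and x*y coefficients are equal and opposite (and whose other coefficients are near zero) behaves as the delta rule, which is the kind of outcome the paper reports emerging on occasion.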

Subsymbolic Computation and the Chinese Room.

In this paper I analyze the distinction between symbolic and subsymbolic computation, and use this to shed some light on Searle's "Chinese Room" argument and the associated argument that "syntax is not sufficient for semantics". I argue that subsymbolic models may be less vulnerable to this argument. I no longer think this paper is very good, but perhaps the analysis of symbolic vs. subsymbolic computation is worthwhile. It appeared in The Symbolic and Connectionist Paradigms: Closing the Gap, edited by John Dinsmore, published by Lawrence Erlbaum in 1991.

