Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.
6.2e. Computation and Representation, Misc

Akman, Varol & ten Hagen, Paul J. W. (1989). The power of physical representations. AI Magazine 10 (3):49-65.   (Cited by 10 | Google | More links)
Bailey, Andrew R. (1994). Representations versus regularities: Does computation require representation? Eidos 12 (1):47-58.   (Google)
Chalmers, David J.; French, Robert M. & Hofstadter, Douglas R. (1992). High-level perception, representation, and analogy: A critique of artificial intelligence methodology. Journal of Experimental and Theoretical Artificial Intelligence 4 (3):185-211.   (Cited by 123 | Google | More links)
Abstract: High-level perception, the process of making sense of complex data at an abstract, conceptual level, is fundamental to human cognition. Through high-level perception, chaotic environmental stimuli are organized into the mental representations that are used throughout cognitive processing. Much work in traditional artificial intelligence has ignored the process of high-level perception by starting with hand-coded representations. In this paper, we argue that this dismissal of perceptual processes leads to distorted models of human cognition. We examine some existing artificial-intelligence models, notably BACON, a model of scientific discovery, and the Structure-Mapping Engine, a model of analogical thought, and argue that these are flawed precisely because they downplay the role of high-level perception. Further, we argue that perceptual processes cannot be separated from other cognitive processes even in principle, and therefore that traditional artificial-intelligence models cannot be defended by supposing the existence of a "representation module" that supplies representations ready-made. Finally, we describe a model of high-level perception and analogical thought in which perceptual processing is integrated with analogical mapping, leading to the flexible build-up of representations appropriate to a given context.
Dartnall, Terry (2000). Reverse psychologism, cognition and content. Minds and Machines 10 (1):31-52.   (Cited by 32 | Google | More links)
Abstract: The confusion between cognitive states and the content of cognitive states that gives rise to psychologism also gives rise to reverse psychologism. Weak reverse psychologism says that we can study cognitive states by studying content – for instance, that we can study the mind by studying linguistics or logic. This attitude is endemic in cognitive science and linguistic theory. Strong reverse psychologism says that we can generate cognitive states by giving computers representations that express the content of cognitive states and that play a role in causing appropriate behaviour. This gives us strong representational, classical AI (REPSCAI), and I argue that it cannot succeed. This is not, as Searle claims in his Chinese Room Argument, because syntactic manipulation cannot generate content. Syntactic manipulation can generate content, and this is abundantly clear in the Chinese Room scenario. REPSCAI cannot succeed because inner content is not sufficient for cognition, even when the representations that carry the content play a role in generating appropriate behaviour.
Dietrich, Eric (1988). Computers, intentionality, and the new dualism. Computers and Philosophy Newsletter.   (Google)
Dreyfus, Hubert L. (1979). A framework for misrepresenting knowledge. In Martin Ringle (ed.), Philosophical Perspectives in Artificial Intelligence. Humanities Press.   (Cited by 7 | Annotation | Google)
Echavarria, Ricardo Restrepo (2009). Russell's structuralism and the supposed death of computational cognitive science. Minds and Machines 19 (2).   (Google)
Abstract: John Searle believes that computational properties are purely formal and that, consequently, computational properties are not intrinsic, empirically discoverable, or causal; and therefore, that an entity’s having certain computational properties could not be sufficient for its having certain mental properties. To make his case, Searle employs an argument that had been used before him by Max Newman against Russell’s structuralism, one that Russell himself considered fatal to his own position. This paper formulates a not-so-explored version of Searle’s problem with computational cognitive science, and refutes it by suggesting how our understanding of computation is far from implying the structuralism Searle vitally attributes to it. On the way, I formulate and argue for a thesis that strengthens Newman’s case against Russell’s structuralism, and thus raises the apparent risk for computational cognitive science too.
Fields, Christopher A. (1994). Real machines and virtual intentionality: An experimentalist takes on the problem of representational content. In Eric Dietrich (ed.), Thinking Computers and Virtual Persons. Academic Press.   (Google)
Franklin, James, The representation of context: Ideas from artificial intelligence.   (Google)
Abstract: To move beyond vague platitudes about the importance of context in legal reasoning or natural language understanding, one must take account of ideas from artificial intelligence on how to represent context formally. Work on topics like prior probabilities, the theory-ladenness of observation, and encyclopedic knowledge for disambiguation in language translation and pathology test diagnosis has produced a body of knowledge on how to represent context in artificial intelligence applications.
Fulda, Joseph S. (2000). The logic of “improper cross”. Artificial Intelligence and Law 8 (4):337-341.   (Google)
Garzon, Francisco Calvo & Rodriguez, Angel Garcia (2009). Where is cognitive science heading? Minds and Machines.   (Google)
Abstract: According to Ramsey (Representation reconsidered, Cambridge University Press, New York, 2007), only classical cognitive science, with the related notions of input–output and structural representations, meets the job description challenge (the challenge to show that a certain structure or process serves a representational role at the subpersonal level). By contrast, connectionism and other nonclassical models, insofar as they exploit receptor and tacit notions of representation, are not genuinely representational. As a result, Ramsey submits, cognitive science is taking a U-turn from representationalism back to behaviourism, thus presupposing that (1) the emergence of cognitivism capitalized on the concept of representation, and that (2) the materialization of nonclassical cognitive science involves a return to some form of pre-cognitivist behaviourism. We argue against both (1) and (2), by questioning Ramsey’s divide between classical and representational, versus nonclassical and nonrepresentational, cognitive models. For, firstly, connectionist and other nonclassical accounts have the resources to exploit the notion of a structural isomorphism, like classical accounts (the beefing-up strategy); and, secondly, insofar as input–output and structural representations refer to a cognitive agent, classical explanations fail to meet the job description challenge (the deflationary strategy). Both strategies work independently of each other: if the deflationary strategy succeeds, contra (1), cognitivism has failed to capitalize on the relevant concept of representation; if the beefing-up strategy is sound, contra (2), the return to a pre-cognitivist era cancels out.
Guvenir, Halil A. & Akman, Varol (1992). Problem representation for refinement. Minds and Machines 2 (3):267-282.   (Google | More links)
Abstract: In this paper we attempt to develop a problem representation technique which enables the decomposition of a problem into subproblems such that their solution in sequence constitutes a strategy for solving the problem. An important issue here is that the subproblems generated should be easier than the main problem. We propose to represent a set of problem states by a statement which is true for all the members of the set. A statement itself is just a set of atomic statements which are binary predicates on state variables. Then, the statement representing the set of goal states can be partitioned into subsets, each of which becomes a subgoal of the resulting strategy. The techniques involved in partitioning a goal into its subgoals are presented with examples.
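To make the partitioning idea in this abstract concrete, here is a minimal sketch in Python. It assumes a toy encoding not given in the abstract: a goal statement is a set of (predicate, variable, value) atoms, and a naive round-robin split stands in for the paper's refinement heuristics, which are meant to produce subproblems easier than the main problem.

from typing import FrozenSet, List, Tuple

# A statement is a set of atomic statements: (predicate, variable, value)
# triples, each a binary predicate on a state variable, true of every state
# in the set it represents.
Atom = Tuple[str, str, str]
Goal = FrozenSet[Atom]

def partition_goal(goal: Goal, k: int = 2) -> List[Goal]:
    """Split a goal statement into k disjoint subsets of atoms; solving the
    resulting subgoals in sequence constitutes a strategy for the goal.
    The round-robin split below is an arbitrary stand-in for the paper's
    refinement heuristics."""
    buckets: List[set] = [set() for _ in range(k)]
    for i, atom in enumerate(sorted(goal)):
        buckets[i % k].add(atom)
    return [frozenset(b) for b in buckets if b]

# Hypothetical example: a puzzle-style goal over tile-position variables.
goal: Goal = frozenset({("at", "tile1", "pos1"), ("at", "tile2", "pos2"),
                        ("at", "tile3", "pos3"), ("at", "tile4", "pos4")})
for step, subgoal in enumerate(partition_goal(goal), start=1):
    print("subgoal", step, ":", sorted(subgoal))

In the paper's setting, choosing a good partition is the heart of the method; the sketch only shows the data flow from one goal statement to an ordered list of subgoals.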
Haugeland, John (1981). Semantic engines: An introduction to mind design. In J. Haugeland (ed.), Mind Design. MIT Press.   (Cited by 92 | Google)
Marsh, Leslie (2005). Review Essay: Andy Clark's Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. Cognitive Systems Research 6:405-409.   (Google)
Abstract: The notion of the cyborg has exercised the popular imagination for almost two hundred years. In very general terms, the idea that a living entity can be a hybrid of both organic matter and mechanical parts, and for all intents and purposes be seamlessly functional and self-regulating, was prefigured in literary works such as Shelley's Frankenstein (1816/18) and Samuel Butler's Erewhon (1872). This notion of hybridism has been a staple theme of 20th century science fiction writing, television programmes and the cinema. For the most part, these works trade on a deep sense of unease we have about our personal identity – how could some non-organic matter to which I have so little conscious access count as a bona fide part of me? Cognitive scientist and philosopher Andy Clark picks up this general theme and presents an empirical and philosophical case for the following inextricably linked theses.
Prem, Erich (2000). Changes of representational AI concepts induced by embodied autonomy. Communication and Cognition-Artificial Intelligence 17 (3-4):189-208.   (Cited by 4 | Google)
Robinson, William S. (1995). Direct representation. Philosophical Studies 80 (3):305-22.   (Cited by 3 | Annotation | Google | More links)
Shani, Itay (2005). Computation and intentionality: A recipe for epistemic impasse. Minds and Machines 15 (2):207-228.   (Cited by 1 | Google | More links)
Abstract: Searle’s celebrated Chinese room thought experiment was devised as an attempted refutation of the view that appropriately programmed digital computers literally are the possessors of genuine mental states. A standard reply to Searle, known as the “robot reply” (which, I argue, reflects the dominant approach to the problem of content in contemporary philosophy of mind), consists of the claim that the problem he raises can be solved by supplementing the computational device with some “appropriate” environmental hookups. I argue not only that Searle himself casts doubt on the adequacy of this idea by applying to it a slightly revised version of his original argument, but that the weakness of this encoding-based approach to the problem of intentionality can also be exposed from a somewhat different angle. Capitalizing on the work of several authors and, in particular, on that of psychologist Mark Bickhard, I argue that the existence of symbol-world correspondence is not a property that the cognitive system itself can appreciate, from its own perspective, by interacting with the symbol and, therefore, not a property that can constitute intrinsic content. The foundational crisis to which Searle alluded is, I conclude, very much alive.
Stanley, Jason (2005). Review of Robyn Carston, Thoughts and Utterances. Mind and Language 20 (3).   (Google)
Abstract: Relevance Theory is the influential theory of linguistic interpretation first championed by Dan Sperber and Deirdre Wilson. Relevance theorists have made important contributions to our understanding of a wide range of constructions, especially constructions that tend to receive less attention in semantics and philosophy of language. But advocates of Relevance Theory also have had a tendency to form a rather closed community, with an unwillingness to translate their own special vocabulary and distinctions into more neutral vernacular. Since Robyn Carston has long been the advocate of Relevance Theory most able to communicate with a broader philosophical and linguistic audience, it is with particular interest that the emergence of her long-awaited volume, Thoughts and Utterances, has been greeted. The volume exhibits many of the strengths, but also some of the weaknesses, of this well-known program.
Thornton, Chris (1997). Brave mobots use representation: Emergence of representation in fight-or-flight learning. Minds and Machines 7 (4):475-494.   (Cited by 10 | Google | More links)
Abstract: The paper uses ideas from Machine Learning, Artificial Intelligence and Genetic Algorithms to provide a model of the development of a fight-or-flight response in a simulated agent. The modelled development process involves (simulated) processes of evolution, learning and representation development. The main value of the model is that it provides an illustration of how simple learning processes may lead to the formation of structures which can be given a representational interpretation. It also shows how these may form the infrastructure for closely-coupled agent/environment interaction.