MindPapers is now part of PhilPapers: online research in philosophy, a new service with many more features.
Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.

6.2b. Computational Semantics

Akman, Varol (1998). Situations and artificial intelligence. Minds and Machines 8 (4):475-477.   (Google)
Blackburn, Patrick & Bos, Johan (2003). Computational semantics. Theoria: Revista de Teoría, Historia y Fundamentos de la Ciencia 18 (1):27-45.   (Google)
Abstract: In this article we discuss what constitutes a good choice of semantic representation, compare different approaches to constructing semantic representations for fragments of natural language, and give an overview of recent methods for employing inference engines for natural language understanding tasks.
Blackburn, Patrick & Kohlhase, Michael (2004). Inference and computational semantics. Journal of Logic, Language and Information 13 (2).   (Google)
Blackburn, Patrick & Bos, Johan (2005). Representation and Inference for Natural Language: A First Course in Computational Semantics. Center for the Study of Language and Information.   (Google)
Abstract: How can computers distinguish the coherent from the unintelligible, recognize new information in a sentence, or draw inferences from a natural language passage? Computational semantics is an exciting new field that seeks answers to these questions, and this volume is the first textbook wholly devoted to this growing subdiscipline. The book explains the underlying theoretical issues and fundamental techniques for computing semantic representations for fragments of natural language. This volume will be an essential text for computer scientists, linguists, and anyone interested in the development of computational semantics.
Bogdan, Radu J. (1994). By way of means and ends. In Radu J. Bogdan (ed.), Grounds for Cognition. Lawrence Erlbaum.   (Google)
Abstract: This chapter provides the teleological foundations for our analysis of guidance to goal. Its objective is to ground goal-directedness genetically. The basic suggestion is this. Organisms are small things, with few energy resources and puny physical means, battling a ruthless physical and biological nature. How do they manage to survive and multiply? CLEVERLY, BY ORGANIZING
Bos, Johan (2004). Computational semantics in discourse: Underspecification, resolution, and inference. Journal of Logic, Language and Information 13 (2).   (Google)
Abstract: In this paper I introduce a formalism for natural language understanding based on a computational implementation of Discourse Representation Theory. The formalism covers a wide variety of semantic phenomena (including scope and lexical ambiguities, anaphora and presupposition), is computationally attractive, and has a genuine inference component. It combines a well-established linguistic formalism (DRT) with advanced techniques to deal with ambiguity (underspecification), and is innovative in the use of first-order theorem proving techniques. The architecture of the formalism for natural language understanding that I advocate consists of three levels of processing: underspecification, resolution, and inference. Each of these levels has a distinct function and therefore employs a different kind of semantic representation. The mappings between these different representations define the interfaces between the levels.
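The three-level architecture this abstract describes (underspecification, then resolution, then inference) can be caricatured in a few lines of Python. This is only a toy sketch, not Bos's implementation: the representations and names below are invented for illustration, and the inference level, which in the real system calls a first-order theorem prover, is reduced to a comment.

```python
from itertools import permutations

# Level 1 -- underspecification: one compact form that leaves
# quantifier scope open, e.g. for "Every man loves a woman".
underspecified = {
    "quantifiers": ["every(man, x)", "a(woman, y)"],  # scope order unresolved
    "body": "loves(x, y)",
}

# Level 2 -- resolution: enumerate the fully scoped readings.
def resolve(form):
    for order in permutations(form["quantifiers"]):
        yield list(order) + [form["body"]]

readings = list(resolve(underspecified))

# Level 3 -- inference: a real system hands each reading to a
# first-order theorem prover; here we merely display them.
for reading in readings:
    print(" > ".join(reading))
```

The two printed readings correspond to the wide-scope-"every" and wide-scope-"a" interpretations; keeping one underspecified form until resolution avoids building both from scratch.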
Charniak, Eugene & Wilks, Yorick (eds.) (1976). Computational Semantics: An Introduction to Artificial Intelligence and Natural Language Comprehension. North-Holland.   (Google)
Szymanik, Jakub & Zajenkowski, Marcin (2009). Comprehension of Simple Quantifiers: Empirical Evaluation of a Computational Model. Cognitive Science: A Multidisciplinary Journal 34 (3):521-532.   (Google)
Abstract: We examine the verification of simple quantifiers in natural language from a computational model perspective. We refer to previous neuropsychological investigations of the same problem and suggest extending their experimental setting. Moreover, we give some direct empirical evidence linking computational complexity predictions with cognitive reality.
In the empirical study we compare the time needed for understanding different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite automata and push-down automata is psychologically relevant. Our research improves upon the hypotheses and explanatory power of recent neuroimaging studies, and provides evidence for the claim that human linguistic abilities are constrained by computational complexity.
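The automata-theoretic distinction invoked in this abstract is easy to illustrate. A minimal sketch (not the authors' code; function names are invented): verifying "every A is B" needs only a two-state finite automaton that scans for a counterexample, whereas verifying "most As are B" requires keeping an unbounded count, memory that no finite automaton can supply and that a push-down automaton models.

```python
def verify_every(objects):
    """'Every A is B': a 2-state finite automaton; reject on any counterexample."""
    state = "accept"
    for is_b in objects:          # each item: True iff this A is a B
        if not is_b:
            state = "reject"      # absorbing reject state
    return state == "accept"

def verify_most(objects):
    """'Most As are B': needs an unbounded counter (push-down-style memory)."""
    balance = 0                   # Bs seen minus non-Bs seen
    for is_b in objects:
        balance += 1 if is_b else -1
    return balance > 0            # strictly more Bs than non-Bs
```

The prediction tested in the paper is, roughly, that quantifiers of the second kind (counting quantifiers like "most") cost measurably more processing time than the first kind.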
Dennett, Daniel C. (2003). The Baldwin Effect: A Crane, Not a Skyhook. In Bruce H. Weber & D.J. Depew (eds.), Evolution and Learning: The Baldwin Effect Reconsidered. MIT Press.   (Cited by 6 | Google | More links)
Abstract: In 1991, I included a brief discussion of the Baldwin effect in my account of the evolution of human consciousness, thinking I was introducing to non-specialist readers a little-appreciated, but no longer controversial, wrinkle in orthodox neo-Darwinism. I had thought that Hinton and Nowlan (1987) and Maynard Smith (1987) had shown clearly and succinctly how and why it worked, and restored the neglected concept to grace. Here is how I put it then
Fodor, Jerry A. (1979). In reply to Philip Johnson-Laird's "What's wrong with grandma's guide to procedural semantics: A reply to Jerry Fodor". Cognition 7 (March):93-95.   (Google)
Fodor, Jerry A. (1978). Tom Swift and his procedural grandmother. Cognition 6 (September):229-47.   (Cited by 24 | Annotation | Google)
Hadley, Robert F. (1990). Truth conditions and procedural semantics. In Philip P. Hanson (ed.), Information, Language and Cognition. University of British Columbia Press.   (Cited by 2 | Google)
Harnad, Stevan (2002). Darwin, Skinner, Turing and the mind. Magyar Pszichologiai Szemle 57 (4):521-528.   (Google | More links)
Abstract: Darwin differs from Newton and Einstein in that his ideas do not require a complicated or deep mind to understand them, and perhaps did not even require such a mind in order to generate them in the first place. It can be explained to any school-child (as Newtonian mechanics and Einsteinian relativity cannot) that living creatures are just Darwinian survival/reproduction machines. They have whatever structure they have through a combination of chance and its consequences: Chance causes changes in the genetic blueprint from which organisms' bodies are built, and if those changes are more successful in helping their owners survive and reproduce than their predecessors or their rivals, then, by definition, those changes are reproduced, and thereby become more prevalent in succeeding generations: Whatever survives/reproduces better survives/reproduces better. That is the tautological force that shaped us
Johnson-Laird, Philip N. (1977). Procedural semantics. Cognition 5:189-214.   (Cited by 37 | Google)
Johnson-Laird, Philip N. (1978). What's wrong with grandma's guide to procedural semantics: A reply to Jerry Fodor. Cognition 6 (September):249-61.   (Cited by 1 | Google)
McDermott, Drew (1978). Tarskian semantics, or no notation without denotation. Cognitive Science 2:277-82.   (Cited by 33 | Annotation | Google | More links)
Papineau, David (2006). The cultural origins of cognitive adaptations. Royal Institute of Philosophy Supplement.   (Google | More links)
Abstract: According to an influential view in contemporary cognitive science, many human cognitive capacities are innate. The primary support for this view comes from ‘poverty of stimulus’ arguments. In general outline, such arguments contrast the meagre informational input to cognitive development with its rich informational output. Consider the ease with which humans acquire languages, become facile at attributing psychological states (‘folk psychology’), gain knowledge of biological kinds (‘folk biology’), or come to understand basic physical processes (‘folk physics’). In all these cases, the evidence available to a growing child is far too thin and noisy for it to be plausible that the underlying principles involved are derived from general learning mechanisms. The only alternative hypothesis seems to be that the child’s grasp of these principles is innate. (Cf. Laurence and Margolis, 2001.)
Perlis, Donald R. (1991). Putting one's foot in one's head -- part 1: Why. Noûs 25 (September):435-55.   (Cited by 12 | Google | More links)
Perlis, Donald R. (1994). Putting one's foot in one's head -- part 2: How. In Eric Dietrich (ed.), Thinking Computers and Virtual Persons. Academic Press.   (Google)
Rapaport, William J. (1988). Syntactic semantics: Foundations of computational natural language understanding. In James H. Fetzer (ed.), Aspects of AI. Kluwer.   (Cited by 44 | Google)
Rapaport, William J. (1995). Understanding understanding: Syntactic semantics and computational cognition. Philosophical Perspectives 9:49-88.   (Cited by 22 | Google | More links)
Smith, Brian Cantwell (1988). On the semantics of clocks. In James H. Fetzer (ed.), Aspects of AI. Kluwer.   (Cited by 7 | Google)
Smith, Brian Cantwell (1987). The correspondence continuum. CSLI Report 87.   (Cited by 34 | Google)
Szymanik, Jakub & Zajenkowski, Marcin (2009). Understanding Quantifiers in Language. In N. A. Taatgen & H. van Rijn (eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society.   (Google)
Abstract: We compare the time needed for understanding different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite automata and push-down automata is psychologically relevant. Our research improves upon the hypotheses and explanatory power of recent neuroimaging studies, and provides evidence for the claim that human linguistic abilities are constrained by computational complexity.
Tin, Erkan & Akman, Varol (1994). Computational situation theory. ACM SIGART Bulletin 5 (4):4-17.   (Cited by 15 | Google | More links)
Abstract: Situation theory has been developed over the last decade, and various versions of the theory have been applied to a number of linguistic issues. However, not much work has been done in regard to its computational aspects. In this paper, we review the existing approaches towards 'computational situation theory', with considerable emphasis on our own research.
Wilks, Yorick (1990). Form and content in semantics. Synthese 82 (3):329-51.   (Cited by 10 | Annotation | Google | More links)
Abstract: This paper continues a strain of intellectual complaint against the presumptions of certain kinds of formal semantics (the qualification is important) and their bad effects on those areas of artificial intelligence concerned with machine understanding of human language. After some discussion of the use of the term epistemology in artificial intelligence, the paper takes as a case study the various positions held by McDermott on these issues and concludes, reluctantly, that although he has reversed himself on the issue, there was no time at which he was right.
Wilks, Yorick (1982). Some thoughts on procedural semantics. In W. Lehnert (ed.), Strategies for Natural Language Processing. Lawrence Erlbaum.   (Cited by 12 | Google)
Winograd, Terry (1985). Moving the semantic fulcrum. Linguistics and Philosophy 8 (February):91-104.   (Cited by 16 | Google | More links)
Woods, William A. (1986). Problems in procedural semantics. In Zenon W. Pylyshyn & William Demopoulos (eds.), Meaning and Cognitive Structure. Ablex.   (Cited by 2 | Annotation | Google)
Woods, William A. (1981). Procedural semantics as a theory of meaning. In A. Joshi, Bonnie L. Webber & Ivan A. Sag (eds.), Elements of Discourse Understanding. Cambridge University Press.   (Cited by 33 | Google)