Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.

6.1b. Godelian arguments (Godelian arguments on PhilPapers)

Benacerraf, Paul (1967). God, the devil, and Godel. The Monist 51 (January):9-32.   (Annotation | Google)
Bojadziev, Damjan (1997). Mind versus Godel. In Matjaz Gams, Marcin Paprzycki & Xindong Wu (eds.), Mind Versus Computer. IOS Press.   (Cited by 1 | Google | More links)
Bowie, G. Lee (1982). Lucas' number is finally up. Journal of Philosophical Logic 11 (August):279-85.   (Cited by 10 | Annotation | Google | More links)
Boyer, David L. (1983). J. R. Lucas, Kurt Godel, and Fred Astaire. Philosophical Quarterly 33 (April):147-59.   (Annotation | Google | More links)
Bringsjord, Selmer & Xiao, H. (2000). A refutation of Penrose's new Godelian case against the computational conception of mind. Journal of Experimental and Theoretical Artificial Intelligence 12.   (Google)
Chari, C. T. K. (1963). Further comments on minds, machines and Godel. Philosophy 38 (April):175-8.   (Annotation | Google)
Chalmers, David J. (1996). Minds, machines, and mathematics. Psyche 2:11-20.   (Cited by 17 | Google | More links)
Abstract: In his stimulating book SHADOWS OF THE MIND, Roger Penrose presents arguments, based on Gödel's theorem, for the conclusion that human thought is uncomputable. There are actually two separate arguments in Penrose's book. The second has been widely ignored, but seems to me to be much more interesting and novel than the first. I will address both forms of the argument in some detail. Toward the end, I will also comment on Penrose's proposals for a "new science of consciousness"
Chihara, C. (1972). On alleged refutations of mechanism using Godel's incompleteness results. Journal of Philosophy 69 (September):507-26.   (Cited by 9 | Annotation | Google | More links)
Coder, David (1969). Godel's theorem and mechanism. Philosophy 44 (September):234-7.   (Annotation | Google)
Copeland, Jack (1998). Turing's o-machines, Searle, Penrose, and the brain. Analysis 58 (2):128-138.   (Cited by 15 | Google | More links)
Abstract: In his PhD thesis (1938) Turing introduced what he described as 'a new kind of machine'. He called these 'O-machines'. The present paper employs Turing's concept against a number of currently fashionable positions in the philosophy of mind
Dennett, Daniel C. (1989). Murmurs in the cathedral: Review of R. Penrose, The Emperor's New Mind. Times Literary Supplement, September 29.   (Cited by 5 | Google)
Abstract: The idea that a computer could be conscious--or equivalently, that human consciousness is the effect of some complex computation mechanically performed by our brains--strikes some scientists and philosophers as a beautiful idea. They find it initially surprising and unsettling, as all beautiful ideas are, but the inevitable culmination of the scientific advances that have gradually demystified and unified the material world. The ideologues of Artificial Intelligence (AI) have been its most articulate supporters. To others, this idea is deeply repellent: philistine, reductionistic (in some bad sense), as incredible as it is offensive. John Searle's attack on "strong AI" is the best known expression of this view, but others in the same camp, liking Searle's destination better than his route, would dearly love to see a principled, scientific argument showing that strong AI is impossible. Roger Penrose has set out to provide just such an argument
Dennett, Daniel C. (1978). The abilities of men and machines. In Brainstorms. MIT Press.   (Cited by 3 | Annotation | Google)
Edis, Taner (1998). How Godel's theorem supports the possibility of machine intelligence. Minds and Machines 8 (2):251-262.   (Google | More links)
Abstract:   Gödel's Theorem is often used in arguments against machine intelligence, suggesting humans are not bound by the rules of any formal system. However, Gödelian arguments can be used to support AI, provided we extend our notion of computation to include devices incorporating random number generators. A complete description scheme can be given for integer functions, by which nonalgorithmic functions are shown to be partly random. Not being restricted to algorithms can be accounted for by the availability of an arbitrary random function. Humans, then, might not be rule-bound, but Gödelian arguments also suggest how the relevant sort of nonalgorithmicity may be trivially made available to machines
Feferman, S. (1996). Penrose's Godelian argument. Psyche 2:21-32.   (Google)
Abstract: In his book Shadows of the Mind: A search for the missing science of consciousness [SM below], Roger Penrose has turned in another bravura performance, the kind we have come to expect ever since The Emperor’s New Mind [ENM] appeared. In the service of advancing his deep convictions and daring conjectures about the nature of human thought and consciousness, Penrose has once more drawn a wide swath through such topics as logic, computation, artificial intelligence, quantum physics and the neuro-physiology of the brain, and has produced along the way many gems of exposition of difficult mathematical and scientific ideas, without condescension, yet which should be broadly appealing. While the aims and a number of the topics in SM are the same as in ENM, the focus now is much more on the two axes that Penrose grinds in earnest. Namely, in the first part of SM he argues anew and at great length against computational models of the mind and more specifically against any account of mathematical thought in computational terms. Then in the second part, he argues that there must be a scientific account of consciousness but that will require a (still to be found) non-computational extension or modification of present-day quantum physics
Gaifman, H. (2000). What Godel's incompleteness result does and does not show. Journal of Philosophy 97 (8):462-471.   (Cited by 3 | Google | More links)
Abstract: In a recent paper S. McCall adds another link to a chain of attempts to enlist Gödel’s incompleteness result as an argument for the thesis that human reasoning cannot be construed as being carried out by a computer. McCall’s paper is undermined by a technical oversight. My concern however is not with the technical point. The argument from Gödel’s result to the no-computer thesis can be made without following McCall’s route; it is then straighter and more forceful. Yet the argument fails in an interesting and revealing way. And it leaves a remainder: if some computer does in fact simulate all our mathematical reasoning, then, in principle, we cannot fully grasp how it works. Gödel’s result also points out a certain essential limitation of self-reflection. The resulting picture parallels, not accidentally, Davidson’s view of psychology, as a science that in principle must remain “imprecise”, not fully spelt out. What is intended here by “fully grasp”, and how all this is related to self-reflection, will become clear at the end of this comment
George, A. & Velleman, Daniel J. (2000). Leveling the playing field between mind and machine: A reply to McCall. Journal of Philosophy 97 (8):456-452.   (Cited by 3 | Google | More links)
George, F. H. (1962). Minds, machines and Godel: Another reply to Mr. Lucas. Philosophy 37 (January):62-63.   (Annotation | Google)
Gertler, Brie (2004). Simulation theory on conceptual grounds. Protosociology 20:261-284.   (Google)
Abstract: I will present a conceptual argument for a simulationist answer to (2). Given that our conception of mental states is employed in attributing mental states to others, a simulationist answer to (2) supports a simulationist answer to (1). I will not address question (3). Answers to (1) and (2) do not yield an answer to (3), since (1) and (2) concern only our actual practices and concepts. For instance, an error theory about (1) and (2) would say that our practices and concepts manifest a mistaken view about the real nature of the mental. Finally, I will not address question (2a), which is an empirical question and so is not immediately relevant to the conceptual argument that is of concern here
Good, I. J. (1969). Godel's theorem is a red Herring. British Journal for the Philosophy of Science 19 (February):357-8.   (Cited by 8 | Annotation | Google | More links)
Good, I. J. (1967). Human and machine logic. British Journal for the Philosophy of Science 18 (August):145-6.   (Cited by 7 | Annotation | Google | More links)
Gordon, Robert M. (online). Folk Psychology As Mental Simulation. Stanford Encyclopedia of Philosophy.   (Cited by 8 | Google)
Abstract: by, or is otherwise relevant to the seminar "Folk Psychology vs. Mental Simulation: How Minds Understand Minds," a National
Grush, Rick & Churchland, P. (1995). Gaps in Penrose's toiling. In Thomas Metzinger (ed.), Conscious Experience. Ferdinand Schoningh.   (Google | More links)
Abstract: Using the Gödel Incompleteness Result for leverage, Roger Penrose has argued that the mechanism for consciousness involves quantum gravitational phenomena, acting through microtubules in neurons. We show that this hypothesis is implausible. First, the Gödel Result does not imply that human thought is in fact non algorithmic. Second, whether or not non algorithmic quantum gravitational phenomena actually exist, and if they did how that could conceivably implicate microtubules, and if microtubules were involved, how that could conceivably implicate consciousness, is entirely speculative. Third, cytoplasmic ions such as calcium and sodium are almost certainly present in the microtubule pore, barring the quantum mechanical effects Penrose envisages. Finally, physiological evidence indicates that consciousness does not directly depend on microtubule properties in any case, rendering doubtful any theory according to which consciousness is generated in the microtubules
Hadley, Robert F. (1987). Godel, Lucas, and mechanical models of mind. Computational Intelligence 3:57-63.   (Cited by 1 | Annotation | Google | More links)
Hanson, William H. (1971). Mechanism and Godel's theorem. British Journal for the Philosophy of Science 22 (February):9-16.   (Annotation | Google | More links)
Hofstadter, Douglas R. (1979). Godel, Escher, Bach: An Eternal Golden Braid. Basic Books.   (Cited by 65 | Annotation | Google | More links)
Hutton, A. (1976). This Godel is killing me. Philosophia 3 (March):135-44.   (Annotation | Google)
Irvine, Andrew D. (1983). Lucas, Lewis, and mechanism -- one more time. Analysis 43 (March):94-98.   (Annotation | Google)
Jacquette, Dale (1987). Metamathematical criteria for minds and machines. Erkenntnis 27 (July):1-16.   (Cited by 3 | Annotation | Google | More links)
Ketland, Jeffrey & Raatikainen, Panu (online). Truth and provability again.   (Google)
King, D. (1996). Is the human mind a Turing machine? Synthese 108 (3):379-89.   (Google | More links)
Abstract:   In this paper I discuss the topics of mechanism and algorithmicity. I emphasise that a characterisation of algorithmicity such as the Turing machine is iterative; and I argue that if the human mind can solve problems that no Turing machine can, the mind must depend on some non-iterative principle — in fact, Cantor's second principle of generation, a principle of the actual infinite rather than the potential infinite of Turing machines. But as there has been theorisation that all physical systems can be represented by Turing machines, I investigate claims that seem to contradict this: specifically, claims that there are noncomputable phenomena. One conclusion I reach is that if it is believed that the human mind is more than a Turing machine, a belief in a kind of Cartesian dualist gulf between the mental and the physical is concomitant
Kirk, Robert E. (1986). Mental machinery and Godel. Synthese 66 (March):437-452.   (Annotation | Google)
Laforte, Geoffrey; Hayes, Pat & Ford, Kenneth M. (1998). Why Godel's theorem cannot refute computationalism: A reply to Penrose. Artificial Intelligence 104.   (Google)
Leslie, Alan M.; Nichols, Shaun; Stich, Stephen P. & Klein, David B. (1996). Varieties of off-line simulation. In P. Carruthers & P. Smith (eds.), Theories of Theories of Mind. Cambridge University Press.   (Google)
Abstract: In the last few years, off-line simulation has become an increasingly important alternative to standard explanations in cognitive science. The contemporary debate began with Gordon (1986) and Goldman's (1989) off-line simulation account of our capacity to predict behavior. On their view, in predicting people's behavior we take our own decision making system `off line' and supply it with the `pretend' beliefs and desires of the person whose behavior we are trying to predict; we then let the decision maker reach a decision on the basis of these pretend inputs. Figure 1 offers a `boxological' version of the off-line simulation theory of behavior prediction.
Lewis, David (1969). Lucas against mechanism. Philosophy 44 (June):231-3.   (Cited by 10 | Annotation | Google)
Lewis, David (1979). Lucas against mechanism II. Canadian Journal of Philosophy 9 (June):373-6.   (Cited by 7 | Annotation | Google)
Lindstrom, Per (2006). Remarks on Penrose's new argument. Journal of Philosophical Logic 35 (3):231-237.   (Google | More links)
Abstract: It is commonly agreed that the well-known Lucas–Penrose arguments and even Penrose’s ‘new argument’ in [Penrose, R. (1994): Shadows of the Mind, Oxford University Press] are inconclusive. It is, perhaps, less clear exactly why at least the latter is inconclusive. This note continues the discussion in [Lindström, P. (2001): Penrose’s new argument, J. Philos. Logic 30, 241–250; Shapiro, S.(2003): Mechanism, truth, and Penrose’s new argument, J. Philos. Logic 32, 19–42] and elsewhere of this question
Lucas, John R. (1967). Human and machine logic: A rejoinder. British Journal for the Philosophy of Science 19 (August):155-6.   (Cited by 3 | Annotation | Google | More links)
Abstract: We can imagine a human operator playing a game of one-upmanship against a programmed computer. If the program is Fn, the human operator can print the theorem Gn, which the programmed computer, or, if you prefer, the program, would never print, if it is consistent. This is true for each whole number n, but the victory is a hollow one since a second computer, loaded with program C, could put the human operator out of a job.... It is useless for the `mentalist' to argue that any given program can always be improved since the process for improving programs can presumably be programmed also; certainly this can be done if the mentalist describes how the improvement is to be made. If he does give such a description, then he has not made a case
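Schematically (my gloss of the dialectic the abstract summarizes, writing G_F for the Gödel sentence of a system F):
\[
\text{Human: for each } n,\ F_n \longmapsto G_{F_n}, \quad \text{where } F_n \nvdash G_{F_n} \text{ provided } F_n \text{ is consistent;}
\]
\[
\text{Machine: the map } n \longmapsto G_{F_n} \text{ is itself computable, so a single program } C \text{ can print every } G_{F_n} \text{ as well.}
\]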
Lucas, John R. (1984). Lucas against mechanism II: A rejoinder. Canadian Journal of Philosophy 14 (June):189-91.   (Cited by 2 | Annotation | Google)
Lucas, John R. (1970). Mechanism: A rejoinder. Philosophy 45 (April):149-51.   (Annotation | Google)
Lucas, John R. (1971). Metamathematics and the philosophy of mind: A rejoinder. Philosophy of Science 38 (2):310-13.   (Cited by 4 | Google | More links)
Lucas, John R. (1961). Minds, machines and Godel. Philosophy 36 (April-July):112-127.   (Cited by 72 | Annotation | Google | More links)
Abstract: Goedel's theorem states that in any consistent system which is strong enough to produce simple arithmetic there are formulae which cannot be proved-in-the-system, but which we can see to be true. Essentially, we consider the formula which says, in effect, "This formula is unprovable-in-the-system". If this formula were provable-in-the-system, we should have a contradiction: for if it were provable-in-the-system, then it would not be unprovable-in-the-system, so that "This formula is unprovable-in-the-system" would be false: equally, if it were provable-in-the-system, then it would not be false, but would be true, since in any consistent system nothing false can be proved-in-the-system, but only truths. So the formula "This formula is unprovable-in-the-system" is not provable-in-the-system, but unprovable-in-the-system. Further, if the formula "This formula is unprovable-in-the-system" is unprovable-in-the-system, then it is true that that formula is unprovable-in-the-system, that is, "This formula is unprovable-in-the-system" is true. Goedel's theorem must apply to cybernetical machines, because it is of the essence of being a machine, that it should be a concrete instantiation of a formal system. It follows that given any machine which is consistent and capable of doing simple arithmetic, there is a formula which it is incapable of producing as being true---i.e., the formula is unprovable-in-the-system---but which we can see to be true. It follows that no machine can be a complete or adequate model of the mind, that minds are essentially different from machines
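In modern notation (a schematic rendering of the step the abstract walks through, not Lucas's own formulation): for a consistent formal system F containing simple arithmetic, the Gödel sentence G_F satisfies
\[
F \vdash G_F \leftrightarrow \neg\mathrm{Prov}_F(\ulcorner G_F \urcorner),
\]
so if F is consistent then F \nvdash G_F, and hence G_F is true. Lucas's claim is that we can "see" this truth, whereas the machine instantiating F cannot prove it.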
Lucas, John R. (1996). Mind, machines and Godel: A retrospect. In Peter Millican & A. Clark (eds.), Machines and Thought. Oxford University Press.   (Annotation | Google)
Lucas, John R. (1968). Satan stultified: A rejoinder to Paul Benacerraf. The Monist 52 (1):145-58.   (Cited by 10 | Annotation | Google)
Abstract: The argument is a dialectical one. It is not a direct proof that the mind is something more than a machine, but a schema of disproof for any particular version of mechanism that may be put forward. If the mechanist maintains any specific thesis, I show that [146] a contradiction ensues. But only if. It depends on the mechanist making the first move and putting forward his claim for inspection. I do not think Benacerraf has quite taken the point. He criticizes me both for "failing to notice" that my ability to show that the Gödel sentence of a formal system is true "depends very much on how he is given
Lucas, John R. & Redhead, Michael (2007). Truth and provability. British Journal for the Philosophy of Science 58 (2):331-2.   (Google | More links)
Abstract: The views of Redhead ([2004]) are defended against the argument by Panu Raatikainen ([2005]). The importance of informal rigour is canvassed, and the argument for the a priori nature of induction is explained. The significance of Gödel's theorem is again rehearsed
Lucas, John R. (1970). The Freedom of the Will. Oxford University Press.   (Cited by 22 | Google)
Abstract: It might be the case that absence of constraint is the relevant sense of ' freedom' when we are discussing the freedom of the will, but it needs arguing for. ...
Lucas, John R. (ms). The Godelian argument: Turn over the page.   (Cited by 3 | Google)
Abstract: I have no quarrel with the first two sentences: but the third, though charitable and courteous, is quite untrue. Although there are criticisms which can be levelled against the Gödelian argument, most of the critics have not read either of my, or either of Penrose's, expositions carefully, and seek to refute arguments we never put forward, or else propose as a fatal objection one that had already been considered and countered in our expositions of the argument. Hence my title. The Gödelian Argument uses Gödel's theorem to show that minds cannot be explained in purely mechanist terms. It has been put forward, in different forms, by Gödel himself, by Penrose, and by me
Lucas, John R. (1976). This Godel is killing me: A rejoinder. Philosophia 6 (March):145-8.   (Annotation | Google)
Lucas, John R. (ms). The implications of Godel's theorem.   (Google | More links)
Abstract: In 1931 Kurt Gödel proved two theorems about the completeness and consistency of first-order arithmetic. Their implications for philosophy are profound. Many fashionable tenets are shown to be untenable: many traditional intuitions are vindicated by incontrovertible arguments
Lyngzeidetson, Albert E. & Solomon, Martin K. (1994). Abstract complexity theory and the mind-machine problem. British Journal for the Philosophy of Science 45 (2):549-54.   (Google | More links)
Abstract: In this paper we interpret a characterization of the Gödel speed-up phenomenon as providing support for the ‘Nagel-Newman thesis’ that human theorem recognizers differ from mechanical theorem recognizers in that the former do not seem to be limited by Gödel's incompleteness theorems whereas the latter do seem to be thus limited. However, we also maintain that (currently non-existent) programs which are open systems in that they continuously interact with, and are thus inseparable from, their environment, are not covered by the above (or probably any other recursion-theoretic) argument
Lyngzeidetson, Albert E. (1990). Massively parallel distributed processing and a computationalist foundation for cognitive science. British Journal for the Philosophy of Science 41 (March):121-127.   (Annotation | Google | More links)
Martin, J. & Engleman, K. (1990). The mind's I has two eyes. Philosophy 65 (264):510-515.   (Annotation | Google)
Maudlin, Tim (1996). Between the motion and the act. Psyche 2:40-51.   (Cited by 4 | Google | More links)
McCall, Storrs (1999). Can a Turing machine know that the Godel sentence is true? Journal of Philosophy 96 (10):525-32.   (Cited by 6 | Google | More links)
McCullough, D. (1996). Can humans escape Godel? Psyche 2:57-65.   (Google)
McCall, Storrs (2001). On "seeing" the truth of the Godel sentence. Facta Philosophica 3:25-30.   (Google)
McDermott, Drew (1996). Penrose is wrong. Psyche 2:66-82.   (Google)
Megill, Jason L. (2004). Are we paraconsistent? On the Lucas-Penrose argument and the computational theory of mind. Auslegung 27 (1):23-30.   (Google)
Nelson, E. (2002). Mathematics and the mind. In Kunio Yasue, Mari Jibu & Tarcisio Della Senta (eds.), No Matter, Never Mind. John Benjamins.   (Cited by 2 | Google | More links)
Penrose, Roger (1996). Beyond the doubting of a shadow. Psyche 2:89-129.   (Cited by 25 | Annotation | Google | More links)
Penrose, Roger (1990). Precis of The Emperor's New Mind. Behavioral and Brain Sciences 13:643-705.   (Annotation | Google)
Penrose, Roger (1994). Shadows of the Mind. Oxford University Press.   (Cited by 1412 | Google | More links)
Penrose, Roger (1992). Setting the scene: The claim and the issues. In D. Broadbent (ed.), The Simulation of Human Intelligence. Blackwell.   (Annotation | Google)
Penrose, Roger (1989). The Emperor's New Mind. Oxford University Press.   (Cited by 3 | Annotation | Google | More links)
Piccinini, Gualtiero (2003). Alan Turing and the mathematical objection. Minds and Machines 13 (1):23-48.   (Cited by 10 | Google | More links)
Abstract: This paper concerns Alan Turing’s ideas about machines, mathematical methods of proof, and intelligence. By the late 1930s, Kurt Gödel and other logicians, including Turing himself, had shown that no finite set of rules could be used to generate all true mathematical statements. Yet according to Turing, there was no upper bound to the number of mathematical truths provable by intelligent human beings, for they could invent new rules and methods of proof. So, the output of a human mathematician, for Turing, was not a computable sequence (i.e., one that could be generated by a Turing machine). Since computers only contained a finite number of instructions (or programs), one might argue, they could not reproduce human intelligence. Turing called this the “mathematical objection” to his view that machines can think. Logico-mathematical reasons, stemming from his own work, helped to convince Turing that it should be possible to reproduce human intelligence, and eventually compete with it, by developing the appropriate kind of digital computer. He felt it should be possible to program a computer so that it could learn or discover new rules, overcoming the limitations imposed by the incompleteness and undecidability results in the same way that human mathematicians presumably do.
Priest, Graham (1994). Godel's theorem and the mind... Again. In M. Michael & John O'Leary-Hawthorne (eds.), Philosophy in Mind: The Place of Philosophy in the Study of Mind. Kluwer.   (Google)
Putnam, Hilary (1995). Review of Shadows of the Mind. AMS Bulletin 32 (3).   (Google)
Putnam, Hilary (1985). Reflexive reflections. Erkenntnis 22 (January):143-153.   (Cited by 8 | Annotation | Google | More links)
Raatikainen, Panu (ms). McCall's Gödelian argument is invalid.   (Google)
Abstract: Storrs McCall continues the tradition of Lucas and Penrose in an attempt to refute mechanism by appealing to Gödel’s incompleteness theorem (McCall 2001). That is, McCall argues that Gödel’s theorem “reveals a sharp dividing line between human and machine thinking”. According to McCall, “[h]uman beings are familiar with the distinction between truth and theoremhood, but Turing machines cannot look beyond their own output”. However, although McCall’s argumentation is slightly more sophisticated than the earlier Gödelian anti-mechanist arguments, in the end it fails badly, as it is at odds with the logical facts
Raatikainen, Panu (2005). On the philosophical relevance of Gödel's incompleteness theorems. Revue Internationale de Philosophie 59 (4):513-534.   (Google)
Abstract: Gödel began his 1951 Gibbs Lecture by stating: “Research in the foundations of mathematics during the past few decades has produced some results which seem to me of interest, not only in themselves, but also with regard to their implications for the traditional philosophical problems about the nature of mathematics.” (Gödel 1951) Gödel is referring here especially to his own incompleteness theorems (Gödel 1931). Gödel’s first incompleteness theorem (as improved by Rosser (1936)) says that for any consistent formalized system F, which contains elementary arithmetic, there exists a sentence GF of the language of the system which is true but unprovable in that system. Gödel’s second incompleteness theorem states that no consistent formal system can prove its own consistency
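For reference, the two theorems the abstract states, in standard notation (notation mine):
\[
\text{G1 (with Rosser's improvement): if } F \text{ is a consistent formalized system containing elementary arithmetic, then there is a sentence } G_F \text{ such that } F \nvdash G_F \text{ and } F \nvdash \neg G_F.
\]
\[
\text{G2: if such an } F \text{ is consistent, then } F \nvdash \mathrm{Con}(F).
\]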
Raatikainen, Panu (2005). Truth and provability: A comment on Redhead. British Journal for the Philosophy of Science 56 (3):611-613.   (Cited by 2 | Google | More links)
Abstract: Michael Redhead's recent argument aiming to show that humanly certifiable truth outruns provability is critically evaluated. It is argued that the argument is at odds with logical facts and fails
Raatikainen, Panu (ms). Truth and provability again.   (Google)
Abstract: Lucas and Redhead ([2007]) announce that they will defend the views of Redhead ([2004]) against the argument by Panu Raatikainen ([2005]). They certainly re-state the main claims of Redhead ([2004]), but they do not give any real arguments in their favour, and do not provide anything that would save Redhead’s argument from the serious problems pointed out in (Raatikainen [2005]). Instead, Lucas and Redhead make a number of seemingly irrelevant points, perhaps indicating a failure to understand the logico-mathematical points at issue
Redhead, M. (2004). Mathematics and the mind. British Journal for the Philosophy of Science 55 (4):731-737.   (Cited by 6 | Google | More links)
Abstract: Granted that truth is valuable we must recognize that certifiable truth is hard to come by, for example in the natural and social sciences. This paper examines the case of mathematics. As a result of the work of Gödel and Tarski we know that truth does not equate with proof. This has been used by Lucas and Penrose to argue that human minds can do things which digital computers can't, viz to know the truth of unprovable arithmetical statements. The argument is given a simple formulation in the context of sorites (Robinson) arithmetic, avoiding the complexities of formulating the Gödel sentence. The pros and cons of the argument are considered in relation to the conception of mathematical truth. * Paper contributed to the Conference entitled The Place of Value in a World of Facts, held at the LSE in October 2003
Robinson, William S. (1992). Penrose and mathematical ability. Analysis 52 (2):80-88.   (Annotation | Google)
Schurz, Gerhard (2002). McCall and Raatikainen on mechanism and incompleteness. Facta Philosophica 4:171-74.   (Google)
Seager, William E. (2003). Yesterday's algorithm: Penrose and the Godel argument. Croatian Journal of Philosophy 3 (9):265-273.   (Google)
Abstract: Roger Penrose is justly famous for his work in physics and mathematics but he is _notorious_ for his endorsement of the Gödel argument (see his 1989, 1994, 1997). This argument, first advanced by J. R. Lucas (in 1961), attempts to show that Gödel’s (first) incompleteness theorem can be seen to reveal that the human mind transcends all algorithmic models of it. Penrose's version of the argument has been seen to fall victim to the original objections raised against Lucas (see Boolos (1990) and for a particularly intemperate review, Putnam (1994)). Yet I believe that more can and should be said about the argument. Only a brief review is necessary here although I wish to present the argument in a somewhat peculiar form
Slezak, Peter (1983). Descartes's diagonal deduction. British Journal for the Philosophy of Science 34 (March):13-36.   (Cited by 13 | Annotation | Google | More links)
Slezak, Peter (1982). Godel's theorem and the mind. British Journal for the Philosophy of Science 33 (March):41-52.   (Cited by 13 | Annotation | Google | More links)
Slezak, Peter (1984). Minds, machines and self-reference. Dialectica 38:17-34.   (Cited by 1 | Google | More links)
Sloman, Aaron (1986). The emperor's real mind. In A.G. Cohn & J.R. Thomas (eds.), Artificial Intelligence and Its Applications. John Wiley and Sons.   (Google)
Smart, J. J. C. (1961). Godel's theorem, Church's theorem, and mechanism. Synthese 13 (June):105-10.   (Annotation | Google)
Stone, Tony & Davies, Martin (1998). Folk psychology and mental simulation. Royal Institute of Philosophy Supplement 43:53-82.   (Google | More links)
Abstract: This paper is about the contemporary debate concerning folk psychology – the debate between the proponents of the theory theory of folk psychology and the friends of the simulation alternative. At the outset, we need to ask: What should we mean by this term ‘folk psychology’?
Tymoczko, Thomas (1991). Why I am not a Turing machine: Godel's theorem and the philosophy of mind. In Jay L. Garfield (ed.), Foundations of Cognitive Science. Paragon House.   (Annotation | Google)
Wang, H. (1974). From Mathematics to Philosophy. London: Routledge & Kegan Paul.   (Cited by 125 | Google)
Webb, Judson (1968). Metamathematics and the philosophy of mind. Philosophy of Science 35 (June):156-78.   (Cited by 6 | Google | More links)
Webb, Judson (1980). Mechanism, Mentalism and Metamathematics. Kluwer.   (Cited by 45 | Google)
Whiteley, C. (1962). Minds, machines and Godel: A reply to Mr. Lucas. Philosophy 37 (January):61-62.   (Annotation | Google)
Yu, Q. (1992). Consistency, mechanicalness, and the logic of the mind. Synthese 90 (1):145-79.   (Cited by 4 | Google | More links)
Abstract:   G. Priest's anti-consistency argument (Priest 1979, 1984, 1987) and J. R. Lucas's anti-mechanist argument (Lucas 1961, 1968, 1970, 1984) both appeal to Gödel incompleteness. By way of refuting them, this paper defends the thesis of quartet compatibility, viz., that the logic of the mind can simultaneously be Gödel incomplete, consistent, mechanical, and recursion complete (capable of all means of recursion). A representational approach is pursued, which owes its origin to works by, among others, J. Myhill (1964), P. Benacerraf (1967), J. Webb (1980, 1983) and M. Arbib (1987). It is shown that the fallacy shared by the two arguments under discussion lies in misidentifying two systems, the one for which the Gödel sentence is constructable and to be proved, and the other in which the Gödel sentence in question is indeed provable. It follows that the logic of the mind can surpass its own Gödelian limitation not by being inconsistent or non-mechanistic, but by being capable of representing stronger systems in itself; and so can a proper machine. The concepts of representational provability, representational maximality, formal system capacity, etc., are discussed