On "Consciousness and the Philosophers"

David J. Chalmers

John Searle's review of my book The Conscious Mind appeared in the March 6, 1997 edition of the New York Review of Books. I replied in a letter printed in their May 15, 1997 edition, and Searle's response appeared simultaneously. I set up this web page so that interested people can see my reply to Searle in turn, and to give access to other relevant materials.


John Searle's reply to my letter continues the odd combination of mistakes, misrepresentations, and unargued gut reactions found in his original review. I will go through his response and address all his points, major and minor. I will also include a few comments on the original review which could not be included in the letter for reasons of space.

(1) Searle says

To clarify: I don't say that mental terms are ambiguous between consciousness and "nonconscious functional processes". I say that many such terms have two interpretations (not "completely independent" interpretations), one of which requires consciousness and one of which doesn't. For example, in the strong sense, "perception" has to be conscious; in the weak sense, it does not. So "subliminal perception" will count as "perception" in the weak but not the strong sense. I think this is just common sense. Note that the weak sense doesn't require that a process be "nonconscious"; rather, it is neutral on the question of consciousness. Subliminal perception and conscious perception count equally as perception in the weak sense; what matters isn't consciousness or its absence, but the way a state functions. So Searle is wrong to suggest that this distinction implies that there is a nonconscious "pain" correlated with every conscious "pain".

(2) On the question of the explanatory irrelevance of consciousness to physical actions: it is true that I bite this bullet, in a certain sense. But the view can be made to seem stranger than it is, so I should make some clarifications.

First, I do not say that consciousness is causally irrelevant to action. That is a question I am neutral on, and I think there are interesting views that give consciousness causal relevance without doing any damage to the scientific world view. (For example, by making consciousness correspond to the intrinsic aspect of physical states which physics characterizes only extrinsically.)

Second, I do not say that consciousness can never be used to explain action, so that explanations that involve consciousness are invalid. I simply say that invoking consciousness is not necessary to explain actions; there will always be a physical explanation that does not invoke or imply consciousness. A better phrase would have been "explanatorily superfluous", rather than "explanatorily irrelevant". Something can be superfluous and still be relevant.

Third, it isn't true that my view implies that "if you think you are reading because you consciously want to read, you are mistaken". On my view, it is very likely that you are reading because you want to read. It is just that the fact that the wanting is consciously experienced is not required for the explanation to go through. Conscious wants can explain actions, and nonconscious wants can explain actions too. Similarly, you drink because you are thirsty, but the consciously experienced aspect of that thirst needn't be appealed to in order to explain your action.

Of course there remains something counterintuitive about the explanatory superfluity of consciousness. But I think one is forced to it by sound arguments. And in the book I defend it at length, arguing that it is merely counterintuitive and has no fatal flaws. Searle does not address this discussion.

(3) On panpsychism (the view that consciousness is everywhere): In Searle's review he said I was "explicitly committed" to this view. I corrected this and said I was agnostic about the view, although I do explore it at length. Searle apparently thinks I am lying about this:

Searle clearly does not grasp what is going on in this chapter of the book. First, with regard to the double-aspect theory of information: I am not committed even to this view, and indeed I say in the chapter (p. 310) that it is more likely wrong than right. But further, it simply isn't the case that the double-aspect view implies panpsychism. In the passage setting out the double-aspect view on p. 286, Searle has apparently missed the sentence two sentences after the one he quotes:

And a couple of sentences later:

In section 4, I go on to say that there are two quite different ways in which the view can be developed. One can go for a constrained version of the view, putting constraints on the kind of information which has a conscious aspect; or one can go for an unconstrained version, on which all information has a conscious aspect. Only the latter implies panpsychism. In the book I explore both these options and remain neutral between them.

Searle asks

Here he has simply lost the context. I spend some time exploring both the unconstrained and constrained views, considering how they might be developed and drawing out their implications. In the pages exploring the unconstrained view, I argue that panpsychism is not an unreasonable view, I defend it against various criticisms that might be raised, and I try to elaborate on consequences such as how we might conceive of the conscious experience of a thermostat, just as I elaborate on the constrained view after that. Apparently he mistakes this exploration of the consequences of one possible view for a statement of what I explicitly endorse. A quick glance at the context should rule out this interpretation: e.g., I say explicitly that I am considering both versions (p. 293), and I say explicitly that I consider the question of panpsychism open (p. 299).

I do say that I think there aren't any compelling arguments against panpsychism, and once again Searle helps to confirm this view. He tells us the only systems that we "know for a fact" are conscious are living systems with a certain sort of nervous system. This is quite true, but it does not imply that simple systems are not conscious; it simply implies that we do not know for a fact that they are. So this might be grounds for agnosticism about panpsychism, but it is not grounds for rejection. He repeats his remarks about thermostats and such "not being remote candidates for consciousness", but this is again just assertion without argument. And he says that what goes on in a thermostat is quite different from the "specific features" in a brain: true, but to assume that these specific features are required for consciousness is again to beg the question.

(4) Searle apparently does not believe me when I say that the view that I characterize as "strangely beautiful" in the book is a view I rejected. He appears not to have read the relevant passages closely enough. The view so characterized is not, as Searle suggests, the view that "the universe consists of little bits of consciousness"; that is indeed a view I am agnostic about. Rather, it is the view on which the universe consists of "pure information" without any intrinsic nature (p. 303). This is a very different view, entirely incompatible with the other view, and I reject it on the grounds that some intrinsic nature is needed for the universe to be a universe at all.

(5) Searle's discussion of the argument for property dualism once again makes elementary mistakes. I argue that there is a logically possible world that is physically identical to ours without consciousness, and draw the conclusion that consciousness is a nonphysical feature of the world. Searle objects because this world would have different laws from ours.

Of course this is true, but it is also irrelevant. The point is that to derive the facts about consciousness, the physical facts and laws are not enough. You need to add in the bridging laws relating physical processes and consciousness. Whereas in almost any other domain, you don't need any bridging laws: the physical facts and laws alone are enough to derive the position of pigs, the facts about digestion, the shape of rocks, the functioning of living processes, and so on. This is witnessed by the logical impossibility of a world physically identical to ours but in which rocks have a different shape or in which pigs fly, and similarly for all the others. So Searle has simply evaded the point. Indeed, he has in effect conceded the point, by agreeing that a world physically identical to ours but without consciousness is logically possible. To be consistent, he too ought to be a property dualist.

Searle also suggests that I beg the question:

But nothing in the argument requires any such assumption. We need only stipulate that the world we are considering is a world identical to ours in the facts and laws of microphysics, characterizing the exact distribution of particles, fields, forces, and so on, in space and time. We stipulate nothing one way or the other about consciousness, rocks, pigs, and the like. The point is that from these facts one can derive the facts about rocks, pigs, and the like, but one can't derive the facts about consciousness, which is the relevant disanalogy. This argument implies that consciousness is not a physical feature of the world, but it does not assume it. (Except in the very weak sense in which the premises of any argument assume the conclusion: if the conclusion were false, one of the premises would have to be false!)

Of course I agree with Searle that if one builds facts and laws about consciousness into the basic package of facts and laws from which one can derive everything else (facts about fields, particles, forces, etc.), then one can derive the facts about consciousness. But that conclusion is trivial. Indeed, to include consciousness in this basic package is precisely to endorse the view that I hold. To avoid this conclusion, Searle has to derive the facts about consciousness from the set of physical facts and laws alone, and that is what he can't do.

(6) In discussing my arguments for nonreductive functionalism, Searle once again ignores the actual argumentation in the book. To recap: I argue that if consciousness could vary independently of functional organization, then there could be changes in consciousness (e.g. by replacing neurons by silicon) that a subject could never notice. The subject would certainly insist that "Nothing has changed", for example; this is a trivial consequence of the assumption that functional organization is preserved. Searle's position is that although the subject would not produce any "noticing behavior", they might nevertheless notice the change but be unable to express it in behavior. They might feel that they are paralyzed inside a body out of their control, for example.

Searle says that I merely "assume" this option is untenable, by assuming that there must be a match between conscious noticing and noticing behavior. But in fact I argue against this position at some length, around pp. 258-59 of the book (as I noted in my letter). In particular, I argue that the view Searle wants to maintain requires a particularly bizarre and arbitrary law connecting physical states and belief contents. Searle does not address these arguments anywhere. Interested readers can find a version of my arguments for nonreductive functionalism in my online paper "Absent Qualia, Fading Qualia, Dancing Qualia".

(7) In Searle's original review, he uses patients with Guillain-Barre syndrome to argue against my nonreductive functionalism, by noting that their functional organization is "inappropriate" but that they are conscious anyway. I responded that this gets the logic wrong:

Searle replies:

Searle's point here is subtly different from his original version, but it is equally fallacious. Patients with Guillain-Barre syndrome certainly do not have the same functional organization as unconscious people. There is all sorts of complex functioning going on inside their heads that is not present in unconscious people. They have the same behavior, but that is all. Searle knows well that functional organization and behavior are quite different things. Indeed, the definition of functional organization that I give (p. 247) involves much more than mere behavior. And indeed the move from behaviorism to functionalism in the philosophy of mind was made in part for this reason: to allow more fine-grained distinctions than mere behavior could capture, and thus to handle cases such as those of perfect actors and paralytics. If Searle knows any elementary philosophy of mind, he knows this. But it renders his point entirely invalid.

(8) Searle finishes by saying that as candidates for explaining consciousness,

But this claim is quite false. Searle has made it a number of times, generally without any substantive supporting argument. I argue in Chapter 9 of the book, and in more detail in my papers "A Computational Foundation for the Study of Cognition" and "Does a Rock Implement Every Finite-State Automaton?", that the relevant notions can be made perfectly precise with objective criteria, and are therefore not at all observer-relative. If a given system has a given functional organization, implements a given computation, and therefore realizes certain information, it does so as a matter of objective fact. Searle does not address these arguments at all.

Searle does make one point that might look like supporting argument:

The latter part of this claim is true, but it no more implies that the notions are empty than the claim that everything has mass would imply that the notion of mass is empty. Everything has some organization, and carries some information, but the particular organizations and information realized by a given system will vary from system to system. A theory of the consciousness associated with a given system will appeal not just to the fact that some organization or information is present - that would be empty - but to the specific organizational and informational properties in a given case. (By analogy, an account of the sun's powerful gravitational force appeals not just to the fact that the sun has mass - that would be empty - but to the specific large mass that the sun has.) These specific properties will only be present in a small number of cases, so there is no danger of vacuity. This issue is discussed further in the computation paper mentioned above.

(9) Finally, are my property dualism and my functionalism compatible? As Searle defines the positions in the first paragraph, they are obviously inconsistent. He defines functionalism as the view that mental states consist in physical functional states, which obviously contradicts the view that mental states are not physical states. My own variety of "functionalism" is very different from this. I don't say that mental states are functional states, merely that they are supported by functional states in the sense that any two functionally identical beings (in the actual world) will have the same conscious states. To make an analogy: almost any property dualist will hold that consciousness is somehow supported by brain states, in the sense that two creatures with identical brains will have identical experiences, but this is not to say that consciousness is the brain state. Likewise, my own view is not that consciousness is the functional state. I argue against that view at length, e.g. on the grounds that it remains logically possible that a being could have the functional state without consciousness. But it is nevertheless empirically impossible to have the physical/functional state without consciousness, on my view; there is a psychophysical law connecting the two.

In his review, Searle suggested that there is a deep tension between my property dualism and my functionalism. Once the nonreductive nature of my functionalism is noted, this claim can be seen to be misguided. Property dualism is entirely neutral on the question of what sort of physical system supports consciousness, and on the precise nature of the laws that connect the two. Presumably the laws will say that consciousness arises from physical systems in virtue of certain of their properties. I simply argue that these properties are organizational properties, so that consciousness might arise equally from neurons and from silicon, for example. There is no tension here. (One might find more tension in a property dualist who held that consciousness is specifically biological!) If one takes the trouble to distinguish the separate issues at play here, it becomes clear that the two views are complementary, not contradictory.

