The Problems of Consciousness

David J. Chalmers

Department of Philosophy
University of Arizona
Tucson, AZ 85721

chalmers@arizona.edu

*[[Published in (H. Jasper, L. Descarries, V. Castellucci, & S. Rossignol, eds) Consciousness: At the Frontiers of Neuroscience (Advances in Neurology, Vol. 77). Lippincott-Raven Press, 1998.

This paper is an edited transcription of a talk at the 1997 Montreal symposium on "Consciousness at the Frontiers of Neuroscience". There's not much here that isn't said elsewhere, e.g. in "Facing Up to the Problem of Consciousness" and "How Can We Construct a Science of Consciousness?"]]

1 Type-I and type-II phenomena

Our chair introduced this session by distinguishing "top-down" and "bottom-up" approaches to consciousness.[*] The work of Herbert Jasper illustrates the value of a bottom-up approach, but I suppose that I will be presenting a top-down view here. This sort of approach has its limits, but I think it's useful at least at the start of this sort of discussion. That way we can see just what the problems are and what the lay of the land is, before we get into the bottom-up approach -- the correlated discharges and the afferent and efferent connections -- to make the detailed progress which comes later. So I'll start then by presenting an overview of the problems of consciousness from the lofty perspective of the philosopher. I'll also concentrate on the question of how we might be able to construct a stable, self-sustaining science of consciousness.

I was asked to speak on the topic of "definitions and explanations", because I gather philosophers are supposed to be very good at definitions. Sadly, one thing you find out if you do some philosophy is that philosophers aren't much better at definitions than anybody else. In fact, the history of philosophy shows that defining anything is a bit of a hopeless task. Definitions are the kind of thing that comes at the end of the game, not at the beginning of the game. The more important something is, the harder it is to define. So instead of defining a concept like consciousness, it may be more interesting to point at the various phenomena which are at play in the vicinity. After all, any word in English is very ambiguous, and the word "consciousness" is no exception. When different people talk about consciousness, they're typically talking about very different things. So if someone asks "how do you explain consciousness?", there's no single answer, because "consciousness" doesn't refer to just one phenomenon. Once we start to separate out individual, specific phenomena, we might be able to give more unified answers to those questions.

So, what are some of the phenomena people talk about when they're talking about consciousness? To start with, in a very liberal use of the term, a central phenomenon of consciousness is sensory discrimination. Speaking very loosely, one might say that an organism is conscious of an object in its environment when it can discriminate information about that object and do something with it. So in this sense, you'll have consciousness in a sea slug. On the other hand, some of the more interesting phenomena of consciousness come in when one proceeds "further in" to the cognitive system. For example, consider the integration of information inside a nervous system. How does all that information from different modalities and different areas get integrated inside a brain? This is frequently thought of as one of the central problems of consciousness. Another central phenomenon is the accessibility of information to a subject. How is it that a cognitive system can have access to information about the world and about itself and use that information in the control of behavior? This leads directly into another crucial phenomenon, that of verbal report. How do we have access to information such that we can report on that information and talk about it? This yields perhaps the most popular operational definition of consciousness in the sciences, perhaps because it's so easy to pin down: a human being is said to be conscious of some information precisely when that information is verbally reportable. There are also phenomena of self-monitoring: how can a cognitive system monitor its own states, by some kind of feedback loop, for example? And finally there are problems in the voluntary control of behavior. When I consciously move my arm, this is under my control. How is this kind of control of behavior managed?

Now these phenomena are all somehow related to each other; they are all in some sense problems of consciousness. On the other hand, there's a sense in which none of these are the core phenomenon of consciousness. When people say that consciousness is the great scientific challenge of our time, or the last ill-understood phenomenon, the phenomenon at issue is not usually integration and discrimination and access and verbal report, but subjective experience. We, as conscious beings, have subjective experience of our minds and of the world. It feels like something to be a conscious being. States of subjective experience are states that feel like something. When I look out at the audience, I perceive the audience visually. I have visual sensations, visual experiences of colors, of shapes, of objects, of individual people, and this feels like something from the first person point of view. When I listen, I consciously experience a little hum, a few coughs, some sounds in the background, and something similar goes on in all the sensory modalities. I have experiences of certain mental images, the feeling of certain kinds of emotions, and the stream of conscious thought. These are what we might call the paradigmatically "first-person" phenomena and in some ways these are the phenomena which are at the core of consciousness.

Some people argue that we should reserve the word "consciousness" for these phenomena of subjective experience. Personally, I think there's not much point in getting into arguments about a word. So one can, if one likes, use the word "consciousness" for all these phenomena, as long as one makes some sort of distinction between them. What's interesting is the different kinds of problems these phenomena pose, because I will argue that there are certain qualitative distinctions between the phenomena in these categories. I don't want to prejudice anybody about these problems, but nevertheless it's sometimes useful to divide them up into different categories. So I'll call them, relatively neutrally, the "type-I" phenomena and the "type-II" phenomena. The type-I phenomena are the phenomena of discrimination, integration, report, and the like, and the type-II phenomena are those of subjective experience.

2 The explanatory challenge

The problems in these two categories pose different kinds of challenges. The type-I phenomena all have something in common: they are all defined in terms of cognitive and behavioral functions. To explain these phenomena, all we need to do is explain how some system in the brain performs some functional role in the control of cognition and behavior. With reportability, for example, we need to explain the function of producing verbal reports. So if someone says "explain consciousness", in the sense of "explain reportability", what you need to do is explain how that function is performed. And when it comes to explaining the performance of functions, we have a nice, well-developed methodology: you specify a mechanism which can perform the function. Within neuroscience one will typically look at a neural mechanism; within cognitive science and artificial intelligence one will look at some kind of computational mechanism. Frequently one will try to do both, giving a neural and a computational story. So, for example, to explain sensory discrimination or aspects of integration, one will give a neural and a computational mechanism which can perform those functions. And in these cases, for the type-I phenomena, once you've explained how the functions are performed, you've explained everything. You've explained what's at issue.

And indeed, I think if you look at reductive explanation throughout science, you see this pattern again and again. By reductive explanation, I mean the kind of explanation where one explains a high-level phenomenon wholly in terms of certain lower-level phenomena. The kinds of problems to which reductive explanation is applied are very typically problems about the performance of functions.

Take the problem of life. When it comes to explaining life, what do we need to explain? We need to explain phenomena such as reproduction, adaptation, and metabolism, and these are all functional phenomena. One gives a story about mechanisms which perform the functions, and this explains them. Something similar goes for genetics, where one explains the transmission of hereditary information; for learning, where one explains the adaptation of behavior in response to certain kinds of environmental stimulation; maybe even for phenomena such as light and heat, where one explains things like the transmission of visual information, the expansion of metals in response to certain kinds of stimulation, the flow of heat, and so on. In every case what needs explaining are functions.

To explain the performance of a function, what does one do? One gives a mechanism. Take the problem of genetics. This is really the problem of explaining the function of transmission of hereditary characteristics from one generation to the next. How did this problem get solved? Watson and Crick came along and specified a mechanism - DNA - which can perform this function. One can tell a plausible story about how this mechanism performs the function of transmitting information from one generation to the next. Once this story is appropriately elaborated and confirmed, one has solved the central problem of genetics. Things work something like this for most phenomena in science, and certainly for those that fall in the type-I or functional category.

What makes the type-II phenomena different? Incidentally, I've spoken so far as if there's just one type-II phenomenon, subjective experience. This may be misleading. There are many different type-II phenomena, as there are many different phenomena of subjective experience: visual experience, auditory experience, emotional experience, imagistic experience, and so on. And we don't have any certainty at the start that all these different phenomena will be amenable to the same kind of explanation. But nevertheless, one can group them at the start into the category of type-II or subjective phenomena. What's unusual about these phenomena is that they don't seem to be defined, on the face of it, in terms of the performance of functions. When it comes to explaining experience - the subjective experience of vision or of hearing, or the fact that we're conscious at all - we are not trying to explain how we respond or move, or how any function is performed. It's a different kind of problem, one for which functional explanation is not so obviously appropriate.

One way to put this distinction is that for the type-I phenomena, explaining certain functions - how the brain performs a role - suffices to explain the phenomena. But for the type-II phenomena, there is a further question in the vicinity. Even once one has explained all the functions in the vicinity - discrimination, integration, access, report, and so on - there remains, potentially, something else to explain. We still need to explain why the performance of these functions should be conscious, or accompanied by subjective experience. And this is a further question, one to which an answer to the first question does not guarantee an answer. Now it may happen that in the course of answering the first question we are led to the crucial insights which will lead us to answering the second question. That can't be ruled out. But the distinction between the type-I and the type-II phenomena is that for the type-I phenomena we know that explaining the functions suffices: the functions were all we were concerned with in the first place. For the type-II phenomena, we don't know that. So if there is a link, it will be a more indirect link. The kind of functional explanation which worked so well elsewhere doesn't apply so directly here.

The basic problem of subjective experience can be put this way. The standard methods, which have been very successful in neuroscience and cognitive science, have largely been developed to explain structure and to explain function. We specify the fine-grained structure and dynamics of a neural or computational process, and this enables us to explain some sort of higher-level structure and function, whether it is gross behavior or some more complex internal process such as discrimination, integration, or memory. The structure and function of neurons gives you a story about the structure and dynamics of perception, for example. This works terrifically well for most problems in neuroscience and cognitive science - the type-I problems. But the problem of experience is not just a problem of explaining structure and function, so the standard reductive methods, as they stand, are incomplete.

A little bit of structure and dynamics can get you a long way in science: it gets you to a lot more structure and a lot more dynamics. But that's all it gets you to. For most problems, where all we need to explain is structure and function, that's enough. But when we have something that needs to be explained which is not initially characterized in terms of function, then there is a potential gap here. It's what some philosophers have called an "explanatory gap" in theories of subjective experience.

Take your favorite neural or computational theory of the processes underlying consciousness. A hackneyed example is one of the theories involving synchronized oscillations in the cortex which come to bind certain kinds of information together. What does this potentially explain? Potentially, such a theory might explain all kinds of interesting aspects of binding, of integration, of storage of information in working memory, maybe even of how this gets used in the control of behavior. These are important structural and functional questions. But when it comes to the question of why it is that this should somehow support subjective experience, the theory is silent. There is an explanatory gap between the story you tell about the oscillations and the manifest fact that this somehow results in subjective experience. On the face of it, the structural and functional theory is logically compatible with the absence of experience, and therefore nothing in the theory alone tells you how you get to subjective experience. That is the basic problem of consciousness, and there are many different responses.

3 Constructing a science of consciousness

Rather than attack that problem head on right now, I'm going to step back a little bit and ask "How is it that we can construct a science of consciousness?". That is, what will be the shape of a field which simultaneously is a science and takes consciousness seriously? I think this is a particularly pressing question now, with consciousness subject to such waxing and waning of interest. It was a very popular subject in the late 19th century; then interest in consciousness waned for many decades. It's undergone periodic resurgences, and it's in the middle of such a resurgence now. The question is, is it going to be possible to have a stable and self-sustaining science of consciousness which isn't subject to these vicissitudes in the ways that we've seen in the past? It's hard to say, but I hope so. At the very least, I think we can look at the shape a science of consciousness might take, and at what some of its projects might be. Of course there will be more than one project: a science of consciousness will have many different paths. But we can ask what the various components will be. I will look at these projects while always keeping one eye on the central, core phenomena of subjective experience.

Project 1: Explain the functions

The first project is that of explaining the type-I phenomena, the functions I discussed at the beginning: discrimination, integration, access, self-monitoring, report, and so on. In a way this is the most straightforward project. We've seen that these are clearly amenable to the relatively standard methods of reductive explanation. You give a neural or computational story about how these functions are performed, and that explains the phenomena at issue. I expect that most of the work in a meeting like this one is going to be in this kind of paradigm. There are all kinds of examples of this sort of work around the place. There are the various synchronized oscillation models, for example, of the integration of information in the brain. At the cognitive level there's something like Baars's global workspace model of information integration and dissemination. There are various re-entrant circuit models of perception, memory, integration, and self-monitoring.

As with all approaches to consciousness, this approach has strengths and limitations. The strengths are that here we are dealing with phenomena that are wholly intersubjective. In studying discrimination, integration, access, and report, things are straightforwardly objective and measurable, and none of the difficult epistemological problems of subjectivity come into play. And as functional phenomena, they are amenable to the same kinds of traditional reductive methods which seem to work elsewhere in science. Because of this, I expect that these phenomena will go on to be the meat and potatoes of the field. This is a good thing, because it is here that there is the most straightforward possibility of progress.

The limitation, of course, is that this approach doesn't directly address the problems of subjective experience. It's addressing a different set of phenomena. This isn't to say that one can't make a bridge to subjective experience, but to make that bridge, one has to do something else over and above explaining the functions. So in dealing with the meat and potatoes, it will be important to keep an eye open in the background on how we might build that bridge. That is, we need to consider how our work on these type-I phenomena is relevant to the type-II phenomena of experience. I'm not saying the work is irrelevant, but it's a question that has to be addressed directly rather than ignored.

Project 2: Isolate the neural correlates of consciousness

The second project I'll look at is that of isolating the neural basis of consciousness, or what's sometimes called the "neural correlate of consciousness". In this project, we aim to isolate the neural and cognitive systems such that information is represented in those systems when and only when it's represented in consciousness. One can at least hypothesize that there's some neural locus such that for information to make it into consciousness it has to be represented somehow in that neural locus. Of course it may not work out this way. It may be that for information to make it into consciousness it needs just to make it into one of many different areas, not localizable, perhaps not even functionally localizable. There may well be many different neural correlates of consciousness in different modalities, at different levels of description. But in any case this provides an initial question to shape one's approach.

A large number of such proposals have been made already. In another paper of mine (Chalmers 1998) I've made a list of a number of them. I called this the "Neural correlate zoo", analogous to what particle physicists sometimes call the "particle zoo", where they have 237 elementary particles, or some such. It can sometimes seem that 237 different neural correlates of consciousness have been put forward, ever since Descartes got the whole thing started with his talk about the pineal gland as the seat of the mind. The locus classicus in the contemporary discussion of these ideas is probably the suggestion by Wilder Penfield and Herbert Jasper (1954) that the intralaminar nucleus in the thalamus is the basis of consciousness. More recently Crick and Koch (1990) have proposed that 40-hertz oscillations in the cortex are crucial; Milner and Goodale (1995) have suggested that the ventral pathways in the inferior temporal cortex are the basis of visual consciousness; and Bogen (1995) has revived Penfield and Jasper's ideas about the intralaminar nucleus. There has even been speculation about the role of quantum effects in microtubules, from Hameroff and Penrose (1996).

It may be that many of the proposals are compatible. They may be dealing with neural correlates of different aspects of consciousness, or at different points in the processing pathway, or at different levels of description. On the other hand, it's likely that many of these proposals are simply wrong. Indeed, it may well be that at this stage of inquiry all of them are wrong. But we have some interesting ideas to go forward with.

This work is very useful in sneaking up on the problem of subjective experience. We're starting from the structural component that we understand well and trying to build a bridge, and the first element in that bridge is isolating the most relevant physical systems. Many or most people doing this work are indeed concerned with the neural correlates of subjective experience itself. In Logothetis's (1996) work on binocular rivalry in monkeys, for example, one finds certain neurons, maybe in V5/MT or in IT, which correlate with what the monkey seems to be experiencing. Different stimuli are presented to the two eyes, and the monkey responds as if it sees just one of them, and presumably it just experiences one of them. Then certain neural systems are found to correlate strongly with the stimulus that the monkey perceives and responds to, rather than simply with the stimuli that are presented. So this is helpful in providing the first part of a bridge to consciousness.

Of course there are certain limitations here. First, this kind of work always depends on certain pre-experimental assumptions to get off the ground. When picking out a certain kind of neural process as a correlate of consciousness, one needs some kind of criterion for ascribing consciousness to a system in the first place: some kind of functional criterion - report, behavior, and so on. And those assumptions are substantial and pre-experimental. Some of them are very straightforward. For example: where there is verbal report, there is consciousness. When someone says they're conscious, they are conscious. Now that's an assumption. It's not guaranteed to be true, but as assumptions go in science, it's a fairly safe one. Once you move away from language-using systems, it gets much more difficult, of course. Much of this work takes place in monkeys, who can't use verbal report, so you have to use more indirect criteria, such as criteria of deliberate and controlled behavior in a number of different modalities. Where there is the use of information in deliberate and controlled behavior, we'll say there is indeed subjective experience underlying it. Perhaps this isn't obvious, but in any case all I want to point out here is that some assumptions - a little bit of philosophy - are needed to get this kind of work off the ground. This introduces an element of danger into the process, but it seems to be an element that people can live with.

The other limitation, of course, is that working on the neural basis of consciousness gives you something more akin to correlation than to explanation, at least at the start. We find that when there is such-and-such a neural process, there is subjective experience, and one can maybe have a detailed system of correlations. Nothing here gives you a full explanatory story on its own. The central question in the background is, of course, how and why it is that this neural process should give you subjective experience. Just giving a correlational story isn't going to answer that question. This is not to knock this work; it's simply to point to the existence of a further important question in the background which we eventually need to answer.

Project 3: Explain the structure of consciousness

A third important component of a science of consciousness is that of accounting for the structural features of consciousness. Consciousness has many different aspects, and it has a very complex structure. My visual field has a geometry: there are relations between all sorts of experiences in my visual field. Color space has a complex 3-dimensional structure, and so on. These structural features of experience are particularly amenable to neuroscientific explanation. Relations between subjective experiences - similarities and differences, relative intensities and durations, and so on - all seem to be objectively represented. This can be seen by noting that all this sort of information is straightforwardly reportable, so it must be represented within the cognitive system. This leads us to the possibility of at least an indirect account of the structural features of consciousness, by giving an account of corresponding structural features in the information which is made available for access and report inside the brain. So one can tell a reductive story about the 3-dimensional structure of neurally encoded color space which provides an indirect explanation of the 3-dimensional structure of experienced color space. The same goes for the geometry of the phenomenal visual field. And indeed, I think if one looks at much of what goes on in the field of psychophysics, it has precisely this form: trying to account for structural features of conscious experience indirectly, in terms of structural properties of stimuli and of underlying processing.
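
To make this structural strategy concrete, here is a toy sketch in Python. This is my illustration, not anything from the talk: it simply assumes, for the sake of the example, that experienced color similarity mirrors distance in a 3-dimensional encoded color space, with invented coordinates standing in for whatever the neural encoding turns out to be.

```python
import math

# A toy model of the structural-explanation strategy: invented 3-D
# coordinates stand in for a neural encoding of color (the dimensions
# might correspond to something like hue, saturation, and brightness).
encoding = {
    "red":    (1.0, 0.8, 0.5),
    "orange": (0.8, 0.7, 0.6),
    "blue":   (0.1, 0.6, 0.4),
}

def distance(color_a, color_b):
    """Euclidean distance between two colors in the encoded space."""
    a, b = encoding[color_a], encoding[color_b]
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# The structural prediction: if experienced similarity mirrors encoded
# distance, red should be experienced as more similar to orange than
# to blue, since red is closer to orange in the encoded space.
assert distance("red", "orange") < distance("red", "blue")
```

Nothing in such a sketch explains why the encoded space is experienced at all; it only illustrates how similarity relations among experiences could be predicted from relations in an objective representation, which is exactly the indirect character of this project.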

This project allows one to account for many specific structural features of consciousness, at least once we grant that consciousness exists in the first place. It also gives a more systematic link between consciousness and underlying processes. It has certain limitations, though. First, it doesn't get us immediately to the non-structural features of consciousness - the quality, for example, of redness as opposed to the quality of blueness. It will tell us about their similarities and differences, but it won't tell us why one is one way rather than the other, or why they have their specific intrinsic natures. Second, it doesn't explain why consciousness exists in the first place. This kind of work takes it for granted that consciousness exists and goes on to explain some of its features. But this kind of work relies on a kind of high-level bridging principle to get off the ground. We make a postulate: similarities and differences in conscious experience correspond to similarities and differences in underlying information processing. Using that postulate, which takes consciousness for granted, the work gets off the ground. So one shouldn't, I think, over-read this kind of work as providing a complete explanation of consciousness in terms of brain processes. Nevertheless it's very useful.

Project 4: Bridge the gap

Finally, let me say a word about the fourth and final project, which is my favorite. This project focuses on the how and why of subjective experience itself. It's the project of explaining the connection between physical processes in the brain and subjective experience: how is it that these processes yield consciousness at all? What are the basic principles that explain why the connection holds, and that account for experience's specific nature? This may be the most difficult question when it comes to consciousness, and you may say "Well, this is one which we want to put off a little bit. It's not something which everybody needs to be working on right now, and it may take us fifty, a hundred, a hundred-and-fifty years." Nevertheless, I think one can look at the problem now and at least make certain inferences about the kind of work that is going to be required to get at this problem.

One thing that we know right now is that certain standard methods, in and of themselves, don't provide a solution. Standard reductive explanation, in terms of structure and function, will explain to you more structure and function, but at the end of the day we're going to be left with the question of how this functioning supports subjective experience. At the very least, one will have to either transfigure the problem of consciousness into a different problem, to make it addressable, or expand the explanatory methods. I will look at the option which involves expanding the explanatory methods.

Some people suggest that to get subjective experience into the picture, one needs some extra physical ingredient: maybe more physics, quantum mechanics, chaos theory. I think all these methods, in the end, suffer from some very similar problems. They're well suited to explaining structure and function, but they still only get you to more complex structure and dynamics. So we're still in the type-I domain, whether it comes to a quantum-mechanical explanation of decision making or a chaotic explanation of complex behavior. So it seems that more physics and more processing isn't enough to bridge the gap.

Instead, I think we need to supplement a structural/functional account with what we might call "bridging principles", to cross the gap to conscious experience. First, one tells a story about processing. Nothing in that story on its own entails the existence of conscious experience, but we supplement this story about processing with some bridging principles. These bridging principles tell us that where we have certain kinds of processing, or certain kinds of physical properties, one will have certain kinds of conscious experience. One will then have a theory - an account of processing plus bridging principles - which will jointly entail and explain the existence of consciousness. The processing part is the part we all know about and are familiar with already. The crucial question is the nature of the bridge.

If what I've said so far about structure and function is right, these bridging principles won't themselves be entailed by the story about underlying processing. Nothing in the story about structure and function will tell you why these bridging principles are true. So we'll have to take some elements of this bridge as basic elements in our theory, not to be further explained. And in that sense, those principles will be analogous to basic laws in physics. We're used to the idea that in science one needs to take something for granted, a starting point on which everything else can be built. But one wants to make what one takes for granted as simple as possible. If structure and function doesn't explain experience on its own, we want to add the minimal component to our theories which will bring subjective experience in.

This may sound fine in theory, but what is the methodology for discovering these principles? How can we arrive at our final and fundamental theory of consciousness? Obviously it won't happen any time soon, but I think there is a methodology here. We need to study and systematize the regularities between processing and phenomenology. We'll try to build up systematic connections between objective and subjective properties, and then we will try to explain these regularities by a set of underlying principles. These principles will start out quite complex. At the very beginning we will have initial principles connecting reportability and consciousness, for example, or complex behavior and consciousness, as a kind of guide leading us to something more specific. We can then move to specific empirical principles connecting specific neural systems with conscious experience. We'll have some sort of non-reductive principle saying that when you have certain kinds of activity, for example, in the intralaminar nuclei, you have certain kinds of conscious experience. These principles will be useful, but they will still be quite complex. One wouldn't want to invoke a connection between the intralaminar nuclei and consciousness as a primitive element in one's scientific theory: one wants to explain why this connection holds in terms of simpler and more universal principles, plus some details of boundary conditions and local conditions and the like. This reflects a methodology found throughout science, where we explain the complex in terms of the simple.

So we want to explain the emergence of complex consciousness from complex systems in a brain in terms of certain simpler bridging principles. And we want to make the primitive component in our theory, the component which goes unexplained, as small as possible. Now if what I've said is right, one can't reduce that primitive component to zero. To reduce that component to zero would be to say that the story about experience could be logically deduced from the structural and functional story about the brain, and that's a pipe-dream. Nevertheless, one wants to add the minimal component which is going to cross the bridge.

There are a couple of ways in which a theory of consciousness will be different from theories in other domains. The first is that in other domains, no primitive bridging principles are usually required. For example, in explaining life, an account of interactions between the various functional components of a living system is enough to eventually explain reproduction, metabolism, and all the phenomena of life. One doesn't need any further primitive bridges to explain why these functions yield life. But for consciousness, we need primitive vertical bridges in the theoretical structure. Philosophers can argue back and forth about the nature of these bridges. Some philosophers say "we'll call them fundamental laws". Other philosophers say "we'll call them identities" (so that a certain brain process is "identical" to a certain kind of experience). That's a philosophical argument which doesn't matter much here. The important point is that these principles are going to be explanatorily primitive at some point, to get the theory off the ground.

A second important point is that a science of consciousness will be in a deep sense a first-person science. In this domain, you can't get away from first-person phenomenology. If you throw away first-person phenomenology about subjective experience, you've thrown away the data. The initial data for postulating subjective experience come from the first-person point of view: without the first-person data, one has a science of consciousness without consciousness. So some sort of phenomenological study is crucial to a theory of subjective experience, if only to get a theory off the ground. This sometimes works by a simple bootstrapping process. One does enough phenomenology to know that when I'm conscious of something, I can report it and talk about it. One then uses that simple principle to bootstrap oneself to other cases. When other people talk about consciousness, one says "they're making the reports, so they're conscious", and we bootstrap to data about their consciousness in that way. At other times, phenomenology may play a more detailed role, as in careful introspective studies of one's experience and how it correlates with underlying processes. Either way, the role of phenomenology in a theory of consciousness is ineliminable. One can't have a science of subjectivity without bringing in the subject.

You might worry that this leads to an uncomfortable loss of intersubjectivity. There's something to this: the privacy of subjective experience means that the data aren't as easily and universally accessible as in other domains. But this is not to say the theory will be ungrounded. A theory of subjective experience will remain grounded both in third-person data and in first-person phenomenological data. You need both sorts of data to get a theory of subjective experience off the ground. We all have access to this sort of data, and our theories will be evaluated according to how well they fit the data. Once we have the simplest set of principles that predicts the data about experience from facts about processing, then I think we will have good reasons to accept those principles as true.

Of course, for now this is a long way off. People can speculate on what these principles might be, but it may be five or fifty or five hundred years until we have a really good theory. So it's not something I'm recommending that everybody concentrate on now. I have speculated elsewhere (Chalmers 1996) about the shape of a theory, but at this point it's only speculation. For a detailed bridge we need more detailed research. Not everyone will be working on this bridge directly while they're working on the meat-and-potatoes questions. But while they are working on those questions, they can at least keep an eye on the phenomena of experience, and note any systematic connections between processing and experience. In the end this sort of thing - careful experiment, phenomenology, connection, systematization, and simplification - may lead us to the universal principles in virtue of which processing supports experience, and thus to the core of a theory of consciousness.

References

Bogen, J.E. 1995. On the neurophysiology of consciousness, part I. Consciousness and Cognition 4:52-62.

Chalmers, D.J. 1996. The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.

Chalmers, D.J. 1998. On the search for the neural correlate of consciousness. In (S. Hameroff, A. Kaszniak, and A. Scott, eds) Toward a Science of Consciousness II. MIT Press.

Crick, F. and Koch, C. 1990. Towards a neurobiological theory of consciousness. Seminars in the Neurosciences 2:263-275.

Hameroff, S.R. and Penrose, R. 1996. Conscious events as orchestrated space-time selections. Journal of Consciousness Studies 3:36-53. Reprinted in (J. Shear, ed) Explaining Consciousness: The Hard Problem. MIT Press.

Logothetis, N.K., Leopold, D.A., and Sheinberg, D.L. 1996. What is rivalling during binocular rivalry? Nature 380:621-624.

Milner, A.D. and Goodale, M.A. 1995. The Visual Brain in Action. Oxford University Press.

Penfield, W. and Jasper, H.H. 1954. Epilepsy and the Functional Anatomy of the Human Brain. Little, Brown.