Commentary on Dan Lloyd, "What is it Like to Be a Net?"
The project that Dan Lloyd has undertaken is admirable and audacious. He has tried to boil down the substrate of information-processing that underlies conscious experience to some very simple elements, in order to gain a better understanding of the phenomenon. Some people will suspect that by considering a model as simple as a connectionist network, Dan has thrown away everything that is interesting about consciousness. Perhaps there is something to that complaint, but I will take a different tack. It seems to me that if we apply his own reasoning, we can see that Dan has not taken things far enough. When we have boiled things down to a system as simple as a connectionist network, it seems faint-hearted to stop there, and perhaps a little arbitrary as well. So I will take things further, and ask what seems to be the really interesting question in the vicinity: what is it like to be a thermostat?
A quick glance at diagrams of the models shows that there are a lot of similarities between connectionist networks and thermostats. Both take an input, perform a quick and easy nonlinear transformation on it, and produce an output. Of course, there are a few extra units and connections in the connectionist network, but one wonders how relevant this whiff of complexity will ultimately be to the arguments about consciousness. Someone once said that there are no reasonable numbers between one and infinity. Once a model with five units, say, is to be regarded as a model of consciousness, surely a model with one unit will also yield some insight.
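To make the structural parallel vivid, here is a minimal sketch of the two devices side by side (my own illustration in Python, not anything from Lloyd's paper; the function names, the 20-degree setpoint, and the weights are arbitrary choices):

```python
import math

def thermostat(temperature, setpoint=20.0):
    """A one-unit 'model': a hard nonlinear (step) transformation of its
    input. Returns 1 (heat on) if the reading is below the setpoint, else 0."""
    return 1 if temperature < setpoint else 0

def connectionist_unit(inputs, weights, bias):
    """A standard connectionist unit: a weighted sum of its inputs passed
    through a smooth nonlinearity (here, the logistic sigmoid)."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))

# Both take an input, apply a nonlinear transformation, and produce an output.
print(thermostat(15.0))                          # -> 1 (cold: heat on)
print(connectionist_unit([15.0], [-1.0], 20.0))  # -> ~0.99 (near saturation)
```

On this level of description, the thermostat is simply the degenerate one-unit case, with a step function standing in for the sigmoid.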
Indeed, if we apply Lloyd's own reasoning, the thermostat does very well as a model of consciousness. For a start: like NETtalk, it can be provided with an interpretation that also construes it as a model of the world. Specifically, it can be naturally construed as representing the world's temperature. Furthermore, it satisfies Lloyd's "coincidence condition", requiring a structural similarity between the world for the system and the world for us. Just as NETtalk captures salient distinctions in the world of English phonetics, the thermostat captures salient distinctions in the world of temperature. Indeed, it captures the single most salient such distinction, the distinction between hot and cold.
Now, to be sure, the thermostat does not use distributed representation, which Lloyd makes much of, and consequently it does not support the relevant superposition of information. To get around this disanalogy, we might relax our standards of simplicity just a touch and move to the "superthermostat", also known as the two-unit connectionist network. It seems to me that we could achieve the kind of distribution and superposition that Lloyd is interested in with this network, but I am inclined to suppose that we need not clutter our model with such needless complexity. After all, does it not seem that this rich superposition of information is an inessential element of consciousness? To be sure, in the glory of human consciousness we find this richness, as Lloyd illustrates with the manifold aspects of a Tudor cottage, but the aim of the current exercise is to simplify and abstract.
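To see what the move to two units might buy, here is a sketch of the "superthermostat" (again my own illustration; the weight values and the humidity input are invented for the example, not drawn from Lloyd):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def superthermostat(temperature, humidity):
    """A two-unit network in which each unit responds to a weighted blend
    of both inputs, so neither distinction (hot/cold, wet/dry) lives in a
    single unit: both contents are spread across the same two activations."""
    unit1 = sigmoid(0.8 * temperature + 0.3 * humidity - 15.0)
    unit2 = sigmoid(0.3 * temperature + 0.8 * humidity - 15.0)
    return unit1, unit2

# The joint pattern of activation, not any single unit, carries the content:
print(superthermostat(25.0, 5.0))   # hot, dry  -> roughly (high, low)
print(superthermostat(5.0, 25.0))   # cold, wet -> roughly (low, high)
```

Since each unit blends both inputs, the hot/cold and wet/dry distinctions are recoverable only from the pattern across the pair - a toy version of the distribution and superposition at issue.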
Surely, somewhere on the continuum between systems with rich and complex conscious experience and systems with no experience at all, there are systems with simple conscious experience. A model with superposition of information seems to be more than we need - why, after all, should not the simplest cases involve information experienced discretely? We might imagine a traumatized creature that is blind to every other distinction to which humans are normally sensitive, but which can still experience hot and cold. Despite the lack of superposition, this experience would still qualify as a phenomenology. So, just as a connectionist network qualifies as a model of superimposed phenomenology, the thermostat seems to qualify as a model of this basic kind of phenomenology. Indeed, if we accept Lloyd's approach, it seems to follow that in this model we have stripped down the substrate of phenomenology to its bare essentials. (They say that Euclid alone has looked on beauty bare, but surely this comes close.)
Having come this far this quickly, perhaps it is time to sit back and get some perspective. A thermostat, or indeed a simple connectionist network, as a model of conscious experience? This is indeed very surprising. Either there is a deep insight somewhere within Lloyd's reasoning, or something has gone terribly wrong. I am inclined to think that there is an element of truth in both these diagnoses of Lloyd's counterintuitive claim.
First, the insight. What Lloyd's approach brings out is that when we try to isolate the kind of processing that is required for conscious experience, the requirements are remarkably hard to pin down, and a careful analysis does not throw up processing criteria that are more than minimal. What are some reasonable-seeming functional criteria for conscious experience? One traditional criterion is reportability, but this is far too strong to be an across-the-board requirement. It seems reasonable to suppose that dogs and cats have conscious experience, even in the absence of an ability to report. A weaker criterion is introspectability: perhaps for a content to be experienced a system needs to be thinking about the content, or at least able to think about the content. I am sympathetic, though, with Lloyd's remarks that this sort of thing seems to be more symptomatic of reflective consciousness than of primary consciousness. On the face of it, it seems plausible that we can experience the fringes of our visual field without thinking about those experiences; and it is not implausible that a dog, say, might have visual experiences but entirely lack the conceptual capacity to monitor those experiences at a higher level. I don't think the higher-order thought view of consciousness leads to an infinite regress, as Lloyd suggests - on such a view, a first-order thought will be conscious if it is accompanied by a higher-order thought, but the higher-order thought need not itself be conscious, so the regress is terminated - but it nevertheless seems more appropriate as an account of reflective rather than primary consciousness. Of course the issue is complex, but strong intuitions suggest that a system could be experiencing while only thinking about the world, not about its own mental states.
Sometimes it is suggested that conscious experience is a consequence of sufficient complexity, but this answer simply slides over the problem. Complexity is often relevant to the existence of some high-level phenomenon, but what is relevant is never complexity tout court, but the role that this complexity plays in a system. Life requires complexity, because complexity is required for adaptation and reproduction. If complexity is required for consciousness, it will be in virtue of some further functional property that this complexity enables, and we are seeing that this functional property is hard to pin down.
Indeed, on reflection it is hard to see why the intuitions that lead us to ascribe conscious experience to dogs and cats should prevent us from ascribing it to mice, or to flies, or to simpler systems. Such systems may lack such frills as language, a rich conceptual system, the ability to introspect, and perhaps even a concept of self, but why should experience require any of these? When we look at the kind of processes that give rise to experience in humans, such as the processes underlying color vision, what seems most essential is that the processes make certain discriminations, and make the relevant information available to the overall system in the control of behavior. This is a very basic sort of information processing, and is something that even a very simple system might share. It is this sort of reasoning that makes it seem just possible, after all, that there might be something it is like to be a net, or even a thermostat.
I confess that I was a little disappointed, though, after Dan's bold title, to see his nerve failing at the crucial juncture. Almost as an aside, he concedes that in fact NETtalk lacks experience, and that the answer to his title question is therefore "nothing". NETtalk, then, is not an instantiation of conscious experience; it is only a model of it.
This claim is weaker on the surface, but in fact I find it even harder to believe that NETtalk is a model of experience than that NETtalk has experience, counterintuitive though that second claim may be. For a model carries a particular burden: it must explain. And this leads us to what is perhaps the central worry about Lloyd's approach. On the face of it, this approach is put forward as a way of dealing with Nagel's worries about consciousness, where the central mystery is: why is there something it is like to be us at all? There is a huge prima facie mystery about how any sort of physical system could possess conscious experience. Lloyd holds out the promise that connectionist models might shed light on this question, but at the end of the day the models seem to leave the key explanatory question unanswered. Even if we were to go out on a limb and suppose that these simple systems are conscious, the question of explanation would still remain untouched.
What is it that these models might explain? On the face of it, they hold out the promise of explaining our abilities to make certain distinctions, and to exploit those distinctions in the control of behavior. We might ultimately see how the formation of a sophisticated world-model through information-processing enables a repertoire of actions that reflect the sophistication of that model. But where, in these models, is an explanation of experience? The problem in explaining experience is the apparent gulf between the brain and the quality of experience itself. A model might be expected to make this link more intelligible, but these models leave it as wide as ever. Why should this sort of processing be responsible for experience? There is no answer to be found by examining the models, or indeed within Lloyd's discussion. These are only models of processes, and they leave the gulf between process and experience as wide as ever.
Perhaps the best way to regard these models is as follows. We take the existence of experience for granted, and note that at least in familiar cases, there is a remarkable coherence between the structural properties of our experience and certain structural properties of our cognitive processes: the distinctions within phenomenology seem to parallel the distinctions made by our perceptual system, and so on. If we take this principle for granted, we can perhaps "model" the structure of consciousness indirectly by modeling the structure of our cognitive processing: so when Lloyd finds a certain superpositional structure in a network, this suggests itself as a substrate for the superpositional structure of our experience. This is a worthy explanatory achievement, even though it leaves the key question unanswered: why is there experience at all? These "models" take the existence of experience as a brute fact. Indeed, to gain their explanatory purchase, they must assume wholesale some such principle of structural coherence between experience and processing. But surely, these assumptions are precisely what we want a theory of experience to ultimately explain. Perhaps the existence of experience and these principles of coherence will ultimately need to be taken as basic; but if so, what we will be left with is irreducibility and perhaps even a kind of dualism, rather than the kind of reductive explanation that Lloyd is searching for.
The value of these models, I think, is that they reduce the substrate of processing that underlies conscious experience to its most basic core. Once we get to that core, we find the explanatory gulf remains as wide as ever. Some will be tempted to respond by increasing the complexity of the models, but that ultimately misses the point. The problem with those models is not their simplicity - rather, it is the simplicity of the models that brings out the problems with any models of this kind. At the end of the day, a more complex model of processing will give us just more of the same. A processing model may yield a terrific explanation of functions, abilities, and capacities, but to explain experience, it is simply the wrong sort of thing. The moral that I draw is that for a theory of consciousness, we must look elsewhere. But that is a topic for another day.