From bayne@U.Arizona.EDU Wed Nov 3 13:54:05 1999
Date: Wed, 03 Nov 1999 13:54:57 -0700 (MST)
From: Timothy J Bayne
Subject: Nagel
Cc: David Chalmers, scicon@paradox.soc-sci.arizona.edu

I had a discussion with someone this morning about Nagel and some interesting stuff seemed to come out of it. I haven't reread Nagel, so it may not be true to his intent.

(1) Think of an objective phenomenology as something like an (allocentric) space-time map of the world. Every physical thing in the world is at some space-time distance from every other thing. Distinct beings can share this conception of the world. Similarly, a phenomenological dimensional space is a framework onto which all (possible?) phenomenal states can be mapped. Our own (actual and possible) experiences fall somewhere in this multi-dimensional space. The bat's experiences will occur somewhere within this space too.

(2) First problem: what is the structure of this phenomenal space? There are at least two parts to that problem: (a) what is the structure of our own phenomenal space? (b) how does the structure of our phenomenal experience relate to that of any possible phenomenal experience?
(3) Second problem: where on this phenomenal map does the bat fall? Tough one, but interestingly distinct from (2).

(4) First thesis: having experience x is not sufficient for knowing what it's like to experience x. Knowing what it's like to experience x involves understanding the dimensional structure of x, but one can have x without appreciating this dimensional structure. A being that only experienced a single shade of blue might have no appreciation of the structure of color.

(5) Second thesis: having an understanding of some structural aspect of experience x might enable you to partially know what it's like to experience x. The sound of trumpets and red might share a high-level structural feature, which makes this analogy partially illuminating.

(6) Third thesis: the richer your understanding of the structure of an experience, the closer you can get to understanding it without having it. Thus, Hume on the missing shade of blue: if you have experienced the other shades, then you probably have an understanding of most of the dimensions that will structure the missing shade.

(7) Fourth thesis: an objective phenomenology enables you to go from understanding what it's like to experience x to understanding what it's like to experience anything else (perhaps given that there's some structural similarity), but it doesn't bootstrap you into understanding phenomenal states from an absence of phenomenal states. That is, if we had this objective phenomenal map, we could understand what it's like to be a bat (or anything else), but zombies couldn't use this map, 'cos they can't get positioned on it at all. Objective phenomenology allows you to get from one phenomenal state to others; it doesn't allow you to go from merely physical states to phenomenal states. Thus, Mary can use it to know what it's like to see red (without seeing red), but Zombie Mary can't.

tim

Timothy J. Bayne
213 Social Science
Department of Philosophy
University of Arizona
Tucson, AZ 85721 USA
Hm ph. (520) 298 1930

From lnielsen@azstarnet.com Sun Nov 7 14:51:24 1999
Date: Sun, 07 Nov 1999 14:58:14 -0700
From: Lis Nielsen
Subject: Animal Consciousness
To: scicon@paradox.soc-sci.arizona.edu
Reply-to: lnielsen@u.arizona.edu

Part 1. Gallup is confused, and it's not about consciousness at all. The term consciousness is used loosely in Gallup's article, and the real debate between Gallup and Povinelli concerns whether the animals they study have a theory of mind or 'empathy' (also rather loosely conceived).
Specifically, do they have the sort of self-concepts that allow for understanding of their own mental states as mental states, and can they extend this knowledge to encompass an appreciation that other creatures like themselves have mental states like their own?

The relationship between theory of mind and consciousness, according to Gallup, seems at first to go something like this: if animals have a theory of other minds, then they must first have a theory of their own minds, and having such a theory makes them likely possessors of consciousness (presumably because if they're thinking about all these mental states, they must be doing it consciously). Later, in some sweeping claims at the end of the article, Gallup equates consciousness and self-consciousness. He suggests that creatures without self-consciousness may have "clever brains but blank minds." Having a blank mind is more than not having a theory of other minds; it is compared to sleepwalking or blindsight, states in which sensory inputs can influence behavior without any phenomenal consciousness of that input. Blank minds are really phenomenally empty. The link between self-consciousness and phenomenal consciousness turns out to be causal, according to Gallup, who (revealing his Cartesian allegiance) writes, "Being able to conceive of oneself in the first place is what makes consciousness and thinking possible."

Povinelli's statement of the focus of the debate is clearer, and he avoids the term consciousness altogether. He asks, "[W]hat excludes the possibility that evolution has simply produced 'mind-blind' mechanisms that lead social primates to look where other animals look, without entertaining any ideas about their visual perspective?" Why ascribe a theory of mind to these creatures, he asks, when a more parsimonious explanation of their behaviors will suffice?
He offers the idea of a kinesthetic self-concept to account for the behavior of animals in the mirror task, rejecting interpretations of this ability that infer an understanding of psychological states as psychological states on the animal's part. Povinelli doesn't seem to share Gallup's intuitions about the relationship between theory of mind and consciousness. What the animals lack, he says, is the ability to use their own experiences (phenomenally conscious, I presume) to make inferences about the experiences of others. Povinelli might be interpreted as sharing Nagel's assumption that animals have conscious experiences.

Part 2. But since Gallup brought it up, let's talk about it anyway. What does this discussion reveal about debates about animal consciousness? In particular, what exactly are these investigators imagining when they talk about "blank minds" filled with "mind-blind" mechanisms? Are such minds phenomenally conscious, but merely lacking in a theory of other minds? Or, lacking a theory of mind, are these animals not conscious at all, as Gallup suggests?

There are some fundamental assumptions lurking behind the idea that the burden of proof in discussions of animal consciousness lies with those who would argue for it, as opposed to those who would argue against it. Morgan's Canon, as we learned on Tuesday, cautions us to be parsimonious: we should refrain from offering complex explanations when simpler explanations suffice; therefore, we should not claim that consciousness exists in animals if simpler explanations can account for the observed behaviors. The assumption here is that there is an ascending hierarchy of mental capabilities or functions, with consciousness at the top. On Gallup's view we have consciousness, self-consciousness, and consciousness of other minds stacked at the top. Beneath this top level are abilities like learning, memory, and so on that can occur in blank minds. Phenomenal consciousness does not constitute a level of its own.
For Povinelli, phenomenal consciousness may have a level of its own, beneath the level where we find theory of mind and self-consciousness. Both investigators make the common assumption that many mental functions are mere mechanisms requiring no awareness for their function, mechanisms that inhabit lower levels of the hierarchy. Animal minds should be explained at the lowest level of the hierarchy that can account for the observed behaviors.

This highly mechanistic view of animal minds is a direct inheritance from certain recurrent themes in our Western intellectual tradition. It has ancient roots (both in Greek philosophy and Christianity) in a form of speciesism that places humans at the top of the psychological and biological heap. In Descartes, this view takes the form of a mind-body dualism in which our mental nature is markedly different from and superior to our bodily nature. When cast in evolutionary terms, it is the view that human consciousness is the most highly evolved mental function in the animal kingdom. Combined with the notion that animals don't have minds (or souls) at all, such views helped to justify the beginnings of animal experimentation.

We essentially have two parallel hierarchies that are assumed to map onto one another. The biological hierarchy has humans at the top, followed by other primates, then 'lower animals' of all sorts. The psychological hierarchy has consciousness at the top (maybe self-consciousness above it), complex cognition (memory, learning, etc.) below, and behavior at the bottom. The assumption here is that the higher levels are more complex. This assumption is incorporated into cognitive models of consciousness that place consciousness at the center or top of the information-processing system, and into evolutionary explanations of consciousness that argue for its role in dealing with the demands of more complex information-processing systems interacting with more complex environments.
In a recent article in the Journal of Consciousness Studies (1998, Volume 5, 3) entitled "Consciousness: A Natural History", Maxine Sheets-Johnstone questions the correctness of these hierarchical conceptions and of the assumption that "unconsciousness historically preceded consciousness" in animals. She suggests that proprioception may be the first evolved form of consciousness. The evolution of proprioception, she proposes, parallels the evolution of animate forms, such that from the very beginning of the ability of organisms to move, there was a need for a kind of flexible responsivity to external stimuli. It is arbitrary, she argues, to call this responsivity behavioral or cognitive when referring to 'lower animals' and conscious when referring to humans or 'higher animals.' The fact that this is frequently done has much to do, she claims, with our brain-centered notions of consciousness that disregard more embodied sensory abilities. She notes that the first human sense to develop is proprioception (it develops prenatally with the early development of motor pathways), and it is through this sense that we initially come to learn to move our bodies and to feel ourselves. This is a sense that we share with many 'simple' creatures.

Sheets-Johnstone provides a description of the proprioceptive abilities of invertebrates that makes the assumption that unconscious mechanisms explain the behaviors of these 'lower animals' look disturbingly ad hoc. In response to Dennett, who claims that in simple organisms "there is really nothing much to self-knowledge beyond the rudimentary biological wisdom enshrined in such maxims as 'When Hungry, Don't Eat Yourself!' and 'When there's a Pain, It's Yours!'", she questions the parsimony of explaining animal behaviors in terms of such mechanisms.
I'll close with the following quotation in which she makes this point: "[W]e should ask what it means to say that a lobster will eat another's claws but that conveniently, as Dennett puts it, it finds eating one of its own claws unthinkable. Does it mean that there is actually a rule 'Don't eat your own claws!' wired into the lobster's neurological circuitry? But it is patently unparsimonious to think that there is such a rule and just as patently absurd to think that every creature comes prepared with an owner's manual, as it were, a rulebook replete with what Dennett calls 'maxims'. Such a maxim, for example, would be only one of an indefinitely great number of maxims that a lobster (or, in analogous terms, any other 'simpler organism') could be said to carry around in the neural machinery that counts as its 'Headquarters': 'Don't try to go on land!' 'Don't try to eat a squid!' 'Shovel in new sand grains after molting!' 'The large claw is for crushing!' 'The small claw is for seizing and tearing!' And so on. … What makes eating its own claws 'conveniently unthinkable' is clearly something other than a rule of conduct. 'Convenience' is not a matter of an opportune adaptation but of an astoundingly varied and intricately detailed biological faculty that allows a creature to know its own body and its own body in movement. … These kinetic cognitional abilities constitute a corporeal consciousness. … A moment's serious reflection … discloses a major reason why … sensitivity to movement is both basic and paramount: no matter what the particular world in which an animal lives, it is not an unchanging world. Hence, whatever the animal, its movement cannot be absolutely programmed such that, for example, at all times its particular speed and direction of movement, its every impulse and stirring, its every pause and stillness, run automatically on something akin to a lifetime tape" (Sheets-Johnstone, 1998, pp. 274-8).
Offering mechanistic explanations for animal behaviors may reveal more about one's commitment to certain assumptions about the mappings between certain presupposed biological and psychological hierarchies of complexity than it does about one's commitment to parsimony of explanation.

------------------------
Lis Nielsen
Department of Psychology
University of Arizona
Tucson, AZ 85721-0068 USA
lnielsen@u.arizona.edu

From press@U.Arizona.EDU Sun Nov 7 16:52:56 1999
Date: Sun, 07 Nov 1999 16:53:45 -0700 (MST)
From: Joel K Press
Subject: Mirror Test
To: scicon@paradox.soc-sci.arizona.edu

All,

Here are some thoughts regarding the hypothesis (as discussed during our last class) that a basic understanding of mirrors plus what we were calling a self-schema might allow chimps to pass the Mirror test just as well as a full-fledged self-concept.
Dave seemed intrigued by this idea, but rightly pointed out that it was unclear whether the difference between the two views is empirically detectable. During the break, Juraj and I tossed around a few ideas about how an empirical test might work, but didn't really get very far. After a bit more thought, though, I think something along the lines of what we were talking about might work.

To clarify the hypothesis: the idea is that mirrors give a sighted creature visual information about areas of space that are not in the direction the creature's eyes are pointing. In particular, they give us information about areas of space defined by the law of reflection (angle of incidence = angle of reflection). Obviously, chimps don't understand optics, but they might be able to learn by trial and error how to coordinate mirror information with the rest of their visual information in such a way that they could locate objects seen in a mirror, grasp them, and so on. If so, locating the blue spot on their forehead is just a special case of using mirrors to locate things. Once the chimp realizes that the spot is located in a region of space that coincides with its skin, the self-schema can take over. That is, chimps may just be designed in such a way that they take a special interest in the region of space that coincides with their bodies, without realizing that it is "their" body.

Falsification of this hypothesis doesn't seem to be a problem. Since the hypothesis sees success in the Mirror test as just a special case of using mirrors to locate things, if chimps can't generally use mirrors to locate things other than themselves, it would be clear that something else was involved in the Mirror test.
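As a rough sketch of the geometry being appealed to (my illustration only, with made-up coordinates): a flat mirror shows an object at the position of its "virtual image", which is just the object's position reflected across the mirror plane. Coordinating mirror information with the rest of one's visual information amounts to undoing this reflection.

```python
# Illustrative sketch: where a flat mirror makes an object appear.
# The virtual image of a point is its reflection across the mirror
# plane; this is equivalent to the law of reflection (angle of
# incidence = angle of reflection) for every line of sight.

def reflect_across_plane(point, plane_point, normal):
    """Reflect a 3-D point across the plane through plane_point
    with the given unit normal vector."""
    # Signed distance from the point to the plane along the normal.
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, normal))
    # Move back twice that distance to land on the far side.
    return tuple(p - 2 * d * n for p, n in zip(point, normal))

# Mirror: the plane x = 0, normal pointing along +x (made-up setup).
mirror_point = (0.0, 0.0, 0.0)
mirror_normal = (1.0, 0.0, 0.0)

# A spot at x = 2 (say, on the viewer's forehead) appears at the
# virtual-image position x = -2, "behind" the mirror.
spot = (2.0, 1.0, 0.5)
virtual_image = reflect_across_plane(spot, mirror_point, mirror_normal)
print(virtual_image)  # (-2.0, 1.0, 0.5)
```

The trial-and-error learning hypothesized above would then amount to the chimp internalizing this mapping from image positions back to real positions, without any grasp of the optics.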
We might not want to require that they be able to use mirrors at all angles, since arguably the information they get from a mirror at, say, 45 degrees to their line of sight is harder to interpret than the information they get from a mirror that's close to perpendicular to their line of sight. But if it turns out that a chimp can locate spots on his body using the mirror, but not the banana right next to him (hidden from direct line of sight), that would tend to disprove the claim that chimps have a general ability to understand the visual information in mirrors.

Confirming the hypothesis seems to be the hard part. The general trend in the ideas that Juraj and I discussed involved introducing various sorts of degraded visual conditions. The idea was that one of the two competing methods (self-concept vs. self-schema plus an understanding of mirrors) was likely to be harder to adapt to degraded conditions than the other. If so, then we might be able to figure out which method was being used from the way that performance dropped off as a function of the visual degradation. One way of degrading the visual conditions would be to introduce distorting mirrors (like the ones in fun-houses at carnivals). Of course, the problem is that I have no idea which of these methods is more or less adaptable to these sorts of conditions. Juraj and I tried to come up with a priori reasons to think that one task would be harder than the other, but didn't come up with anything convincing.

The alternative I've come up with is to use human performance as a baseline for the relative difficulty of the adaptive tasks. Presumably, human beings have both a self-concept and a self-schema with mirror-understanding. If we can test these abilities separately in humans, we might get a quasi-objective measure of which type of process is more adaptable to fun-house mirrors.
Then we test the chimps and compare their drop in performance to the way that performance dropped off in humans, to see whether it is more like the drop-off on the self-concept test or the self-schema test. Here's what I have in mind:

1. Human self-concept test. We can't use humans looking at their own reflections in distorted mirrors, because even the densest human understands mirrors well enough to know that the distorted image is of himself, even if it is totally unrecognizable. Instead, we set up a hidden video camera right on top of a monitor which displays its output. Now the subject can't rely on his knowledge of the mechanism producing the image to identify it. Of course, as soon as he does identify it, he will realize that there is a camera somewhere, but by then we have our data. This task should be tried with varying amounts of distortion, preferably distortion that mimics the distorted mirrors used in the rest of the test.

2. Human mirror location test. Set the mirror up in such a way that the subject has to use the mirror to locate and grasp some object that he can't see directly. Again, we try this with increasingly distorted mirrors.

The data we get from these two trials should be a learning curve for each of the tasks. How long does it take the subjects to figure out how to use the new (increasingly) distorted mirrors to accomplish 1 and 2? Presumably, both tasks will get more difficult as the mirrors get more distorted. Hopefully, they will get more difficult in characteristically different ways. For example, one ability might decline geometrically, while the other declines linearly, or whatever. Then, it's the chimps' turn:

3. Give the Mirror test to the chimps using increasingly distorted mirrors. Again, their performance on the test should get worse as the test gets harder. If we are lucky, their learning curve for this task will strongly resemble one, but not the other, of the curves obtained from 1 or 2.
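The comparison in step 3 could be sketched as follows (my illustration, not part of the original proposal; all performance numbers are invented, and a real analysis would need proper curve-fitting and statistics):

```python
# Illustrative sketch: which human baseline curve does the chimp
# curve resemble more? Each curve lists performance (0..1) at
# increasing levels of mirror distortion. All data are made up.

def sse(curve_a, curve_b):
    """Sum of squared differences between two equal-length curves."""
    return sum((a - b) ** 2 for a, b in zip(curve_a, curve_b))

def closer_match(chimp, self_concept_curve, self_schema_curve):
    """Return the label of the baseline with the smaller error."""
    err_concept = sse(chimp, self_concept_curve)
    err_schema = sse(chimp, self_schema_curve)
    return "self-concept" if err_concept < err_schema else "self-schema"

# Hypothetical human baselines at five distortion levels:
human_self_concept = [1.0, 0.9, 0.7, 0.4, 0.1]   # drops off steeply
human_self_schema  = [1.0, 0.95, 0.9, 0.8, 0.7]  # degrades gracefully

chimp = [1.0, 0.9, 0.85, 0.75, 0.65]
print(closer_match(chimp, human_self_concept, human_self_schema))
# prints "self-schema"
```

This is only the crudest version of the comparison; the point is just that once both species' drop-off curves are in hand, "resembles one but not the other" can be made quantitative.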
We would then have at least some evidence that the chimp was using that method. Obviously, there are some serious experimental difficulties (comparing the human and chimp data, and getting curves that are clearly different enough to strongly support one hypothesis over the other, just to name a couple). The whole thing may just be unworkable, but I thought I'd throw it out in the open where other more experimentally minded folks might either find ways to improve it or put it out of its misery. Suggestions?

Experimentally,
Joel

From sawright@U.Arizona.EDU Sun Nov 7 21:50:51 1999
Date: Sun, 07 Nov 1999 21:51:43 -0700 (MST)
From: Sarah A Wright
Subject: Evolutionary explanation
To: scicon@paradox.soc-sci.arizona.edu

Since I had to miss out on the discussion on Tuesday, due to illness, I hope that these comments aren't too far off the discussion there.
They are a bit tangential, but I think of general interest for assessing the evidential support of primate studies.

In the section "The Meaning of Self-Recognition," Povinelli ties his research together and comes to the conclusion that great apes have a kinesthetic self-concept, and not a robust psychological self-concept. After reaching this conclusion he tacks on an extra theory about the evolutionary source of a kinesthetic self-concept. My concern is that this add-on is not just another theory which is related to and supported by the same evidence, but rather is used as a support for Povinelli's primary conclusion. If Povinelli wants the evolutionary theory to play this supporting role, then I think that we must analyze the sort and strength of support that such explanations can give to a theory of current processing.

So we have a story according to which it is of benefit to the great apes to develop a kinesthetic self-concept. They got so big, so fast, that maneuvering around in the treetops required a new kind of self-regulation, and a kinesthetic self-concept can play the regulating role. Further, it is all that is needed; a psychological self-concept does not help one swing from the trees any better. But what about gorillas? They are an anomaly, being both large and failing the self-recognition tasks. Is this a counterexample? No, since gorillas are primarily land-dwelling. As land dwellers, they lost the need for fine-grained self-regulation, and lost the kinesthetic self-concept. Thus Povinelli's theory can explain the source of the self-concept and its exceptions, and this seems like an additional virtue of the theory.

But is it? While kinesthetic self-regulation can be worked into a historical story, so can psychological self-regulation. Even if the simpler form of self-regulation will do for regulating tree-swinging behavior, that does not count against the possibility of a more complex form.
It does so only if we accept a further claim (which is often not clearly articulated in these sorts of arguments): that natural selection will pick the minimal way of achieving an ability. To argue in this way is to ignore the real possibility of spandrels, or free-riders, in the evolutionary game. While certain theories play this possibility up or down, none deny that we can have an occasional spandrel; appeals to simplicity (particularly when the alternatives are close in complexity) can't be decisive.

While this objection doesn't undermine Povinelli's other arguments, I do think that the introduction of evolutionary explanations (here and elsewhere) can be done in such a way as to suggest support without actually giving any. Perhaps support isn't Povinelli's aim in giving his evolutionary explanation. But I think in this case, and in others where support is suggested, we should be critical.

Any comments?

Sarah

From serobert@U.Arizona.EDU Sun Nov 7 23:40:14 1999
Date: Sun, 07 Nov 1999 23:41:07 -0700 (MST)
From: Simon E Roberts-Thomson
Subject: Povinelli
To: Sarah A Wright
Cc: scicon@paradox.soc-sci.arizona.edu

I wanted to address some of Sarah's concerns. When Povinelli is talking about the kinesthetic self-concept, it actually seems as though he has two arguments in mind, and it is not at all clear that they are compatible.

(1) Heavy tree dwellers vs. light tree dwellers. According to Povinelli, those animals which are relatively heavy (i.e. 40-80 kg) encounter "qualitatively different" problems when moving from tree to tree than do those animals which are lighter. It is not made clear exactly what these problems are, but presumably they have something to do with the fact that there are not as many options (with respect to those parts of the tree that can be safely held) for a heavy animal as there are for a light animal. Povinelli hypothesises that the increase in body size of the great apes might have necessitated a "high-level self-representational system" in order to plan their inter- and intra-arboreal movements. This system of self-representation is a "kinesthetic self-concept": "an explicit representation of the position and movement of their own bodies".

(2) Tree dwellers vs. ground dwellers. Those great apes which did not remain in the trees will presumably not have required the kinesthetic self-concept. If they are primarily ground-dwellers, then such a self-representation (which is designed to enable inter- and intra-arboreal movement) will be superfluous. Thus we can see how gorillas would not have needed to acquire such a self-concept.

It seems reasonable to assume that tree dwellers need a kinesthetic self-concept that is adequate to enable them to move through trees (although not literally of course!).
The idea that this self-concept needs to be somehow better than the ground-dwellers' seems to me to be in need of further justification, beyond that given in the paper (I don't know; has this been done elsewhere?). Likewise, the idea that heavy animals need a greater self-concept than lighter ones also seems not uncontroversial. I take it, however, that (2) is the more controversial.

According to the logic of (2), it would seem that humans should have less of a kinesthetic self-concept than do orangutans, in just the same way as gorillas. Povinelli, however, says that "[h]umans, in contrast, slowed down their growth rate, allowing more years for cognitive development". But why would slowing growth enable cognitive development? In (1), Povinelli seems to be saying that it is *increased* growth, along with the issues involved in being a tree-dweller, that gave rise to the kinesthetic self-concept. Now, however, he asserts that *slower* growth, combined with living on the ground, led to increased cognitive development. It is far from clear that these claims are compatible. In (1), the reason why the kinesthetic self-concept arose was the increase in size. In (2), it seems as though cognitive development is possible only given a lack of growth. What is going on here?

Perhaps when Povinelli talks about a kinesthetic self-concept, he is talking about something different from cognitive development. But if this is so, then it would seem that humans are a counter-example to his argument. If not, then it is unclear what force his argument has.

Any suggestions?

Simon.
From bradt@U.Arizona.EDU Mon Nov 8 02:19:13 1999
Date: Mon, 08 Nov 1999 02:20:04 -0700 (MST)
From: Brad J Thompson
Subject: Re: Animal Consciousness
To: lnielsen@U.Arizona.EDU
Cc: scicon@paradox.soc-sci.arizona.edu, bradt@U.Arizona.EDU
Status: R

I agree completely with Lis' contention that Gallup's assimilation of consciousness to what he calls "self-consciousness" is confused and, more importantly, mistaken. If "self-consciousness" in his mouth refers to the kind of thing that the mirror test *might* test for (having a self-concept etc.), then self-consciousness has nothing fundamental to do with consciousness. [I add the qualification "fundamental" here because, of course, if the deployment of a self-concept or a theory of mind is a conscious activity, then it will trivially have *something* to do with consciousness.]
With her point against Povinelli and with her discussion of the assumptions regarding complexity, I think Lis is pointing to the fact that questions of animal consciousness are distinct from questions of the complexity of animal cognition. The question of theory of mind in animals, along with matters regarding representation and intentionality, belongs in the second category.

It is easy to want to draw conclusions about the presence of consciousness from the degree of representational complexity in an animal species. I know that reading about vervet monkeys in Cheney and Seyfarth's "How Monkeys See the World" (which I highly recommend) had this effect on me, as did seeing a documentary about orangutans. It appears that we pre-theoretically do base our assessment of consciousness on representational or behavioral complexity. The question is whether this is legitimate.

I think that there has to be *some* relationship here. One might start by saying that some sort of representational capacity is required for consciousness--maybe only representations are the kinds of things that can be conscious. This would at least justify not attributing consciousness to rocks. And it might justify the position that anything that is merely a behaviorist organism with no internal representations lacks consciousness. I think this might still be compatible with Lis' suggestion that crabs are conscious, depending on what constraints we place on counting as a representation. But it looks like we need something like this connection between consciousness and representation in order to avoid panpsychism.

I think we also need some kind of assumption about the relationship between consciousness and representation or cognitive complexity in order to even begin an empirical investigation of animal consciousness. I know that I'm conscious directly, and I assume that other human beings are conscious because they are physically similar to me.
But what is my inroad to asking whether my cat is conscious? I could notice that she has a brain and base my answer on that--but this would be due to an assumption about what it is that makes me conscious (namely, having a brain). But you might think that having a brain makes me conscious because of what my brain *does*, and something other than a bunch of neurons could perform the same function. And one of the things that my brain does, and which seems closely connected to consciousness, is represent states in the environment. So maybe it is this representational capacity that we should look for in non-human animals in order to determine whether they are conscious. Perhaps increased degrees of representational complexity are correlated with different types or degrees of consciousness. This would be a very substantive claim, but if it were true it might justify looking into theory of mind as an insight into something relating to consciousness. Unfortunately, I don't think this is what is going on in Gallup's reasoning.

This leads me to what may be a point of disagreement with Lis. I would want to defend Morgan's Canon and its use in the case of the mirror test. But this is separable from the first issue Lis points to about conflating consciousness with empathy or self-consciousness. Maybe Gallup is assuming that there is a hierarchy and that consciousness is at the top (such that "lower levels" don't involve consciousness), but this does not follow from his adherence to Morgan's Canon alone. The placement of consciousness at the top of the purported hierarchy would be the culprit here, in my opinion. But the idea of a hierarchy is not problematic, assuming that all that is meant here is degree of complexity, etc. Perhaps the term "hierarchy" has moral connotations that are misleading--I would suggest abandoning the term then and just speaking of computational complexity.
But it seems to me that Morgan's Canon is appropriate for asking questions about the computational complexity of various cognitive activities in animals AND in humans. But I agree with Lis that we should be careful about making assumptions about where consciousness fits in with this matter of computational complexity. Let me emphasize again that I think we should not attribute more complexity than is necessary to explain behavior, to either animals or humans. This is just Occam's Razor, as Dave mentioned. So my wanting to defend Morgan's Canon is not the result of human-centrism. And as I suggested earlier, this is especially true given my agreement with Lis that consciousness is not some extra thing at the top of a hierarchy which only humans and maybe "higher" animals possess. Even after applying Morgan's Canon and offering a "lower-level" (in info-processing terms) explanation of an animal's behavior, we might *still* want to attribute consciousness to that animal and even to the processes in question.
Brad
----------------------
Brad J Thompson
bradt@U.Arizona.EDU

From bayne@U.Arizona.EDU Mon Nov 8 08:48:57 1999
Date: Mon, 08 Nov 1999 08:49:42 -0700 (MST)
From: Timothy J Bayne
Subject: Re: Mirror Test
Cc: scicon@paradox.soc-sci.arizona.edu
Status: R

If you're interested in the mirror test, you may be interested in the following papers:

(1) Robert W. Mitchell (1997) "Kinesthetic-Visual Matching and the Self-concept as Explanations of Mirror Self-Recognition", Journal for the Theory of Social Behavior 27/1, 17-39. Mitchell's paper is a nice discussion of both Gallup and Povinelli; he rejects their accounts for a leaner model of what's going on.

(2) Lawrence Davis (1989) "Self-consciousness in Chimps and Pigeons", Philosophical Psychology, 2/3, pp. 249-59. (U of A library doesn't have this volume.)

tim

Timothy J. Bayne
RM. 213 Social Science
Department of Philosophy
University of Arizona
Tucson, AZ 85721 USA
Hm ph.
(520) 298 1930

From press@U.Arizona.EDU Mon Nov 8 19:55:52 1999
Date: Mon, 08 Nov 1999 19:56:36 -0700 (MST)
From: Joel K Press
Subject: Re: Povinelli
To: Simon E Roberts-Thomson
Cc: Sarah A Wright, scicon@paradox.soc-sci.arizona.edu
Status: R

All,

Regarding the apparent contradiction between the human possession of a self-concept and the gorillas' apparent lack of one, it seems there is a perfectly good evolutionary explanation. It is true that both humans and gorillas are land dwellers rather than tree dwellers, so, by Povinelli's argument, neither has a need for a self-concept. However, gorillas diverged from humans earlier than chimpanzees did, and chimpanzees do live in trees. So our self-concept may have developed after our gorilla cousins had already "descended from the trees." When our ancestors adopted a non-tree lifestyle, they already had a self-concept, unlike the gorillas.
At first, this self-concept might not have been essential to our survival, but self-concepts are generally useful, so as long as the cost of keeping one isn't too high, it probably won't be bred out. In fact, Povinelli would probably argue that our purportedly more robust sense of self evolved out of the old one because it turned out that self-concepts are valuable in a wide variety of environments.

Joel

From switanek@U.Arizona.EDU Tue Nov 9 01:51:33 1999
Date: Tue, 09 Nov 1999 01:52:03 -0700 (MST)
From: Nicholas J Switanek
Subject: Re: Animal Consciousness and Nagel's answer
To: lnielsen@U.Arizona.EDU
Cc: scicon@paradox.soc-sci.arizona.edu
Status: R

Lis' suggestion not to be too anthropocentric reminded me of a wonderful idea that was discussed in class last Tuesday: that maybe gorillas notice the spot, but just don't get bothered about it.
We seem most able to empathize with, and then impute consciousness to, those species that share our vanities. I, too, will become agitated when my teeth start falling out, and would freak out if someone dyed my hair pink while I was sleeping. If a tell-tale sign of self-consciousness is vanity (and we might recall the pre-university, round about middle- to high-school, use of the word 'self-conscious'), then indeed we might be embarrassed to put self-consciousness at the top of the hierarchy of cognitive complexity.

I wonder not about chimps learning, which is generally considered not to require consciousness, but about chimps teaching. Might not chimps have to empathize with their pupils, believe that the pupils' worldviews are rather akin to their own? Otherwise, it seems chimps couldn't be all that hopeful of success in teaching.

I still find Nagel's article rather defeatist. First, there exists an essential otherness to another subject's point of view that prevents me from ever fully comprehending that subject's experience, at least insofar as imagination might be a means of getting closer to what it is like for a subject to experience something as that subject. And Nagel thinks that this--imagination, or "adopting [the subject's] point of view"--is the most natural way we go about answering the question "What is it like...?" Imagination gets us part of the way because it is something like experience, which is the means by which we know "what it is like to be us." Indeed, in trying to access the phenomenality of a bat, "The best evidence would come from the experience of bats, if we only knew what they were like." Experience, or the imagination of experience, and not explanation, is the most common way of answering "What is it like?"
And although, in part because of a recognition of the constraints on our capacities for experience and imagination, we realize we must turn to explanations in order to answer the question in the case of other subjects, especially those from other species, still we are frustrated by the unfeeling answers of a primitive science. So when we measure them against our expectations of a true and robust answer, we inevitably find objective answers lacking. "If the subjective character of experience is fully comprehensible only from one point of view, then any shift to greater objectivity--that is, less attachment to a specific viewpoint--does not take us nearer to the real nature of the phenomenon: it takes us farther away from it."

This last quote of Nagel's is bleak. Moving towards a more objective perspective is wrongheaded, and gaining the subject's point of view is impossible. I think Nagel recognizes that a move towards the objective is wrongheaded only insofar as such a move is presently counterintuitive. But our present intuition is that the only satisfactory answer to "What is it like...?" is an experience or an imagination thereof. If a science of consciousness could explain _how_ the physical and the mental are linked, maybe one could learn to accept physicalist answers, which should in turn allow robust answers to Nagel's title question, and others like it. Akins' article demonstrates the substantial success one can have in moving towards the objective and, incidentally, as we explored in class, gives us a much more accurate idea of what we might try to imagine.

But I did have one problem with Akins' article. How are decibels measured? The mustached bat emits beeps and swooping screeches at 100 decibels, "as loud as a rock concert." The jungle must be a terrifying place at night! But how is this supposed to work, since a rock concert can be heard for miles, but a bat can only count on its music carrying a measly six feet?

nick
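[A quick aside on Nick's decibel puzzle, from the editor: a decibel figure is a sound pressure level measured near the source, and how far the sound carries depends on geometric spreading loss plus atmospheric absorption, which grows steeply with frequency. Air absorbs a bat's ~60 kHz ultrasound on the order of a dB or more per meter, while the low frequencies that dominate a rock concert lose only hundredths of a dB per meter. So two sources that are equally loud up close can have wildly different ranges. The sketch below illustrates this with rough, assumed absorption coefficients (they are illustrative, not Akins' own figures, and the real atmospheric values vary with humidity and temperature):]

```python
import math

def received_level(source_db, distance_m, absorption_db_per_m, ref_m=0.1):
    """Approximate sound level at distance_m for a source whose level
    source_db was measured at the reference distance ref_m.

    Two losses are modeled:
      - spherical spreading (inverse-square law): 20*log10(d/d0) dB
      - atmospheric absorption: a per-meter cost that rises sharply
        with frequency (values below are assumptions for illustration)
    """
    spreading_loss = 20 * math.log10(distance_m / ref_m)
    absorption_loss = absorption_db_per_m * (distance_m - ref_m)
    return source_db - spreading_loss - absorption_loss

# Illustrative coefficients (assumed): ~1.5 dB/m for ~60 kHz ultrasound,
# ~0.01 dB/m for the low-frequency energy of a concert.
bat_close = received_level(100, 6.0, absorption_db_per_m=1.5)
concert_close = received_level(100, 6.0, absorption_db_per_m=0.01)
bat_far = received_level(100, 100.0, absorption_db_per_m=1.5)
concert_far = received_level(100, 100.0, absorption_db_per_m=0.01)
```

At six meters the two are still comparable, but by a hundred meters the bat's call has been absorbed into inaudibility while the concert's low frequencies are still going strong: same "100 dB", very different reach.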