EXPERIENCE AS REPRESENTATION
by
Fred Dretske
That
perceptual beliefs are representational is not much disputed these days. More controversial is the idea that
perceptual experiences are too. Even
more controversial is the claim that perceptual experiences are not only
representational, but that their phenomenal character--the qualities that
determine what it is like to have the experience--are completely given
by the properties the experience represents things to have. That is the thesis I mean to examine (and
defend) here.
In speaking of perceptual experiences as
representational one might only mean that these experiences are (normally) of
things. They possess intentionality
in the philosopher's technical sense of that word. We see books, hear bells, and smell garlic. When veridical, then, visual, auditory, and
olfactory experiences are of (or about) books, bells, and garlic. If this is all one means by a representational
theory of experience, it is hard to see how to avoid a representational theory. I mean more than this. I mean that experienced qualities, the way
things phenomenally seem to be (when, for instance, one sees or hallucinates an
orange pumpkin), are--all of them--properties the experience represents
things as having. Since the qualities
objects are represented as having are qualities they sometimes--in fact (given
a modicum of realism) qualities they usually--possess, the features that define
what it is like to have an experience are properties that the objects we
experience (not our experience of them) have.
If qualia are understood (as I understand them) to be qualities that, in
having an experience, one is consciously aware of--those qualities (therefore)
that, from a first person point of view, distinguish one type of experience
from another, then qualia are a subset of objective physical properties.
That
is a mouthful, so I'll break it up into more digestible chunks. First, a few words about representation.
1. What are Representations?
In
speaking of representations I am guided by familiar instances of (what I call)
conventional representations--instruments, gauges, stories and pictures. Thermometers represent temperature,
speedometers speed. Pictures and
stories represent the objects and events they are pictures of and stories
about. Some representations are
pictorial, others are not. I call such
representations conventional because their power to represent is, in one way or
another, underwritten by our collective purposes and intentions. Change the way we use or regard these
artifacts and you change (perhaps even eliminate) their meaning, what they say
or represent about the rest of the world.
If experiences are representational, they are presumably not
conventional in this way. They are natural
or original (Haugeland 1981)
representations, representations whose power to say how things stand in some
other part of the world is not derived from agents who (by having intentions
and purposes) already possess this power.
Aside from this difference, though, a representational theory holds that
experiences resemble ordinary representations in important respects. If they did not, I would see little point in
calling them representations. What follows
is a brief catalog of these important respects.
There
are representational vehicles--the
object, event, or condition that represents--and representational contents--the condition or situation the vehicle
represents as being so. In speaking
about representations, then, we must be clear whether we are talking about
content or vehicle, about what is represented or the representation
itself. It makes a big difference. In the case of mental representations, the
vehicle (a belief or an experience) is in the head. Content--what is believed and experienced--is (typically)
not. The same is true with ordinary,
familiar representations. Is the story
in the book? The story-vehicle is, but
not the content. That is why we cannot infer, from the fact that there are dragons in the story and the story is in the book, that there are dragons in the book. When I speak of representations I shall
always mean representational vehicle.
If I mean content, I will say so.
Representations
need not be about a particular object (spatio-temporal particular) in order to
have a content. Following fairly
standard causal thinking, I take the object(s) of a representation to be the
object(s) that stand in the right causal relation to it. If there is no object that stands in the
right causal relation to R (the representational vehicle), then R is not about
an object. R nonetheless still has a
content--what it would be saying about an object if it had an object. Think of
radar misrepresenting an aircraft approaching from the east. There is a slowly moving blip that--when
things are working right--is the kind of blip an aircraft produces. In this case, though, no such aircraft
exists. The radar still
"says" there is. What it says
about this non-existent aircraft is that it is approaching from the east. The pattern on the screen is about (or, if
you don't like this language, the pattern says) something in the same way
fictional stories are about (or say) things.
It is about an approaching aircraft in the same way Shakespeare's play, Hamlet,
is about an indecisive Danish prince.[1]
Representations lacking an object have a content fixed by the ways they (mis)represent the world to
be.[2]
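For readers who find a schematic helpful, the vehicle/content/object distinction just drawn can be put in the form of a small data structure. The sketch below (in Python; the type, field names, and values are mine, invented purely for illustration) records only that a vehicle carries an attributive content whether or not any object stands in the right causal relation to it.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class RadarBlip:
        """A representational vehicle: the pattern on the radar screen."""
        content: str                 # attributive content: what the blip "says"
        cause: Optional[str] = None  # the object, if any, in the right causal relation

    # Veridical case: an aircraft caused the blip, so the blip is about that aircraft.
    veridical = RadarBlip(content="aircraft approaching from the east",
                          cause="aircraft over the eastern sector")

    # Misrepresentation: no aircraft exists, yet the blip has the very same content.
    objectless = RadarBlip(content="aircraft approaching from the east", cause=None)

    assert veridical.content == objectless.content  # content does not require an object

The point of the sketch is only that content attaches to the vehicle; the presence or absence of an object affects what, if anything, the representation is about, not what it says.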
On a representational view of experience, the phenomenal character of our
experience is determined not by the objects we experience--these can change or
be non-existent while the experience remains, subjectively, exactly the same
kind of experience--but by the way the experience represents things to be, the
properties it represents objects (if there are any) to have. In speaking of content, then, we must always
be understood to mean attributive content, what the representation says or
represents about a possibly non-existing object. As Davies (1996) puts it, the content of a perceptual
representation is not object-involving.
It
is important to distinguish the property an object is represented as having (call this M) from the property of representing an object to be M (call this Mr). If a perceptual experience represents an
object to be moving, M, then, if the representation is veridical, it is the
object that is M while the experience is Mr. If qualia are
properties one is aware of in having perceptual experiences--the properties
objects (if there are any) phenomenally seem
to have--then it is M, movement, a property (if the experience is veridical) of
the object experienced, not Mr, a property of the experience, that
is the quale.[3] According to a representational view of
perceptual experience, therefore, qualia are properties that physical objects,
the ones we experience, normally have.
They are not properties that experiences have. We are aware of movement, M, not Mr. I have no idea what the property of
representing something to be moving, Mr, looks (smells, feels,
sounds) like. I suspect it doesn't
look, sound, smell, or feel like anything, and for roughly the same reason that meaning dog (a property of the word "dog") does not look, sound, or
feel like anything. Certainly not like
a dog. Or the word
"dog."
One
last point before getting down to business.
Some people think of all mental representation as essentially
conceptual. To mentally represent x as
M is to subsume x under the concept M, to believe, judge, or think that x is
M. If a person, S, does not know or understand what M is, does not have the concept M, then neither S nor anything in S (e.g., a perceptual experience) can represent something as M. You can't experience movement if you don't
understand what movement is.
I
have no quarrel with conceptual representations--I think of perceptual beliefs
that way myself--but it seems clear that if experience is to be understood in
representational terms, the representations cannot be conceptual. Seeing a clock hand move is different from
believing that it is moving. You don't have to see it moving to believe it is
moving, and you can see it moving without believing it is moving. Even birds, bees, and human infants
experience movement, and they don't need the concept movement to do it.
One no more needs the concept movement to experience movement than one
needs numerical concepts to experience multiplicity. An experience of five objects
is different from one of four (six, eight, etc.) objects, and it is different
even for animals (including children)
who do not know how to count and are, therefore, incapable of believing or
judging that there are five objects before them. If one lacks the concept five, one will be unable to
describe the five fingers one sees as looking like five fingers, but we--those
who have the requisite concepts--can surely describe things in this way for
those unable to do it for themselves.
Perhaps they cannot describe themselves as hungry either, but we
can. A representational theory of
experience says that a visual experience can represent the fingers on one's
hand as five (not as four or six)
even when the experience occurs in a person (or animal) lacking the ability to
say or think that this is how things are being experienced.[4]
I'll
not try to say what the difference is between an experiential and a conceptual
representation. I've tried to do that
elsewhere. In Dretske (1981) I made the
distinction in terms of an analog and a digital encoding of information. In Dretske (1995) the same idea was cast in
terms of systemic (phylogenetic origin) and acquired (ontogenetic origin) forms
of representation. But this isn't the
place to develop or review these efforts.
It is enough if we agree that a representational theory of experience
must distinguish, in representational terms, between an experience of an
object's properties--in the case of vision, its movement, color, orientation,
shape, size, texture, and so on--and a judgment (belief, knowledge) that some
object has those properties. If these
are both representations, they are different kinds of representation, and a
representational theory of the mind should, in the final analysis, be able to
account for this difference. That, though, is a project for another time. For now we are interested in the question of
just how plausible it is to think of experience exclusively in representational terms, whatever
the final analysis of representation turns out to be.
2. Experiences as Representations.
There are two--maybe (depending on how one counts) even three--compelling reasons for thinking
of experiences in representational terms.
There is, first, the fact that the properties that individuate
experiences, the ones that distinguish one type of experience from
another--qualia, in short--are not (at least they need not be) qualities of
anything in the head of the experiencer and, therefore, given that experiences
occur in the head, not qualities of the experience. They need not, in fact, be
qualities of anything. Nothing needs to
have the properties we experience (at least not at the time we experience
them). There need be nothing orange and
pumpkin-shaped--certainly not in the head at the time the experience is
occurring--for us to have an experience as of an orange pumpkin. How is this possible? The same way it is possible to have stories
(Cinderella, for instance) about (coaches turning back into) pumpkins
without there being anything in the book (vehicle) resembling magical
pumpkins. The properties and situations
one is aware of in having an experience are just like the properties and
situations described in stories: intentional.
The world needn't contain them in order to be represented as containing
them.
This intentional aspect of representation is evident in even such
familiar measuring instruments as speedometers. Nothing need be going 100 mph for my (malfunctioning)
speedometer to represent me as going that fast. Even when the representation is veridical, it (the
representation) need not have the properties it says the object (the car) has. Ordinarily, of course, a speedometer
(located in the car whose speed it represents) has the same speed it represents
the car as having, but the police have stationary devices that can represent a
car as going 100 mph. According to a
representational theory, experiences are like that. The representational vehicle, the thing in your head, doesn't (or
needn't) have the properties it represents the world as having. The thing in your head that represents the
object out there as moving needn't itself be moving. That is why looking in a person's (or bat's) head won't reveal
the qualities being experienced by the person (bat) in whose head one looks. What I experience (see) when I look in
another person's head are the representational vehicles--electrical-chemical
events in gray, soggy, brain stuff; what the person experiences (sees), on the
other hand, is representational content--a bright orange pumpkin.
I
don't know of any other theory about the nature of sense experience that tells
this satisfying a story about first vs. third person aspects of
experience. If we agree that
experiences of orange pumpkins exist in the brain of the person seeing the
pumpkins, why can't other people tell what these experiences are like by
looking in the brain, at the experiences, of the person having them? For the same reason I can't tell what story
is being told by looking inside a book written in Chinese. It may, for all I can tell, be about coaches
turning into pumpkins. I can see the
representations clearly enough, but, with no understanding of the code (in this
case, the language), I fail to understand their meaning (content). You can't identify representational content
by looking at the objects (vehicles) that have it.
Since,
as far as I can tell, a representational theory is the only materialistic
theory that accounts for these otherwise puzzling facts about experience, only
a representational theory successfully bridges the explanatory gap. A representational theory of experience
doesn't, I admit, solve the "hard" problem of consciousness. It bridges this explanatory gap only
by opening up an equally puzzling gap somewhere else: how do electrical and
chemical events in gray soggy brain stuff manage to represent bright orange
pumpkins? We all understand how marks on paper--viz., the words "bright
orange pumpkin"--manage to do this.
We give the symbols this power by making them mean bright orange
pumpkin. We could, collectively,
make these words mean something else.
But no one, presumably, gave the events occurring in our brains their
meaning. If they are representations,
they are not, like words, conventional representations of the objects we see and
hear. So where do brains get their power--this original or natural power--to
represent (non-conceptually) the things we experience?
I
have a view about this (see Dretske 1995), but this isn't the place to drag it
out. The problem I am addressing here
is not what gives brains their power to represent the qualities that constitute
experience, but the question of whether a representational story gives a
satisfying account of phenomenal experience.
If it does, then we can turn to the next question: what, then, is a
satisfying account of representation?
But one thing at a time. Let's first be clear about whether there are
aspects of experience, ways we see and feel the world, that cannot be
interpreted as ways the world is represented.
If there are, then even if experience is representational, it is not only
representational. So even if we have an
adequate theory of representation, one capable of marking the difference
between conceptual (belief) and non-conceptual (experience) forms of
representation, we will, in the end, need something else, something more, to
obtain a complete theory of experience.
Before
turning to the question of whether a representational theory provides a
complete account of phenomenal experience, let me finish giving reasons for
thinking it is, at least, part of the story. The second reason for favoring a representational analysis of
experience may be nothing more than a restatement of the first reason, but I
think it gives a sufficiently different slant on things to justify mentioning
it. As we have already seen, qualia, on
a representational view, are a subset of ordinary physical properties, the ones
that objects are represented (in experience) as having. When the representations are veridical, when
things actually are the way they look, objects actually have the properties
they are represented as having. Square
and moving, for instance, are two of the qualia of experiences
(perceptions) of moving squares. These
are properties of the objects we see--the moving square. They are not (or need not be) properties of
our experience of moving squares. Your
experiences of moving squares aren't moving squares.
This
means that the properties we use to individuate experiences are the objective
properties of the objects we experience, not the properties of the experiences
themselves. We distinguish experiences
not in terms of their properties, but
in terms of the properties that their objects (if there are any) have. This sounds remarkably like a
representational mode of classification, remarkably like the way we classify,
say, stories and pictures. A is
a biography of Oscar Wilde, B a
history of the Spanish Civil War. We
put these books on different library shelves not because they--the
books--have such different properties, but because the things they describe are
so very different. It is the same with
pictures. C is a picture of a
Turkish business tycoon, D a picture of the mayor of Chicago. These are different pictures, doubtless to
end up in different photo albums or in different museums, not because they are
so different (they could be indistinguishable; the mayor might be the tycoon's
twin brother), but because what they represent, what they are pictures of, is
so different.
Once
we realize that the properties we use to classify subjective experiences are
not the properties they have but the publicly accessible properties that
external objects (the ones these experiences are normally of) have, it
becomes--or so I think--irresistible to explain this unusual classificatory
procedure in representational terms.
Experiences differ because the objects they are (normally) of differ in
these experientially detectable ways.
Speaking
in favor of a representational account of perceptual experience there is,
finally, the fact that it solves an old and a very troubling philosophical
problem. Although the appeal of this
consideration is limited to realists of a certain stripe, I mention it
anyway. Most people are perceptual
realists. Besides, since solutions to
philosophical problems do not grow on trees, this consideration should carry
some weight.
By
perceptual realism I mean the view that in ordinary perception we are directly
aware of physical objects and events--things that exist independently of our
perception of them. Seeing a tree is
not to be understood as awareness of some mental intermediary (an image,
a sense-datum) having the properties the tree appears to have. Almost everyone, in their unguarded moments,
and outside the philosophy classroom, believes this. Perceptual realism of this sort, however, has always had trouble
with hallucinations. What is it one
sees (experiences) when hallucinating a pink rat? Certainly not a pink rat--everyone agrees about that--but is it,
nonetheless, something that, like a pink rat,
is pink and rat-shaped? Is it
something that exemplifies these properties?
There certainly seems to be something, but, if there is, where is this
pink, rat-shaped, thing? It isn't out
there. Just ask your friends. For materialists it isn't in the head
either. There is nothing pink and
(probably) nothing rat-shaped in the brain of a person hallucinating pink rats.
The difficulty of answering this question has inspired some pretty desperate moves
in philosophy--e.g., adverbial theories of perceptual experience. A representational theory provides a happy
rescue. In hallucinating pink rats we
are aware of something--the properties, pink and rat-shaped, that something is represented as having--but we are not aware of any object that has these properties--a pink, rat-shaped object. We are aware
of pure universals, uninstantiated properties.
A representation, remember, doesn't need an object that has the
properties the representation represents something to have. The radar doesn't need an object, external
or internal, much less an airplane, approaching from the east to represent
something at 35,000 feet approaching from the east. Representations are first and foremost representations of
properties--altitude, direction, temperature, pressure, shape, color, size,
hardness, distance, velocity, position and so on. If nothing stands in the right causal relationship to the
representation to qualify as the object being represented, then the
representation, unconnected to a proper object, says that something has
these--as it turns out--uninstantiated properties when nothing, in fact, has
them. There is no need to invent
internal objects to have the properties an unconnected representation
mistakenly attributes to an object.
Just as radar can "hallucinate" an airplane approaching from
the east at 35,000 feet, a visual experience can represent a pink rat in the
corner without there being anything--certainly nothing pink and rat-shaped--it
represents to be that way. There does
not have to be anything X represents to have the property Y for X to represent
there to be something having the property Y.
Problem solved.
3. Problems: Some Examples.
That is the good news.
Now the bad news. What speaks
against this account of perceptual experience?
One thing that might tell against a representational theory is the absence of any plausible theory of original or natural representation that could put some flesh on these bones. If we
can't imagine how electrical-chemical activity in the nervous system could
represent there to be a pink rat eight feet in front of a person, then, despite
the philosophical benefits of thinking of visual (auditory, etc.) experience in
this way, the theory never really gets out of the gate. It would be nice if it were true, yes, but
we can't imagine how it could be true.
There
are, of course, philosophical theories of representation (including my own: see
Dretske 1995) that purport to give a naturalistic account of representation, an
account that describes how biological systems actually perform this wondrous
feat. But there isn't much agreement
(at least not much positive agreement) about the plausibility of these
theories. One of the stumbling blocks
is that any plausible theory of representation (just like any plausible theory
of meaning) is externalistic. It
locates a system's powers of representation in the network of relations
(causal, informational, etiological) the system bears (or bore) to external
affairs. This has the consequence that
representational content (hence, on a representational theory, the quality of
experience) does not supervene on the biology of a system. Biologically
identical organisms (with different histories, say) can be representing their
environments in completely different ways.
On a representational theory of perceptual experience, then, they can be
having much different experiences of the things they see. But how can biologically identical organisms
be having different experiences?[5]
Although many philosophers have been willing to accept this result for belief
(one person believes the liquid he is seeing is H2O, his biological
twin believes it is XYZ), fewer are willing to accept it for a person's
perceptual experience of the liquid.
Whatever difference there may be in their beliefs about the liquid, it
must look the same to both of them.
Perhaps
there are (or will be) plausible theories of representation that (unlike my
own) avoid this consequence by making representational content supervene on the
current, internal, state of the agent.
I don't see how this can be done, but maybe my imagination is too
feeble. Or perhaps (this is my own
view) this (to some) intolerable consequence can be neutralized by a deeper
understanding of exactly what it is (i.e., phenomenal experience) we are giving
a representational theory of. Since I
have already said all that I can usefully say on this topic (Dretske 1995:
Chapter 5; 1996), I leave the matter here without further comment. About these matters--matters of relative
plausibility--people must judge for themselves.
I
turn, instead, to an aspect of this theory that has probably attracted the most
critical attention. This is the idea
that the way things seem (phenomenally) is completely given by the
representational character of the experience.
Once you have said how the experience represents things to be, you have
said everything about the experience that is subjectively accessible to the
person having the experience. As might
be expected from its uncompromising generality, that claim has attracted a
torrent of counterexamples. The
argument is not that there are phenomenal qualities that, as a matter of fact,
the human nervous system cannot represent.
That would be an argument about the representational capacity of
biological systems, and I do not hear that empirical argument being mounted by
philosophers. The argument, rather, is
that there are certain qualities we experience that cannot be understood as
qualities anything is being represented as having and this for reasons having to
do either with the nature of representation itself or the specific quality in
question. The argument goes like this:
in having an experience of type E, things seem F to us; but nothing in us can
represent something to be F; therefore experiences of type E are not (contrary
to representationalism) completely determined by their representational
properties.
Once
the dialectical position is expressed in this way, and once we appreciate the
enormous variety of representational systems (e.g., stories, instruments,
pictures), it is clear that convincing arguments of this form will not be easy
to find. How does one find a property
that nothing in us can represent something as having? Isn't the very act of specifying the property a way of
representing it? Wouldn't a property
that nothing could be represented as having be one that was absolutely
undetectable? If it was detectable,
then we could build an instrument to detect it. This instrument could then represent (possibly misrepresent)
objects as having the property. So the
property, if it is detectable, is a property we can imagine instruments (and if
instruments, why not nervous systems?) representing objects to have. It seems, then, that counterexamples to a
representational theory of experience will be forced to appeal to undetectable
properties as the ones that the theory cannot give an account of.
These
are, I confess, first reactions--intuitions, if you will--of someone committed
to a representational theory of experience.
It is my way of saying that I do not see how a representational theory
of experience can be refuted by
arm-chair philosophy. Once we
are clear about exactly what biological representations are, perhaps it can be
refuted by the scientific facts. But
not a priori reasoning. Still,
it is worth looking at the arguments.
Maybe I'm missing something.
I
won't be looking at all the arguments.
There are too many. I'll pick
and choose. My choices are self-serving, of course, but I think something can
be learned by looking at two examples in particular. They exhibit, each in its own way, a common tendency to
misinterpret the representational story.
Their failure, therefore, is instructive. Whether all counterexamples can be handled as easily as
these is an important question that I won't even try to answer.
In criticizing my (Dretske 1995) representational account
of sense experience Kent Bach (1997:
467) cites the following kind of case:
The most obvious objection to phenomenal
externalism is that there are some phenomenal properties that really are
attributable to experiences themselves.
. . . For example, visual experiences can become blurry, as when one
removes one's glasses, without their objects appearing to have become fuzzy. Their objects look different, of course, but
do not look to have changed.
The objection is that although (in removing one's glasses) one
experiences blurriness, one's experience does not represent the experienced
object as blurry. Hence, there are some
phenomenal properties that are not properties things are represented as
having. These properties, Bach
suggests, are properties of the experience itself.
One
is tempted to ask whether there is supposed to be something in the head (that,
we are assuming, is where the experience occurs) that gets blurry when one
removes one's glasses. Is it this, the representational vehicle, some part of the visual cortex, whose blurriness one becomes aware of when things look blurry?
Surely
not. Putting on rose-colored glasses
(so that objects look rose colored) does not change the color of things in the
head. The visual cortex doesn't turn
pink. Why, then, should taking off one's glasses so that the objects look blurry make one aware of something (an experience in the head) that is blurry? Once we abandon a sense-data theory of
perception, we realize that nothing--neither the object seen nor the
experience of it--need be blurry for objects to look blurry. And once we appreciate that fact, we are
back to a representational theory of experience. Blurry is the way experience represents objects, and you don't
need a blurry representation to represent things as blurry. You can do it, for example, with sharply
printed words.
It
is easy to confuse: (1) properties of a representation with (2) properties the
representation represents the objects (being represented) as having (i.e., the
intentional properties). This is
especially so with pictorial representations.
Imagine two pictures, one a blurry picture (e.g., a photograph taken
with an out-of-focus camera) of a sharply defined object (a block of ice), the
other a sharply focused picture of a fuzzy-edged object (e.g., a wispy
cloud). The sharp picture of the fuzzy
cloud might resemble the blurry picture of sharp ice. So, if the picture of the
ice is blurry, if blurriness is a property of this representation, then the
sharp picture of the fuzzy cloud, looking much the same, should also be
blurry. But the picture of the cloud is
not at all blurry. Maybe the cloud is
blurry (I don't think so), but the picture certainly isn't. It is perfectly in focus. Blurriness, when applied to pictures, refers
to the way an object is represented. A
picture is blurry if it represents a sharp object as having fuzzy edges--when,
that is, the property an object is represented as having does not correspond to
the property of the object being represented.
That is why the picture of the cloud--though it looks exactly like a
blurry picture--is not blurry. There is
no misrepresentation. The borders of
the object being represented are as fuzzy as they are represented as
being.
When
Bach describes an experience as being blurry he is confusing an intentional
property of a representation--how the experience represents things to be--with
a property of the representation itself.
This is easy to do with blurry pictorial representations since pictorial
images of a sharp object actually have the property (fuzzy edges) they
represent the sharp object to have. So
it is easy to mistake the intentional property for a property of the
representation. No one confuses these
properties in the case of verbal representations. That is because the words
describing something as blurry (e.g., "blurry") need not (like a
picture) have fuzzy edges to do it.
Some
of Peacocke's (1983) examples involving constancy phenomena might well involve
a similar confusion, although they are harder to analyze since, in this earlier
work, Peacocke is not distinguishing, as I am doing here (and as he does
later--Peacocke 1992), conceptual from non-conceptual representations. If we set aside this difference as best we
can, though, his examples are worth discussing because they raise deep and
puzzling questions about the nature of
perceptual experience and the phenomenal qualities that define it. I will discuss only one of his examples.
You see two trees, one of them at one hundred
yards (call this one Close Tree), the other (Far Tree) at two hundred
yards. The trees are the same size, and
(constancy mechanisms being what they are) they look to you to be the same
size: that is, "taking your experience at face value you would judge that
the trees are roughly the same physical size." (1983, p. 12) Peacocke concedes that this property (viz.,
same physical size) is a representational property of the experience. It is the way your experience represents the
objects. Nonetheless, Close Tree takes
up more of your visual field; it (as psychologists like to put it) subtends a
greater angle than does Far Tree. This
feature is as much a feature of your experience of the trees as is the fact
that they look to be the same size. In
this sense (relative amount of visual field occupied), then, Close Tree looks bigger
than Far Tree.
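To make the geometry concrete (the particular numbers are mine, not Peacocke's): a tree of height h viewed head-on at distance d subtends a visual angle of roughly 2 arctan(h/2d). A 30-foot tree, for instance, subtends about 2 arctan(15/300), roughly 5.7 degrees, at one hundred yards, and about 2 arctan(15/600), roughly 2.9 degrees, at two hundred yards. Close Tree thus subtends approximately twice the angle Far Tree does, even while both are represented as being the same physical size.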
The
question is whether one's experience of the trees not only represents Close
Tree as the same size as Far Tree (as a result of the operation of size
constancy mechanisms) but also, in some different (but phenomenologically
accessible) sense, represents Close Tree as larger than Far Tree.[6] I see nothing wrong with saying that the
experience represents both things.
This, of course, is no contradiction.
It is merely the difference between an object-centered (allocentric)
description of the trees (as being the same size) versus a perceiver-centered
(egocentric) description of one as being larger than (i.e., occupying more of
the perceiver's visual field) the other.
The trees are also being represented as being at different distances
from the perceiver, and the way trees of the same size are represented as being
at different distances from the perceiver is by representing them as occupying
different areas of the visual field.
All
this seems straightforward and not only consistent with, but supportive of, a
representational account of perceptual experience. I mention the example, nonetheless, because of my suspicions that
some people will find the example convincing against a representational theory
for the wrong reasons--exactly the kind of reasons that led Bach to confuse
blurry awareness of objects with awareness of blurry objects. The thinking goes like this: in seeing trees
as the same size we are consciously aware not only of the property the trees
are represented as having (same size) but also the non-representational
properties in virtue of which they are seen that way--viz., the comparative
amounts of the visual field the trees occupy.
Close Tree is "larger" than Far Tree in the same way two
pigmented areas on a perspective drawing of the same-sized trees (at different
distances) are of different size. The drawing represents the two trees as being
the same size, and the way it does this is by splotches of pigment of very different sizes. In the drawing, Close
Tree is represented by a 2" splotch of green, Far Tree by a 1"
splotch of green. In looking at the
drawing, we are aware of both the property the trees are represented as having
(same size) and the property of the drawing (representation) by means of which
they are represented this way (2" splotch in lower left corner, 1"
splotch in the upper left). It is easy
to think, confusedly, that our ordinary, unmediated perception of trees is like
this. We are aware of both the way
(same size) our experience represents the trees and (if we attend to it)
the properties of the representation (relative size of the representations) by
means of which they are represented that way.
That
is the same mistake as in the Bach example.
One mistakes a represented property--the comparative amount of the visual field the two trees occupy--for a property of the representation. In viewing a photograph of the two trees, I
am aware of a property of the representation (the relative size of the two
splotches of pigment), but in viewing trees I am not. Both of the properties I am aware of when I see the trees--their comparative size (an allocentric property) and their distance from me (an egocentric property)--are (relational) properties the trees are represented as having. Unlike viewing a picture, I am not aware of (and do not represent
there to be) two differently sized objects that (together with appropriate placement of these objects in the
picture) represent trees as being the same size. The only objects I am aware of in seeing the trees are the trees
themselves, and they are, and they are represented as being, the same
size. If things are working right, these
objects are also represented as being different distances from me and,
therefore, as a matter of geometry, as necessarily subtending different angles
and (what amounts to the same thing) as occupying different areas of my visual
field. These, however, are all
properties the trees are represented as having, not (as in pictures of
trees) properties of those objects
(splotches of pigment on the picture surface) that represent the trees.
4. Modes of Presentation.
I
conclude this skimpy survey of counterexamples with a brief discussion of modes
of presentation, an idea that enjoys wide appeal but that has no place in a
representational theory of experience.
Ned
Block (1995) invites one to compare a visual experience of a property with an
auditory experience of the same property.
His choice of property is overhead. Hearing a sound as coming from overhead is, he says, different
from seeing something as overhead. The quality of the experience is different
but the same property is being experienced (234). So, he concludes, there is a
phenomenal difference in these experiences that is not traceable to
representational differences. The
property overhead has different modes of presentation--in this case, a
visual and an auditory mode.
Michael
Tye (1995b: 157) argues that an auditory, but not a visual, representation of
the property overhead represents loudness. So the phenomenal difference between seeing
something overhead and hearing something overhead may not be a result of the
different way the property overhead is being presented (a
non-representational difference) but a difference in what else besides overhead
is being represented. I suggested the
same in Dretske (1995). To show that a
modal difference is non-representational it is not enough to show that there is
a phenomenal difference in experiences of the same property. One has to show
that the difference in the experiences is the result of that (and no other)
property. Otherwise the phenomenal difference can be attributed to what else
is being represented in the two modes.
And
how might one do this? It will not be
easy. It will not be easy because even
if one could isolate a property that was experienced in splendid isolation in
two distinct modalities (nothing else was represented in either mode) a
representational theorist could always take refuge in the possibility that
whatever phenomenal differences persist in the two ways of experiencing this
property are to be accounted for in representational terms by a concurrent representation of modal differences. That is to say, in representing F-ness in
mode V (vision) and T (touch), the phenomenal difference in our awareness of
F-ness might be explained as the difference in representing F and (some aspect of) V in the first case
and F and (some aspect of) T in the other. In representing a property, there is--or there may always be--a
representation of the channel over which information about that property is
received. If this were so, then even if
there were phenomenal differences associated with different modes of access to
objective properties, differences in our experience of objective properties
would still be representational differences.
Is
this implausible? I don't think
so. Consider our perception of
movement. We know that whether or not
we sense movement in a perceptual object (whether it appears to be moving in
the phenomenal sense of "appear") depends not simply on what happens
on the retina. It also depends on
information the brain receives and the commands it gives, whether or not executed (Rock 1975: 187), concerning the position and movement of the eyes. If you fixate a stationary object, you sense
no movement. If you track a moving
object--thereby keeping the retinal image immobile (as immobile as when you
fixate a stationary object)--you sense movement. So the experience of movement depends on information the visual
system has about itself. On a
representational view of experience this means that the quality of experience
is given not just by information about what is happening on the retina (and points further out), but also by information about what is happening in the perceptual system
itself (points further in). This being
so, can't we suppose that modal differences (the alleged differences in the way
properties are presented in two sensory modes) are really
representational differences--differences in what else the perceptual
systems represent about themselves?
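The point about eye movement can be put schematically. The toy model below (in Python; the additive rule and the names are my own gloss, offered only as an illustration, not as Rock's or anyone else's theory) combines retinal information with extra-retinal information about movement of the eye:

    def represented_object_motion(retinal_slip: float, eye_velocity: float) -> float:
        """Toy model: the movement attributed to the object is the movement of its
        retinal image plus the (commanded) movement of the eye."""
        return retinal_slip + eye_velocity

    # Fixating a stationary object: no retinal slip, no eye movement, no sensed movement.
    assert represented_object_motion(0.0, 0.0) == 0.0

    # Tracking a moving object: the retinal image is held immobile, but the
    # eye-movement signal still yields sensed movement.
    assert represented_object_motion(0.0, 5.0) == 5.0

On such a model, what is sensed depends on what the system represents about the world together with what it represents about itself.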
I
do not know how plausible this suggestion is.
Maybe it is far-fetched. Still,
the possibility is worth mentioning if only to indicate how hard it is to find
effective counterexamples to a representational account of experience. The only thing that is impossible according to a representational theory is a phenomenal difference with no representational difference. As long
as there exist modal differences, though, a representational theorist can take
refuge in the idea that, perhaps, different ways of gaining access to the world
are themselves represented in our experience of the world.
REFERENCES
Bach, K. 1997. Engineering the mind. Review of Naturalizing the Mind by
Fred Dretske. Philosophy and
Phenomenological Research, LVII.2 (June 1997), pp. 459-468.
Block, N. 1995. On a confusion about a
function of consciousness. In Behavioral
and Brain Sciences 18: 227-287.
Block, N. 1997. Author's Response. In Behavioral and Brain Sciences 20.1: 159-166.
Davies, M. 1996. Externalism and Experience. Philosophy and Cognitive Science:
Categories, Consciousness, and Reasoning.
Eds. A. Clark, J. Ezquerro, and J.M. Larrazabai. Dordrecht: Kluwer Academic Publishers.
Dretske, F. 1981. Knowledge and the Flow of Information. Cambridge, MA; MIT Press.
Dretske, F. 1995. Naturalizing the
Mind. Cambridge, MA; MIT Press.
Dretske, F. 1996. Phenomenal
Externalism: If Meanings ain't in the Head, Where are Qualia? Philosophical Issues, 7 (1996),
Enrique Villanueva, ed. Ridgeview
Publishing Company; Atascadero, CA. Pp.
143-158.
Harman, G. 1989. Some philosophical issues in cognitive
science: qualia, intentionality, and the mind-body problem. In Foundations of Cognitive Science,
Michael Posner, ed. Cambridge, MA; MIT
Press. 831-848.
Harman, G. 1990. The intrinsic quality of
experience. In Philosophical
Perspectives, 4: Action Theory and Philosophy of Mind, James Tomberlin,
ed., Atascadero, CA; Ridgeview Publishing Co. Pp. 31-52.
Haugeland, J. 1981. Mind Design. Cambridge, MA; MIT Press.
Peacocke, C. 1983. Sense and Content: Experience, Thought, and their Relation. Oxford: Clarendon Press.
Peacocke, C. 1992. A Study of
Concepts. Cambridge, MA; MIT Press.
Rock, I. 1975. An Introduction to
Perception. New York; Macmillan.
Tye, M. 1995a. Blindsight, orgasm, and
representational overlap. In Behavioral
and Brain Sciences 18, pp. 268-269.
Tye, M. 1995b. Ten Problems of Consciousness. Cambridge, MA; MIT Press.
ENDNOTES
[1] We must be careful to distinguish
misrepresentation from
non-representation. A measuring
instrument that is turned on and registers 0 units in response to a condition of 5 units misrepresents
this condition. Had it been turned off,
it would not have represented (hence, misrepresented) anything despite
registering the same value (0).
[2] In saying that Hamlet depicts the world as containing a Danish prince, I do not mean to suggest that in doing so it necessarily (though it might) misrepresents the world this way. Not all false representations are misrepresentations. Caricatures and (deliberate) fictions aren't.
[3] Harman (1989, 1990) makes essentially
these same points. I
nonetheless depart from Harman, a fellow representationalist, who takes
experience of quality M to be essentially a belief that something is M. I take experience to be a non-conceptual
representation of the properties being experienced. According to my lights, Block (1997, p. 164) is right in
distinguishing experiences of M from recognizing
(i.e., conceptually representing) something as M.
[4] In an earlier work, Chris Peacocke
(1983, p. 7) limits the representational content of experience to properties
the subject has the concept of. It is,
he says (p. 19), a conceptual truth that one cannot have an experience with a
given representational content unless one possesses the concepts from which that content is built up. Limiting experiences in this way, he builds a
strong case for a non-representational (sensational) aspect of experience. Later, however, Peacocke (1992) develops the
notion of scenario content, a type of
non-conceptual (but nonetheless representational) content possessed by
experiences.
[5] I have found that some philosophers take
this to be a denial of materialism. A
little reflection, though, shows only that it is a denial of an extremely naïve
form of materialism. A materialist
needn't suppose that just because two pieces of paper are physically indistinguishable,
they must, if they are pictures of something, be pictures of the same
thing. They might be photographs of
twins or of different (but identical looking) paper clips. All that this (differences that do not
supervene on the current material condition of objects) shows is that some
perfectly respectable physical properties--and this includes being a picture of
X--are relational properties.
[6] Peacocke argues that the comparative amount of the visual
field occupied by the two trees is not a relation between them that they are
represented as having because this relationship exists even for people who do
not have the concepts of visual field, area, and so on (those concepts that,
according to Peacocke, are needed for mental representation of any kind). This is not an argument that he would any
longer accept and so I skip over it here.