February 26, 1997
The Explanation of Cognition
By John R. Searle
I. The Problem
What sorts of systematic explanations
should we and can we seek in cognitive
science for perception, language
comprehension, rational action, and other forms of cognition? In broad outline
I think the answer is reasonably clear: We are looking for causal explanations, and our subject matter is
certain functions of a biological organ, the human and animal brain.
As with any other natural science there
are certain assumptions we have to make and certain conditions that our
explanations have to meet. Specifically we have to suppose that there
exists a reality totally independent of our representations of it (in a
healthier intellectual era it would not be necessary to say that) and we have
to suppose that the elements of that reality that we cite in our explanations
genuinely function causally.
Not all functions of the brain are
relevant to cognition, so we have to be careful to restrict the range of brain
functions we are discussing. Cognitive
science is about the cognitive functioning of the brain and its relation
to the rest of the organism and to the rest of the world in the way that nutrition science is about the digestive functioning of the digestive system and
its relation to the rest of the
organism and the rest of the world.
Like other organs, and indeed like other physical systems, the brain has
different levels of description and cognitive science is appropriately
concerned with any level of description of the brain that is relevant to the
causal explanation of cognition. These
can range from conscious processes of decision making, at the top level,
to the molecular structure of neurotransmitters, at the bottom.
Typically, the higher levels will be
causally emergent properties of the behavior and organization of the elements
of the brain at the lower levels.
Consider an obvious, common sense example of an explanation at one of these higher
levels. If I explain my driving behavior in Britain by saying I am following the rule, "Drive on the
left" I have given a genuine
causal explanation by citing a mental process. The operation of the rule
is itself caused by lower level neuronal events in the brain and is realized in
the brain at a level higher than that
of individual neurons. In what I hope
is an unmysterious sense of "emergent property" the operation of the
rule in producing my behavior is a causally emergent property of the brain
system. Another way to put this same point is to say: we can give genuine
causal explanations that are not at the bottom level, not at the level of
neurons, etc., because the higher
levels of explanation are also real levels. Talk of them is not just a manner
of speaking or a metaphor. In order to
be a real level, a putative causal level
has to be appropriately related to the more fundamental levels, for example by being a causally emergent property
of those levels. Let us call this
constraint, namely that in explaining cognition we have to cite real features
of the real world which function causally, the
causal reality constraint.
So, just to summarize these constraints, we
are seeking causal explanations of brain
functioning at different levels of description. We allow ourselves complete freedom in
talking about different levels of description, but that freedom is constrained by the requirement
that the levels be causally real.
The claim I want to defend in this talk is that some, though of course
not all, of the explanatory models in
cognitive science fail to meet the causal reality constraint. I will also suggest some revisions that will enable the
explanations to meet that constraint.
II. Marr's Version of the Information
Processing Model
My dog, Ludwig, is very good at catching
tennis balls. For example, if you bounce a tennis ball off a wall, he is
usually able to leap up and put his mouth at precisely the point the ball
reaches as he grasps it in his teeth.
He doesn't always succeed, but he is pretty good at it. How does he do
it?
According to the current explanatory
models in cognitive science, Ludwig performs
an information processing task of enormous complexity. He takes in
information in the form of a 2D pattern on his retina, processes it through
the visual system until he produces a
3D representation of the external world, and inputs that representation into
the motor output system. The
computation he is performing, even for the motor output module, is no trivial matter. Here is a candidate for the first
formulation of the algorithm. Ludwig is unconsciously following the rule: Jump
in such a way that the plane of the angle of reflection of the ball is exactly
equal to the plane of the angle of incidence of impact, and put your mouth at a
point where the ball is in a parabolic arc, the flatness of whose trajectory
and whose velocity is a function of impact velocity times the coefficient
of elasticity of the tennis ball, minus
a certain loss due to air friction.
That is, on the standard computational model of cognition, Ludwig
unconsciously computes a large number
of such functions by
unconsciously doing a lot of mathematics.
In form, the explanation of his behavior is just like that
of the person who follows the rule
"drive on the left" except for
the fact that there is no way even in principle that he could become
consciously aware of the operation of the rule. The rules are not just not in fact present to consciousness; they
are not even the sort of rules he could become aware of following. They are
what I have called "deep unconscious" rules.[1]
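To make vivid the sort of computation the model attributes to him, here is a minimal sketch in Python. It is purely illustrative: the physics is simplified (no air friction, a point ball, a made-up coefficient of elasticity), and nothing in the model claims that a brain literally runs code of this kind. The sketch only shows the kind of function that the deep unconscious rule is supposed to compute.

    import math

    G = 9.81  # gravitational acceleration in m/s^2 (air friction ignored)

    def intercept_height(x0, y0, v_impact, elasticity, angle_rad, x_dog):
        """Height at which the ball crosses the dog's position.

        The ball leaves the wall at (x0, y0); its rebound speed is the impact
        speed scaled by a hypothetical coefficient of elasticity, and its
        rebound angle mirrors the angle of incidence.
        """
        v = v_impact * elasticity             # speed just after the bounce
        vx = v * math.cos(angle_rad)          # horizontal velocity component
        vy = v * math.sin(angle_rad)          # vertical velocity component
        t = (x_dog - x0) / vx                 # time of flight to reach the dog
        return y0 + vy * t - 0.5 * G * t * t  # height of the ball at that time

    # Example: ball bounces off the wall 1 m up at 8 m/s and 30 degrees,
    # elasticity 0.75, dog standing 3 m from the wall.
    print(intercept_height(0.0, 1.0, 8.0, 0.75, math.radians(30), 3.0))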
I have never been completely satisfied
with this mode of explanation. The
problem is not just that it attributes an awful lot of unconscious mathematical knowledge to
Ludwig's doggy brain, but more
importantly, it leaves out the crucial element that Ludwig is a conscious rational agent trying to do
something. The explanatory model seems more appropriate for someone building a machine, a robot
canine, that would catch tennis balls. I think in fact that the intuitive
appeal of the approach is that it would predict Ludwig's behavior and it is the
sort of information we would put into a robot if we were building a robot to
simulate his behavior.
So let us probe a bit deeper into the
assumptions behind this approach.
The classic statement of this version of
the cognitive science explanatory
paradigm is due to David Marr[1],
but there are equivalent views in other authors. On this paradigm cognitive
science is a special kind of
information processing science. We are interested in how the brain and
other functionally equivalent systems, such as certain kinds of computers,
process information. There are three
levels of explanation. The highest is the computational level, and this Marr
defines as "the informational constraints available for mapping input
information to output information."[1] In Ludwig's case the
computational task of his brain is to take in information about a two
dimensional visual array and output representations of muscle contractions that will get his mouth
and the tennis ball at the same place at the same time.
Intuitively I think Marr's idea of the computational level is clear. If you were instructing a computer
programmer to design a program the first thing you would tell him is what job you want the program to do. And the
statement of that job is a statement of the computational task to be performed
at the computational level.
How is it done? Well that leads to the
second level, which Marr calls the algorithmic level. The idea is this. Any
computational task can be performed in different ways. The intuitive idea is that the algorithmic level
determines how the computational task
is performed by a specific
algorithm. In a computer we would
think of the algorithmic level as the level of the program.
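An illustration (mine, not Marr's) of how one computational task leaves the choice of algorithm open: the task "compute the sum of the integers from 1 to n" can be carried out either by running through the numbers one at a time or by applying the closed formula n(n + 1)/2. The task specification is the computational level; either procedure is a candidate for the algorithmic level.

    # One computational task, two different algorithms that perform it.
    # The task specification (sum the integers from 1 to n) fixes the
    # computational level; the choice between these procedures is a
    # choice at the algorithmic level.

    def sum_by_iteration(n):
        """Add the numbers up one at a time."""
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_by_formula(n):
        """Apply the closed formula n(n + 1)/2 in a single step."""
        return n * (n + 1) // 2

    # Both algorithms perform the same computational task.
    assert sum_by_iteration(100) == sum_by_formula(100) == 5050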
One puzzling feature of cognitive
science versions of this level is the doctrine of recursive decomposition.
Complex levels decompose into simpler levels until the bottom level is reached
and at that level it is all a matter of
zeroes and ones, or some other binary symbols.
That is, there is not really a single intermediate algorithmic level but rather a series of nested levels
that bottom out in primitive processors, and these are binary symbols. And the
bottom level is the only one that is real. All the others are reducible to it.
But even it has no physical reality. It
is implemented in the physics, as we will see, but the algorithmic level makes no reference to physical
processes.
I used to think that the computation I
gave for Ludwig might be his algorithm, but not so on this model. All Ludwig is
really doing is manipulating zeroes and ones.
All the rest is mere appearance.
The bottom level for Marr is the level
of implementation, how the algorithm is
actually implemented in specific hardware. The same program, the same
algorithm, can be implemented in an indefinite range of different hardwares,
and it is quite possible, for
example, that a program implemented in Ludwig's brain
might also be implemented on a
commercial computer.
So, on Marr's tripartite model you get the following picture. Cognitive science
is essentially the science of
information processing and it is primarily concerned with explaining the top level by the algorithmic
level. What matters for cognitive
science explanation is the intermediate level. Why? Why should we explain brains at the intermediate level and not
at the hardware level? The answer is
given by my initial characterization
of the brain as a functional system. Where other functional
systems are concerned, such as cars, thermostats, clocks, and carburetors we are interested in how the
function is performed at the level of function, not at the level of microstructure. Thus in explaining a car
engine we speak of pistons and cylinders and not of the subatomic particles of
which the engine is composed; because,
roughly speaking, any old subatomic particles will do as long as they implement the pistons and the
cylinders. In Ludwig's case we are
interested in the unconscious rule he is
actually following and not in the neuronal implementation of that rule
following behavior. And the rule he is actually following must be statable
entirely in terms of zeroes and ones,
because that is all that is really going on. So on this conception my earlier
characterization of cognitive science as a science of brain function at a certain level or levels of description was
misleading. Cognitive science is a
science of information processing,
which happens to be implemented in the brain but which could equally well be implemented in an indefinite range of other hardwares. Cognitive science explains the
top level in terms of the intermediate
level but is not really concerned with the bottom level except insofar as it
implements the intermediate level.
One problem with Marr's tripartite
analysis of cognitive functionalism is that just about any system will admit of
this style of analysis. And the point
is not just that clocks, carburetors, and thermostats admit of the three level
analysis (this is welcomed by adherents of the classical model as showing that
cognition admits of a functional analysis similar to that of clocks, etc.).
The problem is that any system of any complexity at all admits of an information processing analysis.
Consider a stone falling off the top of
a cliff. The "system", if I may so describe it, has three levels of
analysis. The computational task for
the stone is to figure out a way
to get to the ground in a certain
amount of time. It has to compute the
function S = (1/2)gt^2. At the intermediate level there is an algorithm
that carries out the task. The algorithm
instructs the system as to what steps to go through to match time and
space in the right way. And there is
the familiar hardware implementation in
terms of masses of rock, earth and intervening air. So why isn't the falling stone an information processing system?
But if the stone is, then everything is.
This is a crucial question for cognitive
science and several authors have
answered it. According to them, we need
to distinguish between a system being describable by a computation and one actually carrying out
the computation. The system just
mentioned is describable by a computable function, but it does not
carry out that computation
because (a) there are no representations for the computation to operate over and (b) a fortiori there is no
information encoded in the representations.
A genuine science of cognition, an
information processing science requires computations performed over symbols or other syntactical elements,
and these are the representations which encode information which is processed
by the algorithm. These conditions are
not met by a falling rock, even though
the rock is computationally describable.
If we are going to make this reply
stick, we will need a satisfactory
definition or account of "information",
"representation".
"symbol", "syntax" ,not to mention
"computation" and "algorithm". And these accounts must
enable us to explain how information,
representation, etc., gets into the system in such a way as to satisfy
the causal reality constraint. The
account will have to show how information gets into the system in some
intrinsic form in the first place, and
then retains its character as information throughout the processing. Furthermore the account would have to show
how the real information processing level is an emergent property of the more
fundamental micro-levels. To nail this down to specific cases, it is not going
to be enough to say, as Marr did, that there is a two dimensional visual array
on the retina as an input to the system; we now have to say what fact about that visual array makes it
information, and what exactly the content of the information is.
I have looked at a lot of the literature
on this issue, and I cannot find a satisfactory definition of representation
and information and the other notions
that will solve our problems. To their credit, Palmer and Kimchi[1]
admit that they have not the faintest
idea what information, in their sense, might be. I want to explore the
notion of information a little more fully. The basic question of this paper is:
can we give any empirical sense to the basic concepts of the information processing model that would make the information processing version of cognitive science
a legitimate empirical discipline?
III. Following a Rule.
If we are going to be clear about the
claim that the cognitive agent is following unconscious rules we first have to
understand what is involved in rule
following behavior in the first place.
Consider a case where it
seems clear and unproblematic that the
agent is following a rule. When I drive
in England, I follow the rule: Drive on the left hand side of the
road. And if I stay in England for any length of time, I find that I get used
to driving on the left and I don't have to think consciously of the rule. But it seems natural to say that I am still
following the rule even when I am not thinking about it. Such an explanation
meets the causal reality constraint. When I say that I am following a rule I am saying that there is an intrinsic
intentional content in me, the semantic content of the rule, that is functioning causally to produce my
behavior. That intentional content is at an emergent level of brain processing.
The rule has the world-to-rule
direction of fit and the rule-to-world direction of causation.
I want to explore some of the features
of this type of explanation to see whether they can be preserved in information
processing cognitive science. I will
simply list what seem to me the most important features of rule following behavior:
1. The most important feature is the one I
just mentioned. The intentional content
of the rule must function causally in the production of the behavior in question.
To do this it must be an emergent level of brain functioning. This is what I
have been calling the causal reality constraint. Any rule following explanation
in cognitive science has to meet that
constraint.
2. Rule following is normative from the point
of view of the agent. The content of
the rule determines for the agent what
counts as right and wrong, as succeeding or failing.
3. The next feature is a consequence of the
first. The rule must have a certain
aspectual shape, what Frege called the mode of presentation. This is why
extensionally equivalent rules can differ in their explanatory force. I can be
following one rule and not another, even though the observable behavior is the
same for both cases. For this reason,
namely that rule explanations must appeal to specific aspectual shapes, rule
explanations are intensional with an s.
For example, the rule "Drive on the left on two lane roads" is
extensionally equivalent to the rule "Drive so that the steering wheel is
nearest to the center line of the road", given the structure of British
cars, but in Britain I follow the first rule and not the second, even though each would equally well
predict my behavior.
4. In ordinary rule governed behavior the
rules are either conscious or accessible to consciousness. Even when I am following the rule unthinkingly, still I could think about it.
I am not always conscious of the rule, but
I can easily become conscious of it.
Even if the rule is so ingrained in my unconscious that I cannot think of it, still it must be the sort of
thing that could be conscious.
5. Accessibility to consciousness implies a
fifth requirement. The terms in which the rule is stated must
be terms that are in the cognitive repertoire of the agent in question. It is a general characteristic of
intentionalistic explanations, of which rule explanations are a special case,
that the apparatus appealed to by the rule must be one that the agent is in
possession of. If I wish to explain why
Hitler invaded Russia, I have to use terms that are part of Hitler's conceptual
repertoire. If I postulate some
mathematical formula that Hitler never heard of and couldn't have mastered, and
couldn't have been aware of, then the explanation cannot be an intentionalistic
explanation. It is a peculiarity of cognition, often remarked on by people who discuss the special features of
historical explanation, that the
explanation must employ concepts
available to the agent.
6. The next feature is seldom remarked on:
Rule following is normally a form of voluntary behavior. It is up to me whether I follow the rule or
break it. The rule does indeed function causally, but the rule as cause, even
the rule together with a desire to follow the rule, does not give causally sufficient conditions.
This is typical of rational explanations
of behavior. It is often said that
actions are caused by beliefs and desires, but if we take that to be a claim
about causally sufficient conditions, it is false. A test of the rationality of
the behavior is that there is a gap between the intentional contents (beliefs,
desires, awareness of rules, etc.) and the actual action. You still have to haul off and do the thing you have decided
to do, even in cases where the rule
requires you to do it. I am going to call this gap between the rule and other
intentional phenomena which are the
causes and the action which is
their effect, the "gap of voluntary action" or simply "the
gap".
7. A feature, related to the gap, is that
rules are always subject to interpretation and to interference by other
rational considerations. So, for example, I don't follow the rule, drive on the
lefthand side of the road, blindly. If
there is a pothole, or a car parked
blocking the road, I will swing around it. Such rules are in this sense ceteris
paribus rules.
8.
The final feature is that the rule must operate in real time. For actual
rule governed behavior, the rule explanation requires that the time of the application of the rule and the time of the causal functioning are co-extensive.
Just to summarize, then, we have eight
features of intentionalistic rule explanations. First, the intentional content of the rule must function
causally; second, the rule sets a
normative standard for the agent. Third, rules have aspectual shape and so rule
explanations are intensional-with-an-s.
Fourth, the rule must be either conscious or accessible to consciousness.
Fifth, rules must have semantic contents in the cognitive repertoire of the
agent. Sixth, rule governed behavior is
voluntary, so the rule explanation does not give causally sufficient
conditions. Seventh, rules are subject
to interpretation and to interference by other considerations. And finally,
eighth, the rule must operate in real time.
Let's compare this with Marr style
cognitivist forms of explanation. In
such explanations only features 1 and 3 are unambiguously present. Now one
problem with the causal reality constraint on cognitive science explanations is
that it is not clear how you can have
those two without any of the other six. It is no accident that these
features hang together, because rule following explanations are typical of
intentionalistic explanations of rational behavior. How can it be literally the
case that Ludwig is following a rule with a specific semantic content, if that
rule is not normative for him, is not
accessible to his consciousness even in principle, has concepts totally outside
his repertoire, is not voluntarily applicable,
is not subject to interpretation, and appears to operate instantaneously rather than in real time?
IV. Some Preliminary Distinctions.
In this section I want to remind you of
certain fundamental distinctions. First we need to recall the familiar
distinction between rule governed or rule guided behavior, on the one hand, and
rule described behavior,[1]
on the other. When I follow a rule, such as the rule of the road in England,
drive on the left hand side, the actual
semantic content of the rule plays a causal role in my behavior. The rule does more than predict my behavior; rather
it is part of the cause of my behavior. In this respect it differs from the laws of nature which describe my
behavior including its causes, but the
laws of nature do not cause the behavior they describe. The distinction between
rule guided and rule described can be generalized as a distinction between intentionality guided and intentionality described. All descriptions have intentionality, but
the peculiarity of intentionalistic explanations of human cognition
is that the intentional content of the explanation functions causally in
the production of the explanandum. If I
say, ``Sally drank because she was thirsty''
the thirst functions causally in the production of the behavior. It is
important to remind ourselves of this distinction because if an information
processing cognitive science is to meet the
constraint, the intentionality of the information must not merely
describe but must function causally in the production of the cognition that the information
processing explains. Otherwise there is no causal explanation. To meet the
causal reality constraint, the algorithmic level must function causally.
I believe that the standard cognitive
science accounts acknowledge this point when they distinguish between being
describable by a function and actually
computing a function. This is a special case of
the general distinction between rule described and rule guided.
The second important distinction is
between observer-relative and observer-independent features of reality. Basic
to our whole scientific world view is the distinction between those features
that exist independently of any observer, designer or other intentionalistic
agent and those that are dependent on observers, users, etc. Often the same object will have both sorts
of features. The objects in my pocket
have such observer independent features as a certain mass and a certain
chemical composition, but they also
have observer relative features: For example,
one is a British Ten Pound Note and another is a Swiss Army
knife. I want to label this the
distinction between features of
the world that are observer (or intentionality) relative, and features that are observer (or intentionality) independent. Money, property, marriage, government and correct English pronunciation
as well as knives, bathtubs and motor cars are observer relative; force, mass,
and gravitational attraction are observer independent.
"Observer relative" does not
mean arbitrary or unreal. The fact that something is a knife or a chair or a
nice day for a picnic is observer relative but it is not arbitrary. You can't
use just anything as a knife or a chair or a nice day for a picnic. The point
about observer relativity is that
observer relative features, under
those descriptions, only exist relative
to human observers. The fact that this
object in my hand has a certain mass is not observer relative but observer
independent. That the same object is a knife is relative to the fact that human
agents have designed it, sold it, used it, etc. as a knife. Same object, different features: some features observer independent, some observer relative.
It is characteristic of the natural
sciences that they deal with observer independent features -- such as force,
mass, the chemical bond, etc. -- and it is
characteristic of the social sciences that they deal with observer relative features, such as money,
property, marriage and government. As
usual, psychology falls in the middle.
Some parts of psychology deal with observer relative features, but
cognitive psychology, the part that is the core of cognitive science, deals
with observer independent features such as perception and memory.
Wherever there is an observer relative
feature, such as being a knife or being
money, there must be some agents who use or treat the entities in question as knives or as money. Now, and this is an important point, though
money and knives are observer relative,
the fact that observers treat certain objects as money or knives is not
observer relative, it is observer
independent. It is an intrinsic fact about me that I treat this object as a
knife, even though the fact that this object is a knife only exists relative to me and other observers. The attitudes of observers relative to which
entities satisfy observer relative
descriptions are not themselves observer relative.
This is why social science explanations
can satisfy the causal reality
constraint even though the features appealed to are observer relative
features. So for example, if I say
"The rise in American interest rates caused a rise in the exchange value
of the dollar against the pound" that is a perfectly legitimate causal
explanation, even though pounds, dollars and interest rates are all observer
relative. The causal mechanisms work in such an explanation even though they
work through the attitudes of investors, bankers, money changers, speculators,
etc. In that respect the rise in the value of the dollar is not like the rise
in the pressure of a gas when heated.
The rise in pressure of a gas is observer independent, the rise in the value of the dollar is observer
dependent. But the explanation in both
cases can be a causal explanation. The difference comes out in the fact that the explanation of the
observer relative phenomena makes implicit reference to human agents.
The third distinction is an application
of the second. It is the distinction
between intrinsic or original intentionality and derived intentionality. If I am currently in a state of thirst or
hunger, the intentionality of my state is intrinsic to those states---both
involve desires. If I report these
states in the utterances of sentences such as ``I am thirsty'' or ``I am hungry''
the sentences are also intentional because they have truth conditions. But the intentionality of the sentences is
not intrinsic to them as syntactical sequences. Those sentences derive their meaning from the intentionality of English speakers. Mental states such as beliefs, desires, emotions,
perceptions, etc., have intrinsic intentionality; but sentences, maps, pictures and books have only derived
intentionality. In both cases, the
intentionality is real and literally ascribed, but the derived intentionality
has to be derived from the original or intrinsic intentionality of actual human
or animal agents.
I want this distinction to sound
obvious, because I believe it is. And I also believe it is a special case of
the equally obvious distinction between observer relativity and observer
independence. Derived intentionality is observer relative, intrinsic
intentionality is observer independent.
There are, furthermore, intentional
ascriptions that do not ascribe either. These are typically metaphorical or
as-if ascriptions. We say such things as ``My lawn is thirsty because we are in
a drought,'' or ``My car is thirsty because it consumes so much gasoline.'' I
take it that these are harmless metaphorical claims of little philosophical
interest. They mean, roughly, my lawn or my car is in a situation similar to
and behaves like an organism that is literally thirsty.
Derived intentionality should not be
confused with as-if intentionality.
Derived intentionality is genuine intentionality all right but it is derived
from the intrinsic intentionality of actual
intentional agents such as speakers of a language. Hence, it is observer
relative. But as-if intentionality is not intentionality at all. When I say of a system that it has as-if intentionality,
that does not attribute intentionality to it. It merely says that the system
behaves as if it had intentionality,
even though it does not in fact.
To summarize these distinctions, we need
to distinguish between rule guided and rule described behavior. We need to distinguish
observer independent features from observer relative features. Furthermore,
we need to distinguish observer
independent (or intrinsic) intentionality, from both observer dependent
(derived) intentionality and as-if
intentionality.
V. Information and Interpretation
I now want to apply these distinctions
to the information processing model of
cognitive explanation. I will argue
that if the Marr style model is to have explanatory force, the behavior to be
explained by the information processing rules must be rule guided and not just
rule described. It can only meet that
condition if the information is intrinsic or
observer independent. To make
the distinction between Ludwig and the falling rock, we have to show that Ludwig
is actually following a rule and he can
only do that because he has an appropriate intrinsic intentional
content. The difficulty with the classical model can now be stated in a preliminary form. Every key notion in
the model is observer relative: information, representation, syntax, symbol,
and computation as used in cognitive science are all
observer relative notions. This has the consequence that the classical model in
its present form cannot meet the causal
reality constraint. I will try to state this more precisely in what
follows.
Let us go through these notions,
starting with "symbols" and "syntax". I take it as obvious
that a mark or a shape or a sound that is a symbol or a sentence or other
syntactical device exists as such only relative to some agents who assign a
syntactical interpretation to it. And
indeed, though this is less obvious, I think it is also true that an entity can only have a syntactical
interpretation if it also has a semantic interpretation, because the symbols
and marks are syntactical elements only relative to the meaning they have. Symbols have to symbolize something
and sentences have to mean something. Symbols and sentences are indeed
syntactical entities but the
syntactical interpretation requires a semantics.
When we get to
"representation" the situation is little bit trickier. A
representation can be either observer relative or observer independent. Thus maps, diagrams, pictures
and sentences are all representations and they are all observer relative.
Beliefs and desires are mental
representations and they are observer independent. Furthermore an animal can
have such mental representations as beliefs or desires without having any
syntactical or symbolic entities at all. When Ludwig wants to eat or wants to
drink, for example, he need not use any symbols or sentences at all to have his
canine desires. He just feels hungry or
thirsty. The tricky part comes from the
fact that sometimes observer
independent beliefs and desires make
use of sentences, etc. which are observer relative.
Indeed some philosophers have said that all
beliefs and desires are "propositional attitudes" in the sense of
being attitudes towards propositions or sentences or some other form of
representation. I used to think this was a harmless mistake, but it is not. If
I believe that Clinton is President of
the U.S. I do indeed have an attitude toward Clinton, but not toward a sentence
or a proposition. The sentence
"Clinton is President of the U.S." is used to express my
belief and the proposition that Clinton
is President of the U.S. is the content of my belief. But I have no
attitudes toward the sentence or
proposition. Indeed, the proposition
construed as believed just is identical
with my belief. It is not the object of the belief.
The doctrine of propositional attitudes
is a harmful mistake because it leads people to postulate a set of entities in
the head, mental representations, and having a belief or desire is supposed to
be having an attitude toward one of
these symbolic, sentence like entities.
The point for present purposes is that intrinsic mental representations such as
beliefs and desires (intentional states, as I prefer to call them) do not
require some representing device, some syntactical device, in order that they
exist. And where there is a syntactical
device, the syntactical device, being observer dependent, inherits its status
as syntactical and semantic from the
intrinsic intentional content of the
mind and not conversely. The crucial point for the present discussion is
that all syntactical entities are observer relative.
This distinction between observer
independent and observer dependent applies to information. Information is
clearly an intentionalistic notion,
because information is always information
about something and typically
the information is: that such and such is the case. Aboutness in this sense is the defining quality of
intentionality, and intentional content of this propositional sort is typical
of intentionality. So it should not be
surprising that the distinctions
between the different kinds of
intentional ascriptions will apply to information. Thus if I say,
"I know the way to San Jose", I ascribe to myself information which
does not depend on any observer. It is intrinsic or observer independent. If I
say "This book contains information about how to get to San Jose", the book literally contains information,
but the interpretation of the
inscriptions in the book as information depends on interpreters. The information is observer dependent.
There are also as-if ascriptions of information. If I say ``These tree rings
contain information about the age of
the tree,'' that is not a literal ascription of intentionality. There is no
propositional content expressed by the wood. What I am actually saying, stated
literally, is that a knowledgeable person can infer the age of the tree from
the number of rings, because there is an exact covariance between the number of
the tree's rings and its age in years. I think that with the widespread use of
the notion of "information", particularly as a result of information
theory, many people would now say that the stump literally contains
information. I think they think they are speaking literally when they say that
DNA contains information. This is
perfectly reasonable, but it is a
different meaning of
"information". It is a meaning that separates information from
intentionality. There is no psychological reality to the "information" in the tree rings or
the DNA. They have neither propositional content nor intentionality in the sense in which the thoughts in my head
have original intentionality and the sentences in the book have derived
intentionality.
Of these three types of intentional
ascription only intrinsic information is
observer independent.
Which type of information is appealed to
in cognitive science information
processing theories? Well, "as-if" information won't do. If the
explanation is to satisfy the causal reality constraint some actual informational fact must be
appealed to. Why won't derived
information satisfy the reality constraint? After all we can give genuine
scientific accounts of the flow of
money in the economy, why not scientific accounts of the flow of information in the cognitive system, even
though the information, like the money, is an observer relative phenomenon? The
brief answer to that is that in the
case of economics the agents who treat such and such physical phenomena as money are parts of the subject matter
we are studying. But in cognitive
science, if we say we are giving an
information processing explanation of
the agent's cognitive processes we cannot accept an explanation in which the agent's information processing only exists relative to his intentionality,
because we then have not explained the
intentionality on which all of his cognitive
processes depend. We will in short have committed the homunculus fallacy. If, on the other hand, we think of the
information as existing relative only to us---the observer---then we have not
satisfied the causal reality constraint because we have not identified a fact
independent of the theory which explains the data that the theory is supposed
to explain. So if cognitive science
explanations are going to satisfy the causal reality constraint they are going to have to appeal to information which is intrinsic in the
agent, information that is
observer independent.
VI. Computation and Interpretation.
Well why must the requirement be so
strong? Why can't we just say that the brain behaves like any other
computer? We give causal explanations
of ordinary computers, explanations which meet the causal reality constraint
but which do not force us to postulate intrinsic intentionality in the
computer.
The answer is that the distinction
between observer independent and
observer relative applies to computation as well. When I add 2 plus 2 and
get 4 the arithmetical calculation is intrinsic to me. It is observer
independent. When I punch out "2 +
2" and get "4" on my computer, the computation is observer
relative. The electrical state transitions are just electrical state
transitions until an interpreter interprets them as a computation. The computation is intrinsic neither to the silicon nor to the electric charges.
I, and others like me, are the computer's homunculi. So if we say that the brain is doing computation
we need to say whether the computation is observer relative or observer
independent. If it is observer independent
then we have to postulate a homunculus inside the brain who is actually manipulating the symbols so as to
carry out the computation, just as I am consciously manipulating arabic
numerals when I add 2 plus 2 to get 4. If we say it is observer relative then
we are supposing that some outside observer is assigning a computational interpretation to the neuron
firings.
I think this last point is clear if you
think about it, but not everyone finds
it so and I will therefore explore it a bit further. We are blinded to the
observer-relativity of computational
ascription because we think that, since computation is typically
mathematical, and since the world satisfies certain
mathematical descriptions in an observer-independent fashion, it
must somehow follow that the computation is observer-independent. However, there is a subtle but still
important distinction between the observer independence of certain
mathematically described facts and the observer relativity of computation
exploiting those facts. Consider the example I gave earlier of a rock falling
off a cliff. The rock satisfies the law S = (1/2)gt^2, and that fact is observer independent. But notice, we can
treat the rock as a computer if we like. Suppose we want to compute the height
of the cliff. We know the rule and we know the gravitational constant. All we need is a stop watch. And we can then use
the rock as a simple analog computer to compute the height of the cliff.
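A minimal sketch of the computation we would thereby be assigning to the rock; the numbers are illustrative, and the point of the example is that the computational interpretation is ours, not the rock's.

    # Treating the falling rock as a crude analog computer for cliff height.
    # We time the fall with a stopwatch and apply S = (1/2) g t^2.  The rock
    # just falls; the computational interpretation is assigned by us.

    G = 9.81  # gravitational constant, m/s^2 (air resistance ignored)

    def cliff_height_from_fall_time(t_seconds):
        """Height in meters that the rock's fall of t seconds 'computes'."""
        return 0.5 * G * t_seconds ** 2

    # Example: a measured fall of 2.0 seconds yields a height of about 19.6 m.
    print(cliff_height_from_fall_time(2.0))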
So what is the difference between the first
case where the rock is just a rock and
is rule described and the second case where the rock is a computer
carrying out a computation implementing exactly the same rule? I think the
answer is obvious. In the second case we have assigned -- that is there is an
observer relative assignment of -- a computational interpretation. But what is true of the rock is true of
every computer. What is special about the rock is that the law of nature and the implemented algorithm are the same.
In a commercial computer we exploit the
laws of nature to assign other algorithms, for addition, subtraction, word
processing, etc. But the general
principle is this: We cannot appeal to the analogy between the computer and the brain to justify the
special character of the tripartite
model as applied to the brain because something is a computer only relative to a computational interpretation.
What I have tried to show with the parable of the falling rock, is that one
and the same mathematical description can be treated both as a description of
an observer-independent process, and as an observer relative computation. It is
just a fact about the stone that it falls in accordance with the laws of
physics. There is nothing observer-relative
about that. But we can use this
fact of
physics for our own computational purposes. If we treat that fact computationally, if we use the stone to
carry out a computation, then that computation only exists relative to us just
as the computation exists relative to us when we use a pocket calculator.
I think that you can see this point if I
give you a simpler example. That there are three cows in
one field and two cows in the next field are both
observer-independent facts.
But if I then decide to use these facts in order to perform a
mathematical calculation, and I add three plus two to get five by counting the
cows, the computational process of addition is not something that is intrinsic
to the cows in the field. The process
of addition is a process that I perform using the cows as my adding machine.
Now, what is true of the rock and
the cows in the field is true of
computation, generally. If I am
consciously doing arithmetic, that computation is intrinsic. If a pocket adding machine is doing arithmetic,
that is observer relative. It is worth pointing out, by the way, that over the
years the word ``computer'' has changed its meaning. When Turing wrote his
famous 1950 article, the word ``computer'' meant ``person who computes.'' That
is why Turing called the article ``Computing Machinery and Intelligence''
and not ``Computers and
Intelligence''. Computers meant
``people who compute.'' Nowadays, the word ``computer'' has changed its
meaning from an observer-independent to an observer-relative phenomenon. ``Computer'' now refers to a class of
artifacts.
VII. Information Processing in the
Brain.
The crucial question for the classical
model can now be stated with more precision.
What fact exactly corresponds to the claim that there is an algorithmic level of information processing
in the brain? And what fact exactly
corresponds to the claim that everything going on at this level reduces to a
level of primitive processing which consists entirely in the manipulation of
binary symbols? And are these
computational information
processes observer independent or observer relative?
As a first step let's ask how the
proponents of the model think of it themselves. The answer to that question is
not as clear as it ought to be but I think the answer is something like this.
At this level, the brain works like an
ordinary commercial computer. Just as there are symbols in the computer which
are information bearing, so there are sentences in the head and they too are
information bearing. Just as the
commercial computer is an information processing device so is the brain.
This answer is unacceptable. As we have
already seen, in the commercial computer the symbols, sentences,
representation, information and computation are all observer relative.
They exist as symbols, etc.,
only relative to us. Intrinsically speaking the commercial computer is
just a complicated electronic circuit. For the commercial computer to meet the
causal reality constraint we have to appeal to the outside programmers,
designers and users who assign an interpretation to the input, to the processes
in between, and to the output. For the commercial computer, we are the
homunculi who make sense of the whole operation.
This sort of answer can never work for
Ludwig because whatever else he is, he is a conscious intentional agent trying
to do something, trying to catch a
tennis ball; and all of that is intrinsic to him, none of it is observer
relative. We want to know how he himself really works, what his intrinsic
mechanisms are, and not just what sorts of stances we might adopt toward him or
what computational interpretations we
might impose on him.
Well, why can't Ludwig be computing
intrinsically, why can't he be carrying
out algorithms unconsciously the way I
carry out the algorithm for long division consciously? We can say that, but
if we do we have abandoned the model,
because now the explanatory mechanism
is not the algorithm, but the mental agent inside who is intentionally going through the steps of the algorithm. This answer, in short, commits the homunculus
fallacy. We don't explain Ludwig's intentionally-trying-to-catch-the-ball behavior by an algorithm if we have to
appeal to his intentionally-carrying-out-the-parabolic-trajectory-computation
behavior and then explain that in turn by his
intentionally-going-through-millions-of-binary-steps behavior. The explanatory mechanism of his
system is his irreducible intentionality. The idea of the model was that the
information in the system is carried along by the computational operations over
the syntax. The semantics just goes
along for the ride. But on this
analysis it is the syntax that is going along for the ride. The intrinsic intentionality of the agent is doing all
the work. To see this point notice that the
psychological explanation of my
doing long division is not the algorithm, but my mastery of the algorithm
and my intentionally going through the steps of the algorithm.
The upshot can be stated in the form of
a dilemma for the classical model: either the crucial notions are taken in an
observer relative or in an observer independent sense. If observer relative then the explanation
fails because it fails to meet the causal reality constraint. If observer
independent then it fails because of the homunculus fallacy. The homunculus is
doing the work. You get a choice between an outside homunculus (observer relative) or an inside homunculus (observer independent). Neither option is
acceptable.
VIII. Deep Unconscious Rule
Following.
I think one way to meet my argument would
be to offer a convincing existence
proof to the contrary. Are there
convincing and unproblematic examples of deep unconscious computational rule
following?
I have argued elsewhere that a specific
aspectual shape requires accessibility to consciousness at least in principle.
In many cases, blind sight for
example, the content is not accessible
to consciousness, in fact, but we understand such cases precisely as
pathological, as due to deficits,
repression, etc. I won't repeat that
argument here but will try to ask a different question: are there any
unproblematic examples of deep unconscious rule following?
If we had some convincing examples, then
we would have fewer doubts about the overall principle. If we could agree that there are cases of
rule following in this technical sense which departed from our ordinary common
sense notion of rule following, and could agree further that these explanations
had genuine explanatory power, then we would at least have a good beginning for
a justification of a general cognitive science strategy postulating such deep
unconscious rule explanations. The two
examples that have been presented to me are the operation of Modus Ponens and
other logical rules, and secondly, the operation of the vestibular ocular
reflex. (There is a certain irony about
the VOR because I have earlier presented it as what I thought was a clear
example of a case that looked as if it satisfied the causal reality constraint,
but where it was obvious that it didn't.)
I will consider each of these in turn.
People have a capacity for making logical inferences. They do this, so the account goes, by following rules that they
are totally unaware of and that they could not even formulate without professional assistance. So, for example, people are able to make
modus ponens inferences, and thus follow the rule of modus ponens, even though
most of them could not formulate the rule of modus ponens, and indeed, do not
have the concept of modus ponens.
Well, let's try this out and see how it
works. Here is a typical inference
using modus ponens. Before the 1996
election I believed that if Clinton could carry the state of California, he
would win the election. Having looked
over the poll results in California, I came to the conclusion that Clinton
would carry California, so I inferred that he would win the election. Now, how did I make that inference?
Well, the cognitive science explanation
would go: When you made the inference
you were in fact following an unconscious rule. This is the rule of modus ponens, the rule that says if you have premises of the form p, and if p then
q, then you can validly infer q. It
seems to me, however, that in cases like this, the rule plays no explanatory
role whatsoever. If I believe that
Clinton will carry California, and believe that if he carries California he
will win the election, that is already enough to enable me to infer that he
will win the election. The rule adds
nothing to the explanation of my inference.
The explanation of the inference is that I can see that the conclusion
follows from the premises. But doesn't the conclusion only follow from the
premises because it instantiates the rule of modus ponens---doesn't it derive
its validity from modus ponens? The
answer to these questions is clearly No.
Modus ponens, construed as a syntactical computational rule, is simply a
pattern that we use for describing inferences that are independently valid. We don't follow
the rule of modus ponens in order to make the inference. Rather, we make the valid inference, and the
logician can formulate the so-called rule of modus ponens to describe an
infinite number of such valid inferences.
But the inferences do not derive their validity from modus ponens. Rather, modus ponens derives its validity
from the independent validity of the inferences. To think otherwise leads to
the Lewis Carroll paradox.[1]
So, it seems to me modus ponens plays no explanatory role whatever in an
inference of the sort I just described.
But what about purely formal proof
theoretic inferences? Suppose I just
have a bunch of symbols and I infer from p and p arrow q to q? Now, it seems to me, that once we have
subtracted the semantic content from the propositions, there actually is a role
for the rule of modus ponens. But then
precisely because there is such a rule, we are no longer talking about valid
inferences as part of human cognitive processes. We are talking about a formal
analogue to these valid inferences in some formal proof theoretic system. That is, if you are given a rule that says
whenever you have symbols of the form:
"squiggle blotch sguaggle", followed by "sguiggle",
you can write down "squaggle", that is a genuine rule. It tells you what you can do in certain
circumstances and it has all of those features that I described as typical of
rule governed behavior, or rule explanations---every single one. But that is precisely not the operation of
the rule of modus ponens in ordinary reasoning. To put this point precisely, if we think of modus ponens as an
actual description of the operation of mental contents, then modus ponens plays
no explanatory role in valid inferences. If we think of it as a proof
theoretical rule describing operations on meaningless symbols, then it does
indeed play a role, but its role is not that of explaining how we actually
make inferences in ordinary cognitive processes, but how we can represent the
formal or syntactical structure of those inferences in artificially created
systems.
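Purely as an illustration of the proof theoretic reading, here is a minimal sketch of a syntactic modus ponens rule operating on uninterpreted symbol strings. The token names and the arrow notation are my own and carry no meaning, which is exactly the point.

    # A purely syntactic analogue of modus ponens: a rule for writing down
    # symbols of a certain shape, given symbols of certain other shapes.
    # Nothing here trades on what the symbols mean.

    def syntactic_modus_ponens(premises):
        """From 'X' and 'X -> Y' in the premise set, license writing down 'Y'."""
        derived = set()
        for premise in premises:
            if " -> " in premise:
                antecedent, consequent = premise.split(" -> ", 1)
                if antecedent in premises:
                    derived.add(consequent)
        return derived

    # Example with meaningless tokens: from "squiggle" and
    # "squiggle -> squaggle" the rule licenses writing down "squaggle".
    print(syntactic_modus_ponens({"squiggle", "squiggle -> squaggle"}))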
I now turn to the vestibular ocular
reflex. It looks as if we are
unconsciously following the rule:
``Move the eyeball equal and opposite to the movement of the head,''
when in fact we are not following any such rule. There is a complex reflex mechanism in the brain that produces
this behavior. I thought the point was obvious, but not so.
Recently, some of my critics have said that there are even subdoxastic computational states intrinsic to the system that
are at a more fine grained level than the rule I just stated. Martin Davies says,
"Another way to describe the VOR is
as a system in which
certain information processing takes
place, not just from
head movements of certain velocities to
eye movements of
certain velocities, but from
representations of head
movement velocities to representations of eye movement
velocities..... It is only against the
background of this
second kind of description that there is
any question of
crediting the system with tacit knowledge of the rules
relating head velocity to eye
velocity."[1]
p.386
This assumption of "semantic
content" in the input and output states is a necessary but not a
sufficient condition of tacit knowledge
of rules. The sufficient condition requires that "the various
input-output transitions that are in conformity with the rule should have the
same causal explanation" p. 386
The VOR easily satisfies that condition,
so it turns out that the VOR is a case
of unconscious tacit knowledge of rules and
is a case of rule governed behavior.
To support this Davies gives various statements of computational
descriptions of the VOR from David Robinson,
Patricia Churchland, and Terry Sejnowski. He thinks mistakenly that I am arguing that the computational
ascriptions are trivial. But that is
not my point. My point is about the
psychological reality of the
computational ascriptions. I see no
reason to treat the computational description of the VOR any differently than the
computational description of the stomach or other organs. My question is, is there a causal level
distinct from the level of the
neurophysiology at which the agent is actually
unconsciously carrying out a certain computational, information processing task in order to move his
eyeball? I see nothing in Davies's account to suggest that the postulation of such a level meets the causal reality
constraint. What fact about the
vestibular nuclei makes it the case that they are carrying out specifically mental operations at the level of intrinsic intentionality? I do not see an
answer to that question. It is not an objection to the usefulness of the
computational models of the VOR to point out that they are models of
neurophysiology, not examples of actual psychological processes; they are at the level of observer relative
neuronal information processing, not intrinsic intentionality. It is one thing
to have a computational description of a process, quite another to actually carry
out a mental process of computing.
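For concreteness, the computational description of the VOR that is at issue can be as simple as the following sketch (the gain parameter is illustrative). On the argument just given, the reflex circuitry is rule described by such a function; having the description is not the same as the system's carrying out a mental process of computing it.

    # A computational description of the vestibular ocular reflex (VOR):
    # eye velocity is equal and opposite to head velocity (ideal gain of 1).
    # The reflex mechanism is described by this function; it does not follow
    # it as a rule.

    def vor_eye_velocity(head_velocity_deg_per_s, gain=1.0):
        """Compensating eye velocity for a given head velocity."""
        return -gain * head_velocity_deg_per_s

    # Example: a head turn of +30 deg/s calls for an eye movement of -30 deg/s.
    print(vor_eye_velocity(30.0))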
IX. Conclusion
On the account I am proposing
computational descriptions play exactly the same role in cognitive science that
they play in any other branch of
biology or other natural sciences. Except for cases where an agent is
actually intentionally carrying out a computation, the computational description does not identify a separate causal
level distinct from the level of the physical structure of the organism. When you give a causal explanation, always ask yourself what
causal fact corresponds to the claim you are
making. In the case of computational descriptions of deep unconscious
brain processes, the processes are rule described and not rule governed.
And what goes for computation goes a
fortiori for "information processing". You can give an information processing description of the brain
as you can of the stomach or an internal combustion engine. But if this is to
be psychologically real it must identify a form of information that is
intrinsically intentionalistic, and cognitive science explanations using the
deep unconscious typically fail to do that.
I would like to conclude this discussion with a diagnosis of what I think is the mistake. It is very difficult for human beings to accept non-animistic, non-intentionalistic forms of explanation. In our culture we only fully came to accept such explanations in the seventeenth century. Our paradigmatic forms of explanation are intentionalistic: I am eating this food because I am hungry, I am drinking this water because I am thirsty, I am driving on the left because that is the rule of the road. The idea that there are mechanical explanations that cite no intentionality is a very hard idea to grasp. A form of animism still survives in cognitive science. Marr's intermediate level of rule following at the subdoxastic level in the brain is a form of animism. Now, since these postulated processes are not conscious, and are not even accessible to consciousness in principle, we postulate deep unconscious rule-following behavior.
This is the mistake of primitive animism. Now, this is aided by a
second mistake: We are misled by
the apparent intentionality of computers, thermostats, carburetors and other
functional systems that we have designed.
It seems obvious to us that these systems have an intentionalistic level
of description. Indeed, standard
textbooks of cognitive science give
Marr's intermediate level description of the thermostat, as if the algorithmic
level explanation obviously satisfied the causal reality constraint. But I think it is clear that it does not.
The intentional, rule-following computation of the thermostat is entirely
observer relative. It is only because
we have designed and used these systems that we can make intentionalistic explanations at all. Now, what goes for the thermostat goes for
other functional systems, such as clocks, carburetors, and above all,
computers. So, we are making two
mistakes. The first is a mistake of
preferring animistic over naturalistic
explanations, and the second is the failure to make the distinction between
observer-relativity and observer-independence. In particular, we fail to
distinguish the cases where we have genuine intrinsic intentionality from the cases of observer-relative
intentionality. The intentionality in thermostats, clocks and computers is
entirely observer relative.
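Purely for illustration, here is a minimal sketch, in Python, of the kind of ``algorithmic level'' description a textbook might give of a thermostat; the rule, the names, and the numbers are invented for the example.

    # Illustrative only: an "algorithmic level" description of a thermostat.
    # The rule, the names, and the dead-band value are invented for the example.
    def thermostat_step(temperature, setpoint, heater_on, band=0.5):
        if temperature < setpoint - band:
            return True      # colder than the target: heat on
        if temperature > setpoint + band:
            return False     # warmer than the target: heat off
        return heater_on     # within the dead band: leave the state as it is

The bimetallic strip satisfies this description, but the ``if ... then ...'' reading is entirely ours; the device itself just bends.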
Now, the hard thing to see is that
many of the intentionalistic
descriptions of brain processes are also observer relative, and consequently,
they do not give us a causal explanation.
What then is the correct model for cognitive science explanation? And, indeed, how do we account for much of
the apparent rationality of cognition if we do not postulate rule-governed
behavior at Marr's intermediate level?
To answer this, it seems to me we have to remind ourselves of how Darwin
solved a similar problem by showing that the apparent goal directedness in the
structure of species could be explained without postulating any intentionality. Darwin substituted two explanatory levels
for one. Instead of saying ``the fish has the shape it has in order to survive in water,'' we say (1) the fish has the shape it has because of its genetic structure, and (2) fish that have that shape are going to survive better than fish that don't. Notice that survival still functions in the explanation, but it is no longer a goal. It is just something that happens.
Now, analogously, we should not say ``The eyeball moves because it is following a rule of the vestibular ocular reflex.'' We should say that the eyeball moves because of the structure of the visual system---it is just a mechanical process. There is no rule following at all. The rule, however, does describe the behavior of the eyeball, and the eyeball satisfies that description for basically Darwinian reasons. Eyeballs that behave that way are going to produce a more stable retinal image, and organisms that have a stable retinal image are more likely to survive than organisms that don't. Analogously, Ludwig does not follow the parabolic trajectory rule; rather, he tries to figure out where the ball is going to be and jumps to put his mouth at that point. He has paw-eye coordination skills which can be described by the parabolic trajectory rule, but he is not following that rule. Dogs that can develop such skills are more likely to survive than dogs that don't -- or at least they are more likely to catch tennis balls.
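For concreteness, the parabolic trajectory rule can be written out as the standard projectile-motion formula; the formulation is supplied here only to make explicit the rule that describes, but does not govern, the behavior:

    \[
      x(t) = v_x\, t, \qquad y(t) = y_0 + v_y\, t - \tfrac{1}{2} g t^2,
    \]

where $v_x$ and $v_y$ are the ball's initial horizontal and vertical velocities, $y_0$ its initial height, and $g$ the acceleration due to gravity. The formula fixes where ball and mouth can meet; it does not follow that the dog computes it.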
Appendix
The heart of the argument is this: the computational attribution to the human brain is either intrinsic or observer relative. That is, either we are to think of the brain as performing computations intrinsically, or we are to think of the computation as relative to an outside agent. Well, if it is observer relative, then it doesn't satisfy the causal reality constraint, because we are not talking about something computational that is going on intrinsically; we are just talking about some outside interpreter interpreting it that way. But if it is intrinsic, then we have another problem, namely that we have committed the homunculus fallacy. Here is why. The whole doctrine of recursive decomposition requires that there be a homunculus inside who is thinking in terms of zeroes and ones. That is, it is not enough to suppose that it is like the guy following the rule, ``drive on the lefthand side of the road,'' because the whole doctrine of computationalism is that the guy has to have a set of symbols that he is intentionally manipulating. That is how we satisfy the causal reality constraint. But that means that the content is not stated in terms of commonsense notions like ``Drive on the lefthand side''; it is stated entirely in terms of zeroes and ones, and that is going to require a homunculus. So we have a homunculus fallacy. The real work is done by the homunculus, not by the symbols themselves.
We will understand this point better if we go into the history of the subject. The initial idea of computation is that the psychological process of computing is performed by conscious human beings. So you add ``3 + 5'' and get ``8'', for example. This was Hobbes's idea when he said ``All reasoning is but reckoning.'' He meant ``reckoning'' in the commonsense meaning of the term, in which, for example, you add and subtract. Now, what we have discovered is that you can do these things with machines. But in what sense does the machine do these things? That is, what is the same and what is different in the machine? Well, the machine is not supposed to be conscious; indeed it is not supposed to have mental states at all. The machine just goes through certain formal analogues of reasoning, and we discovered, because of Church's Thesis and the invention of the Turing machine, that you can ``do these things'' with binary symbols. Now, those are very important discoveries, but then something paradoxical happened. We tried to read the machine process back into the brain. But of course, the brain is quite different. When we are reckoning in Hobbes's sense, we are consciously going through certain mathematical processes. But the reckoning done by the machine is entirely observer relative. It requires an outside interpreter to interpret these symbol manipulations as ``reckoning'' at all.
Basic cognition
I need to introduce a new notion, the notion of basic cognition. Many years ago, Arthur Danto introduced the notion of a basic action. I am not sure exactly what Danto meant, but the way I have always used this notion, and the way that I have found it useful, is this: there are a lot of things that we do intentionally without intending to do anything else by way of which or by means of which we do these things. So, if you ask me how I get to San Jose, I will describe a series of steps by means of which I get to San Jose. If you ask me how I raise my arm, the answer is: I just do it. I don't do it by means of doing anything else. A basic action, then, on my definition, is an action that you can do intentionally without intending to do anything by means of which you do that thing. It is obvious from the definition that what is basic for one agent may not be basic for another.
Now, I want to suggest that the notion of a basic action, so defined, is just an instance of a much more general notion, the notion of basic cognition. A basic cognition is any cognition that I can have without having some other cognitive state or process by way of which or by means of which I have the cognition in question. Thus, if you ask me how I find my cognitive science directory on my computer, I can tell you a series of steps that I go through. But if you ask me how I see the computer, there isn't any cognitive answer to that---I just do it. There is a brain-process answer explaining how it works in the brain, but that answer is nonintentionalistic.
So, we might say that just as every complex action presupposes the notion of a basic action, so every complex cognitive state or process presupposes basic cognition. The reason for this is that the answer to the question, ``how do you do it?'' cannot go on forever. Eventually it must bottom out. There is an answer to the question, ``how do you start your car?'' But if I say, ``I start my car by turning the key in the ignition,'' there isn't any answer to the question, ``And how do you turn the key in the ignition?'' because I just do it. But if there were an answer (say, I turn the key in the ignition by grasping the key between the thumb and forefinger of my right hand and rotating my wrist to the right), eventually I would have to reach a point where the answer is ``I just do it.'' There are further neurobiological explanations of how it is possible that I perform the basic actions. These are explanations in terms of calcium ions, neurotransmitters, and so on.
The recognition of basic cognitive processes then forces us to the following conclusion: all explanations of cognition are either intentionalistic, until they reach the basic form of intentionality, or they are neurobiological. To see this point, let's go back to Ludwig. The explanation of his behavior is that he is trying to catch the tennis ball, that he is able to jump in such a way that he catches the ball in his mouth, and that the means by which he does this is to jump to the point where he thinks ball and mouth will meet. Now, there will, in addition, be a rather elaborate neurobiological explanation about muscle contractions and visual experiences and the coordination that the brain effects between the two, but there will be no intentionalistic component in those phenomena.
We might summarize the distinction between the view that I am beginning to put forward and the cognitivist view by saying that on my view the intentionalistic explanations bottom out in common-sense forms of intentionality, and further explanations are neurobiological. The rival view says that the neurobiology is only the implementation of a much more fundamental form of intentionality, one which is unconscious and largely sub-personal, below the level of personal awareness.
[1] John R. Searle, The Rediscovery of the Mind, MIT Press, Cambridge, MA, 1992.
[1] David Marr, Vision, W. H. Freeman, San Francisco, 1982.
[1] Marr, op. cit., p. .
[1] Palmer, S., and Kimchi ..
[1] Quine, W. V. O., 1972.
[1] Dodgson, C., ``What the Tortoise Said to Achilles,'' Mind, 1895.
[1] Martin Davies, in Marshall and Marshall, eds., p. 386.