From owner-modality@LISTSERV.ARIZONA.EDU Tue Apr 20 12:54:49 1999 x-sender: agillies@pop.u.arizona.edu Date: Tue, 20 Apr 1999 13:08:51 -0700 Sender: "Philosophy 596B: Mind and Modality" From: Anthony S Gillies Subject: Re: truth conditions and conceivable worlds To: MODALITY@LISTSERV.ARIZONA.EDU Status: R Here's a bit of an exchange from last week about 2D semantics. I think I have a conjured sentence that fits the bill. I'll put it below. >>This points to another confusion I've been entertaining on 2D semantics. >>It's probably a little off topic, so we'll get to it in a week or two. >>But the main idea is to what extent the 2D semantics for terms meshes >>with the 2D semantics for sentences. In particular, in figuring the >>truth_1 of a sentence S, it looks like every term in S must be mapped to >>its referent_1. Likewise for truth_2. Are there sentences S for which >>the natural truth_1 conditions might depend on some term in S being >>mapped to its referent_2? > >Hmm, interesting. Again, a statement is true_1 at the actual world >iff it is true_2 there. And a term's (actual) referent_1 will always >be its (actual) referent_2 -- both are just the term's actual >referent! It's only in non-actual possible worlds that these things >come apart. But I suppose your suggestion might come to the >possibility that to evaluate the truth_1 of S at some non-actual >possible world, we need to consider the referent_2 of some term in S >there, which would then depend on a posteriori facts about the actual >world. > >The short answer, I think, is that this can't happen. Truth_1 is >defined so that it depends only on facts about the world in question >(plus a priori analysis). Some statements, e.g. "The actual president >is Bill Clinton" might make reference to the actual world, but they >can still be evaluated in a single-world way. E.g. to evaluate the >truth_1 of the statement above at a world where George Bush is >president, we consider that world as actual, and determine that if >that world is actual, the statement is false. That's to say, one >never needs to import the *actual* referent of "the actual president" >in evaluating the truth_1 of this statement at world W. One only ever >imports what the referent would be *if* W were actual. > So here's the sentence: ($) The water in The River Thames is H20. What I want to know is whether this is necessary or not where what we are interested in is how "water" is evaluated in worlds considered as actual, but where "The River Thames" is given the semantic value when considered counterfactually. That is, in computing the semantic value of ($) we want the referent given by the PI of "water" but the SI of "The River Thames". If I'm right, this isn't possible on the 2D account. So there is a space of sentences for which it isn't defined---namely, any sentence where we "cross" intensions of the terms. Thony "Curious green ideas sleep furiously." From owner-modality@LISTSERV.ARIZONA.EDU Tue Apr 20 15:28:32 1999 Date: Tue, 20 Apr 1999 15:22:25 -0700 Sender: "Philosophy 596B: Mind and Modality" From: David Chalmers Subject: Re: truth conditions and conceivable worlds To: MODALITY@LISTSERV.ARIZONA.EDU Status: R >So here's the sentence: > >($) The water in The River Thames is H20. > >What I want to know is whether this is necessary or not where what we are >interested in is how "water" is evaluated in worlds considered as actual, >but where "The River Thames" is given the semantic value when considered >counterfactually. 
That is, in computing the semantic value of ($) we >want the referent given by the PI of "water" but the SI of "The River >Thames". If I'm right, this isn't possible on the 2D account. So there >is a space of sentences for which it isn't defined---namely, any sentence >where we "cross" intensions of the terms. Hmm, I'm not quite sure what the counterexample sentence is supposed to be. If the sentence is just ($), then it isn't a counterexample, as this has a perfectly straightforward PI and SI. Rather, I take it your counterexample is some modifiction or "interpretation" of ($), on which we "give the terms semantic values" according to their PI and SI respectively in different cases. It's not clear to me in what sense the result is a "sentence". Maybe given a sentence with a well-defined PI and SI, one can define some abstract object which gets evaluated in possible worlds according to some combination of the PIs and the SIs in the original sentence. But this doesn't seem to correspond naturally to anything we find in language. It doesn't seem to be meaningful to talk about the PI or SI of such an abstract object; but that alone isn't obviously a problem for two-dimensional semantics. 2-D semantics is meant to apply to sentences in natural languages, not to arbitrary abstract objects. It seems to me that on any natural reading of ($), it gets evaluated in worlds considered as actual according to the PI of both "water" and "River Thames" -- one can't just stipulate that it will be evaluated according to the SI of "River Thames". At least, if one did, one would have a very different sentence where the word "River Thames" means something different to what it means in ordinary English (a new word whose PI happens to be the same as the SI of our word "River Thames"). Even if one did this, one could now naturally evaluate the PI and the SI of the new sentence in a pretty straightforward way; it would have a PI that differs from that of the original sentence, but that's only to be expected. I may be missing the way you intend the example to be understood, in which case I'd be interested to hear it articulated. --Dave. From owner-modality@LISTSERV.ARIZONA.EDU Wed Apr 21 00:15:08 1999 Date: Wed, 21 Apr 1999 00:14:19 -0700 Sender: "Philosophy 596B: Mind and Modality" From: David Chalmers Subject: Next week To: MODALITY@LISTSERV.ARIZONA.EDU Status: R Dear all, For next week we are focusing on issues re the contents of thought. In particular, we will be focusing on issues about internalism, externalism, and "narrow content", and the application of the 2-D framework to these issues. The central reading is my paper "The Components of Content". You've read this already earlier in the term for the general issues re 2-D semantics; it might be worth reading it again for the more specific issues concerning the narrow content, the explanation of behavior, belief ascription, and so on. There are also a number of background readings that Jodie will photocopy and put in the folder, hopefully tomorrow. These are: Putnam, "The Meaning of `Meaning'" Burge, Excerpt from "Individualism and the Mental" White, "Partial Character and the Language of Thought" Stalnaker, "Narrow Content" Schiffer, "The Mode-of-Presentation Problem". The Putnam and the Burge are the classic sources re externalism. Many or most of you will have read them before. If you haven't, I certainly recommend reading them; if you have, you might look over them again. 
The White and Stalnaker are two papers on whether we can make sense of a notion of "narrow content" (a sort of content that is "in the head") in response to the externalist arguments. White ends up arguing for a picture that has some similarities to the 2-D framework, while Stalnaker argues against narrow content and on the way argues against the use of the 2-D framework. Both also have some good general discussion. Finally, the Schiffer has good coverage on some of the issues re the semantics of belief ascription that come up in my content paper. The Stalnaker will be more central next week (when we will discuss objections to and problems for the 2-D framework), but you might find it useful to look over this week for its general way of framing the issues. I'm not sure whether we will discuss the White in detail (it was a special request from Brad), but again it is useful and interesting background material. The same goes for Schiffer. Hopefully these papers will help make some of the background to "The Components of Content" fall into place. Finally, the folder will also have a copy of John Burgess's article "Quinus ..." (I've forgotten the full Latin title, but it translates as "Quine Freed of Every Flaw"). That's the article I was discussing today re the debate over quantified modal logic, with some reference to subjunctive and indicative conditionals. This isn't an official reading, but it's a very interesting and enlightening discussion of the issues re quantified modal logic (it should also work well as background for those who don't know much about the debate), and I recommend it to everyone. A reminder that I am going to England tomorrow, and will be arriving back in town next Tuesday at 4pm, so next week's meeting is scheduled for 5:20pm Tuesday. I'll get in touch by e-mail or phone if it looks like I'll be late. Also, as I said in class today, anyone who wants to give me a draft of their term paper by next Friday (April 30) is welcome to do so, and I'll give them comments back. --Dave. From owner-modality@LISTSERV.ARIZONA.EDU Thu Apr 22 16:32:44 1999 Date: Thu, 22 Apr 1999 16:30:55 -0700 Sender: "Philosophy 596B: Mind and Modality" From: Erik J Larson Subject: Re: truth conditions and conceivable worlds To: MODALITY@LISTSERV.ARIZONA.EDU Status: R Regarding Dave's response to Thony's "River Thames" sentence, it strikes me that the 2D framework is immune to the possibility that a space of sentence may have uncomputable truth values (due to mixing intensions), but for a reason that may itself raise a red flag. It's just been *stipulated* that statements get evaluated according to primary or secondary intensions, uniformly for all terms. That keeps Thony's example from gaining any ground, I suppose, but given that certain statements are most naturally interpreted by assigning different intensions (take Thony's example, for one), the stipulation cries out for justification, at least as a general theory of semantics. I suppose the justification is practical, or *methodological*, in that we can apparently put the 2D semantics to good use (eg, in evaluating Kripkean cases). But given that we can't make sense of "The water in the River Thames is H20" by taking the natural intensions of the composite terms, it seems somehow ad hoc. Part of my frustration here stems from my own attempt to formulate statements whose natural interpretation mixes intensions, until I realized (from reading Dave's message) that it's just been *stipulated* that you can't do that. 
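To make the uniform-evaluation stipulation concrete, here is a minimal toy sketch in Python (the two worlds, the made-up extensions, and every name below are illustrative assumptions, not anything the thread commits anyone to): each term carries both a primary and a secondary intension, and a sentence is evaluated either uniformly via the PIs (a world considered as actual) or uniformly via the SIs (a world considered as counterfactual); no third, "crossed" mode of evaluation is defined.

WORLDS = ["Earth", "TwinEarth"]

# Each term gets BOTH intensions. PI: evaluate a world considered as actual.
# SI: evaluate a world considered as counterfactual, with Earth fixed as actual.
PI = {
    "water": lambda w: "H2O" if w == "Earth" else "XYZ",           # the local watery stuff
    "Thames": lambda w: "the river playing the Thames role in " + w,
}
SI = {
    "water": lambda w: "H2O",                                      # rigidly H2O, given actuality
    "Thames": lambda w: "the actual River Thames",                 # rigidly the actual river
}

def truth_1(world):
    # ($) with EVERY term taken via its PI at `world`. In this crude model the
    # Thames term only fixes which river we check, so the truth-value rides on
    # the water intension alone.
    return PI["water"](world) == "H2O"

def truth_2(world):
    # ($) with EVERY term taken via its SI at `world`.
    return SI["water"](world) == "H2O"

for w in WORLDS:
    print(w, "truth_1:", truth_1(w), "truth_2:", truth_2(w))
# Earth truth_1: True truth_2: True
# TwinEarth truth_1: False truth_2: True
# There is no third evaluator that mixes PI["water"] with SI["Thames"]; like
# the framework as discussed, the sketch defines only the two uniform rows.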
(Trivially, I managed to come up with statements that always end up false when evaluated according to the PI, even when the composite clauses of the sentences get different truth values when evaluated in different worlds considered as actual. Consider "Water is actually H20, but not here." This is equivalent to the conjunction "Water is actually H2O and Water is not H20 here.", where the first conjunct is false and the second true in Twin Earth, and vice versa for Earth. (Since conjunction are false when at least one conjunct is false, the statement is always false no matter in what world considered as actual it is evaluated).) This I say is trivial, because who cares if we can construct sentences contradictory or otherwise according to an intension, as long as our semantics always *gives* us a truth value one way or the other. Thony's example was meant to challenge the latter, but it evidently can't be done, in the same sense that you won't likely find any round squares laying around either. Questions: How seriously do we take the 2D framework as a theory of semantics? Should we be bothered by its inflexibility with respect to Thony's example? To be really speculative, might *adding* the sort of flexibility of evaluation that can produce compositional semantics using different intensions from the terms on up possibly *undermine* the motivation for the anti-materialist argument, but give us new methodological justification for its acceptance nonetheless (because it is a more flexible semantics)? On Tue, 20 Apr 1999, David Chalmers wrote: > >So here's the sentence: > > > >($) The water in The River Thames is H20. > > > >What I want to know is whether this is necessary or not where what we are > >interested in is how "water" is evaluated in worlds considered as actual, > >but where "The River Thames" is given the semantic value when considered > >counterfactually. That is, in computing the semantic value of ($) we > >want the referent given by the PI of "water" but the SI of "The River > >Thames". If I'm right, this isn't possible on the 2D account. So there > >is a space of sentences for which it isn't defined---namely, any sentence > >where we "cross" intensions of the terms. > > Hmm, I'm not quite sure what the counterexample sentence is supposed > to be. If the sentence is just ($), then it isn't a counterexample, > as this has a perfectly straightforward PI and SI. Rather, I take it > your counterexample is some modifiction or "interpretation" of ($), on > which we "give the terms semantic values" according to their PI and SI > respectively in different cases. It's not clear to me in what sense > the result is a "sentence". > > Maybe given a sentence with a well-defined PI and SI, one can define > some abstract object which gets evaluated in possible worlds according > to some combination of the PIs and the SIs in the original sentence. > But this doesn't seem to correspond naturally to anything we find in > language. It doesn't seem to be meaningful to talk about the PI or SI > of such an abstract object; but that alone isn't obviously a problem > for two-dimensional semantics. 2-D semantics is meant to apply to > sentences in natural languages, not to arbitrary abstract objects. > > It seems to me that on any natural reading of ($), it gets evaluated > in worlds considered as actual according to the PI of both "water" and > "River Thames" -- one can't just stipulate that it will be evaluated > according to the SI of "River Thames". 
At least, if one did, one > would have a very different sentence where the word "River Thames" > means something different to what it means in ordinary English (a new > word whose PI happens to be the same as the SI of our word "River > Thames"). Even if one did this, one could now naturally evaluate the > PI and the SI of the new sentence in a pretty straightforward way; it > would have a PI that differs from that of the original sentence, but > that's only to be expected. > > I may be missing the way you intend the example to be understood, in > which case I'd be interested to hear it articulated. > > --Dave. > Some advice for drivers on the University of Arizona campus: "When passenger of foot heave in sight, tootle the horn. Trumpet him melodiously at first, but if he still obstacles your passage then tootle him with vigor." --From a brochure of a car rental firm in Tokyo ---------------------- Erik J Larson erikl@U.Arizona.EDU From owner-modality@LISTSERV.ARIZONA.EDU Tue Apr 27 11:49:56 1999 Date: Tue, 27 Apr 1999 11:48:10 -0700 Sender: "Philosophy 596B: Mind and Modality" From: Erik A Herman Subject: Intuitions on Components of Content To: MODALITY@LISTSERV.ARIZONA.EDU Status: R I would like to resolve my intuitions with the 2-D framework and I'm not quite sure how to do it. I've listed my intuitions on each of the 6 cases: 1) The twins' relation to their environment is identical. 2) There is a difference between saying "Hesperus is Hesperus", and "Hesperus is Phosphorus". The first is just a figurative way of saying "a thing is itself", which HAS meaning, just not a very important one because it's so trivial and obvious. The second, on the other hand, is more like saying "two things are the same when you think about it". Content doesn't just denote something's physical embodiment but its entire meaning. "the former is trivial and the latter is not" 3) Kripke's puzzle happens in real life. And the language issue doesn't do anything but explicate that Pierre doesn't see that THAT'S the "London" (however you wanna say it) that he's been told is pretty. Consider this real life scenario: I was told in phone conversations and e-mails "Mike is a cool guy." Then one night I was at a party and I met a guy, his name was Mike. Later, my friend asked if I had met Mike at the party, and being the flaky guy I am, I said, "Well yes, I met a guy named Mike, why?" and she responded "THAT'S MIKE!" (meaning, that's THE Mike she had been speaking of in our correspondence). At which time I "linked/combined" these two, otherwise separated references. Now there doesn't seem to be anything confusing about this, and yet it is similar, in the relevant way, to the Londres case. If I had thought Mike was a jerk when I met him, this contradiction would be resolved in the "linking/combining" stage--analyzing my reasons for thinking he's a jerk with her reasons for thinking he's cool and weighing everything in the mix. I might even keep BOTH contents because they differ in their relation to Mike. To make the analogy hold, we could say that at the party he introduced himself as "Michel" or something stupid like that but this doesn't do any work. 4) My favorite video game is a car race (I forget what it's called) that is at the Flying J truckstop halfway to Phoenix (that's relevant). When I play it and I believe that "I'm" going to crash into a guardrail, I take evasive action.
After I'm done playing, I get into the real car and I get yelled at by my girlfriend because I'm apparently driving like I'm still in the video game. The distinction is made in the differing degrees of indestructibility. There is no difference IN KIND between the case where I am indestructible and the case where I might actually get hurt, just a severe difference in degree of personal meaning they have. The same concept holds for the difference in degrees of recklessness in driving a car -vs- a motorcycle. 5) "Clark Kent" and "Superman" have two totally different meanings, the only similarity is that they occupy the same body. 6) This one has me stumped. -erik h. From owner-modality@LISTSERV.ARIZONA.EDU Tue Apr 27 13:00:54 1999 Date: Tue, 27 Apr 1999 12:56:54 -0700 Sender: "Philosophy 596B: Mind and Modality" From: Timothy J Bayne Subject: Components of Content To: MODALITY@LISTSERV.ARIZONA.EDU Status: R (1) First, a point about Thony's and Erik L's comments on why mixed intension sentences are naughty. If we take *concepts* as primary, then the strictures against mixed intension sentences would seem to be ad hoc. If, on the other hand, we take *thoughts* as primary, then it might be reasonable to hold that mixed intension thoughts are a no-no, b/c they're not really thoughts. Problem is, most semantics are compositional: take concepts as primary, and build up thoughts out of them. Reply: sure, semantics is compositional, but even so, a mixed intension sentence doesn't make any sense (or, doesn't give one a coherent thought), as Thony would no doubt be the first to agree. In the same way, sheer word salad, or what-have-you, is also nonsensical. So, since we're going to have to give some account of why that is not a legitimate thought anyway, why can't we just plug in this account to rule out mixed intension sentences? (2) I'm losing sleep over the issue of whether content is effable. (There are two ways in which one might cash effability out: thinkability, communicability/sayability. I'm going to stick with the latter.) Fodor says that his notion of narrow content is ineffable. I'm worried that *both* of Chalmers's notions of content are ineffable. There are different ways to put my worry and I'm not sure which is best, but here's a stab. Sometimes the account of notional/relational content sounds as if there are actually two distinct concepts per 'folk' concept or word. Or, to put it at the level of thoughts, if thoughts are individuated by their truth conditions, and if 'a' thought interpreted via 1-intension has one truth-condition and 'the same' thought interpreted via 2-intension has different truth-conditions, then there are different thoughts. Thus, it would be possible to have a language in which 1- and 2-intensions are explicitly distinguished at a syntactic level. In the same way that some languages build temporal discriminations into verb endings, our hypothetical language builds intensional discriminations into its words. *Indeed, I take it that this is just what we've done in this seminar when we write "water 1-intension".* Further, I take it that Chalmers sees cognitive psychology as moving towards a more purely notional/1-intensional language. (p. 12) BUT on the other hand, there is evidence that one can not, even in principle, prise the notional apart from the relational in this way.
The problem is that if we split folk concepts into two concepts, the notional and the relational, then *THERE IS NO END TO THIS PROCESS, FOR EVERY CONCEPT HAS A PRIMARY INTENSION AND A SECONDARY INTENSION.* In other words, it looks like you're going to get something like a 'what-Achilles-said-to-the-Tortoise' kind of problem. Tortoise: "What is the 1-intension of X?" Achilles: "X, Y and Z". T: "Do you mean X, Y and Z in the 1-intension sense, or the 2-intension sense?" A: "The 1-intension sense". T: "Ah, and that would be?" A: "F, G and H". T: "Ah, do you mean F, G and H in the 1-intension sense, or the 2-intension sense?" And so on. Chalmers says that 'we should not mistake the linguistic expression [of 1-intensions] for the real thing' (p. 13) Fair enough, but if there is a real thing here, should we be able to *unambiguously* refer to it? And it looks like we can't, for every attempt that one makes to refer to notional or relational content *via words* (and what else can one use?) can be interpreted in one of two ways, either 1-intensionally or 2-intensionally. (Actually, I think that there's more than one problem contained in the above, but it's lunchtime and I'm not going to rewrite it.) (3) I'm worried about the implications Kripke's rule-following has for the accessibility of notional content. Chalmers says that notional content is always (in principle, at least) accessible. It is internal to the thinker. But consider Kripke's plus/quus concerns. It's not exactly clear what these are, but here's a stab: Do I know what I mean by 'plus' merely by introspecting? No. In fact, I only mean something by plus by 'how I go on', that is, by the rule I follow *over time*. So how can I know what I mean by 'plus' *now* if what I do in fact mean is in part fixed by the rule that I follow? In a nutshell, the worry is that (certain) notional contents are fixed by rules that I follow over time, and these may be cognitively inaccessible to me *at a time*. t. Timothy J. Bayne RM. 213 Social Science Department of Philosophy University of Arizona Tucson, AZ 85721 USA Hm ph. (520) 298 1930 From owner-modality@LISTSERV.ARIZONA.EDU Thu Apr 29 00:33:43 1999 Date: Thu, 29 Apr 1999 00:05:00 -0700 Sender: "Philosophy 596B: Mind and Modality" From: Brad Thompson Subject: Re: Components of Content To: MODALITY@LISTSERV.ARIZONA.EDU Status: R Here are some comments on just some of Tim's message: >Chalmers says that 'we should not mistake the linguistic expression [of >1-intensions] for the real thing' (p. 13) Fair enough, but if there is a >real thing here, should we be able to *unambiguously* refer to it? We do unambiguously *refer* to primary intensions, simply by saying "the primary intension of x". What seems less clear is whether we can *specify* the primary intension, that is, give a conceptual analysis. And it >looks like we can't, for every attempt that one makes to refer to notional >or relational content *via words* (and what else can one use?) can be >interpreted in one of two ways, either 1-intensionally or 2-intensionally. I can't think of two intensions for "the primary intension of 'water'". There can be cases where the primary and the secondary intension coincide, and maybe *complete* conceptual analyses of primary intensions will consist in only these unambiguous statements. If not, can't we just stipulate the intended interpretation like above, by prefacing the statement with "the primary intension of..." or "the secondary intension of..."?
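One way to picture this refer-versus-specify point, as a toy sketch only (representing a concept as a pair of intensions over two stipulated worlds is an assumption made purely for illustration):

from collections import namedtuple

# Sketch: a concept bundles a primary and a secondary intension, each a
# function from (toy) worlds to extensions. Worlds and extensions are made up.
Concept = namedtuple("Concept", ["pi", "si"])

water = Concept(
    pi=lambda w: "H2O" if w == "Earth" else "XYZ",  # the watery stuff of w considered as actual
    si=lambda w: "H2O",                             # H2O in every world considered counterfactually
)

# "The primary intension of 'water'" picks out water.pi itself: one definite
# object, with no further 1-intension/2-intension ambiguity left to resolve.
the_pi_of_water = water.pi
print(the_pi_of_water("TwinEarth"))  # XYZ

# What remains hard is *specifying* that function in independent vocabulary
# (a conceptual analysis), which is a different task from referring to it.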
>(3) I'm worried about the implications Kripke's rule-following has for the >accessability of notional content. Chalmers says that notional content is >always (in principle, at least) accessible. It is internal to the thinker. >But consider Kripke's plus/quus concerns. It's not exactly clear what >these are, but here's a stab: Do I know what I mean by 'plus' merely by >introspecting? No. In fact, I only mean something by plus by 'how I go >on', that is, by the rule I follow *over time*. So how can I know what I >mean by 'plus' *now* if what I do in fact mean is in part fixed by the >rule that I follow? In a nutshell, the worry is that (certain) notional >contents are fixed by rules that I follow over time, and these may be >cognitively inaccessible to me *at a time*. I do think that you know merely by introspecting what you mean by "plus". I'm not prepared to give a full-scale argument against Kripke though. Would Kripke really say that the meaning of "plus" is fixed by the rule that you follow? This leaves meaning forever indeterminate, since at the time of your death we will still be left with an infinitude of possible rules that you *were* following. It has been awhile since I read it, but I thought that Kripke's resolution of the problem was to go social with meaning. If so, then this can easily be accomodated within the 2d framework. The primary intension of "plus' is [the rule that I am following as specified by my linguistic community] or something like that. This is no different in kind, and no more problematic, than the primary intension of water. Brad From owner-modality@LISTSERV.ARIZONA.EDU Thu Apr 29 15:28:26 1999 Date: Thu, 29 Apr 1999 15:14:13 -0700 Sender: "Philosophy 596B: Mind and Modality" From: Timothy J Bayne Subject: Re: Components of Content (fwd) To: MODALITY@LISTSERV.ARIZONA.EDU Status: R Here's a reply to Brad's objection to my Kripkean worry, I'm still thinking Brad's reply to my first worry. 1st paragraph is my original, then Brad's comment, then my reply. > > >(3) I'm worried about the implications Kripke's rule-following has for the > >accessability of notional content. Chalmers says that notional content is > >always (in principle, at least) accessible. It is internal to the thinker. > >But consider Kripke's plus/quus concerns. It's not exactly clear what > >these are, but here's a stab: Do I know what I mean by 'plus' merely by > >introspecting? No. In fact, I only mean something by plus by 'how I go > >on', that is, by the rule I follow *over time*. So how can I know what I > >mean by 'plus' *now* if what I do in fact mean is in part fixed by the > >rule that I follow? In a nutshell, the worry is that (certain) notional > >contents are fixed by rules that I follow over time, and these may be > >cognitively inaccessible to me *at a time*. > > I do think that you know merely by introspecting what you mean by > "plus". I'm not prepared to give a full-scale argument against Kripke > though. Would Kripke really say that the meaning of "plus" is fixed > by the rule that you follow? This leaves meaning forever > indeterminate, since at the time of your death we will still be left > with an infinitude of possible rules that you *were* following. It > has been awhile since I read it, but I thought that Kripke's > resolution of the problem was to go social with meaning. If so, then > this can easily be accomodated within the 2d framework. 
The primary > intension of "plus' is [the rule that I am following as specified by > my linguistic community] or something like that. This is no different > in kind, and no more problematic, than the primary intension of water. I think Brad is probably right in thinking that Kripke wants to locate the rules in social practices, but this cannot be accommodated with the framework as presented in "The Components of Content", b/c *I* don't have direct access to the rules of my community. The problem now isn't one of my having access to a diachronic fact at this moment, but *my* having access to a fact that is fixed by social practices. How do I know what rule the rest of the folks in my community are now following? Wouldn't that be like saying that I have direct introspective access to the fact that my community thinks that it is rude to burp at the table? If what I now mean is fixed by socially constituted facts, how do I know what I mean? t. Timothy J. Bayne RM. 213 Social Science Department of Philosophy University of Arizona Tucson, AZ 85721 USA Hm ph. (520) 298 1930 From owner-modality@LISTSERV.ARIZONA.EDU Thu Apr 29 15:43:32 1999 Date: Thu, 29 Apr 1999 15:23:14 -0700 Sender: "Philosophy 596B: Mind and Modality" From: David Chalmers Subject: Re: Components of Content To: MODALITY@LISTSERV.ARIZONA.EDU Status: R Re effability of notional content, and re Kripke on rule-following. Tim writes: >(2) I'm losing sleep over the issue of whether content is effable. (There >are two ways in which one might cash effability out: thinkability, >communicability/sayability. I'm going to stick with the latter.) Fodor >says that his notion of narrow content is ineffable. I'm worried that >*both* of Chalmers's notions of content are ineffable. There are different >ways to put my worry and I'm not sure which is best, but here's a stab. > >Sometimes the account of notional/relations content sounds as if there are >actually two distinct concepts per 'folk' concept or word. Or, to put it >at the level of thoughts, if thoughts are individuated by their truth >conditions, and if 'a' thought interpreted via 1-intension has one >truth-condition and 'the same' thought interpreted via 2-intention has >different truth-conditions, then there are different thoughts. Thus, it >would be posible to have a language in which 1- and 2-intensions are >explicitly distinguished at a syntactic level. In the same way that some >languages build temporal discriminations into verb endings, our >hypothetical language builds intensional discriminations into its words. >*Indeed, I take it that this is just what we've done in this seminar when >we write "water 1-intension".* Further, I take it that Chalmers sees >cognitive psychology as moving towards a more purely >notional/1-intensional language. (p. 12) > > >BUT on the other hand, there is evidence that one can not, even in >principle, prise the notional apart from the relational in this way. The >problem is that if we split folk concepts into two concepts, the notional >and the relational, then *THERE IS NO END TO THIS PROCESS, FOR EVERY >CONCEPT HAS A PRIMARY INTENSION AND A SECONDARY INTENSION.* In other >words, it looks like you're going to get something like a >'what-Achilles-said-to-the Tortoise' kind of problem. Tortoise: "What is >the 1-intension of X?" Achilles: "X, Y and Z". T: "Do you mean X, Y and Z >in the 1-itension sense, or the 2-intensions sense?" A: "The 1-intension >sense". T: "Ah, and that would be?" T: "F, G and H". 
T: "Ah, do you mean F >, G and H in the 1-intension sense, or the 2-intension sense?". and so on. OK, here's what I think. First, I do want to say that it is *one* concept that has both a primary intension and a secondary intension. E.g. a being might have just the concept of "water" and not the concept of "H2O". So they'll have just one concept in the vicinity, with both a primary intension and secondary intension. The primary intension will pick out watery stuff in all worlds, and the secondary intension will pick out H2O in all worlds, but that is not to say the being has separate concepts of "watery stuff" and of "H2O". There is just the one "water" concept, with a complex structure of semantic evaluation. Of course it will often happen that when a being has a concept with a primary intension and a secondary intension, that being will have two concepts in the vicinity which come close to mirroring each of these. For example, as a matter of fact, in addition to my concept of "water" I have a concept of "watery stuff" and "H2O". It isn't required that I have these distinct concepts to have the concept of "water", though. In effect, what is going on in those cases is that one has an initial concept C1 (e.g. "water") with PI(C1) = P and SI(C1) = S. Then one finds a further concept C2 (e.g. "watery stuff") such that PI(C2) = P and SI(C2) = P. That is, C2 is a concept such that *both* its PI and its SI mirror the PI of C1 -- so we can see C2 as in some sense "isolating" the PI of C1 more purely. And then one finds a still further concept C3 (e.g. "H2O") such that PI(C3) = S and SI(C3) = S. That is, C3 is a concept such that *both* its PI and its SI mirror the SI of C1 -- so we can see C2 as in some sense "isolating" the SI of C1 more purely. All this is nice to "articulate" the 2-D structure of the initial concept P1. But it certainly isn't required just to have the initial concept P1. To do that, one needs just one concept. It's the articulation that requires further concepts. Note that when we "articulate" a concept in this way, your Achilles/Tortoise problem doesn't quite come up, as the further concepts we've introduced have more or less the same PI and SI. Or at least, they are closer to doing so than the original concept. Still, it's not obvious to me to what extent we can really expect the PI and SI of our concepts to be "articulated". As far as I can tell, that more or less requires a reductive definition of either of these intensions in terms that don't involve the original concept. E.g., we want to define the PI and SI of "water" without appealing to water. Now that is sometimes possible and sometimes isn't, as philosophers well know from experience. But it's not clear to what extent such definability is really required to say that content is "effable". Take the concept of "knowledge" -- maybe not definable, but do we want to say it is ineffable? I don't think the PI and SI of our concepts in general will be any worse than this. Why do people sometimes say that the wide content of a concept such as "water" is effable? Perhaps because one can say something like: the SI of "water" picks out the water in all worlds. But that obviously isn't all that helpful -- one is just implicitly appealing to the SI to characterize the SI. And one could do something similar to characterize the PI. Alternatively, maybe they think the content here is effable because one can say the SI of "water" picks out H2O in all worlds. Here we have something like a reductive definition. 
But (a) it's not clear this is possible for all concepts (it's not obvious that it really even works for "water"), and (b) something similar can work for primary intensions, via e.g. "watery stuff". Of course what goes on is, in effect, that we find another concept ("H2O") which has the same SI as "water", and whose PI is a lot closer to that SI too. So similarly, with the PI, we find another concept ("watery stuff") which has the same PI as "water", and whose SI is a lot closer to that PI too. That seems a legitimate thing to do. Whether this sort of reductive definability is possible for all concepts, who knows? My own view is that it probably isn't possible for some central concepts, such as e.g. concepts of experience, cause, space, and time (cf. discussions of global Ramsification in Jenann's seminar). For other concepts (e.g. knowledge), approximations at reductive definitions may be possible, but not perfect definitions. And for others (e.g. bachelor), there may be near-perfect definitions. It seems to me that this was the position that we knew we were in before thinking about these things in terms of the 2-D framework. The 2-D framework doesn't seem to change things too much. We can try to characterize a concept's PI in terms of the PI or the SI of other concepts, and we can try to characterize a concept's SI in terms of the PI or the SI of other concepts, and we'll succeed to differing extents in different cases. >Chalmers says that 'we should not mistake the linguistic expression [of >1-intensions] for the real thing' (p. 13) Fair enough, but if there is a >real thing here, should we be able to *unambiguously* refer to it? And it >looks like we can't, for every attempt that one makes to refer to notional >or relational content *via words* (and what else can one use?) can be >interpreted in one of two ways, either 1-intensionally or 2-intensionally. >(Actually, I think that there's more than one problem contained in the >above, but it's lunchtime and I'm not going to rewrite it.) Two things to say here. (1) If we characterize a PI or SI in terms of an expression E such that E's PI and SI are more or less the same, this problem won't really come up. (2) It seems to me that we can avoid ambiguity in any case simply by saying that we are talking about the PI of E, or the SI of E. So we can say: the PI of concept C corresponds more or less to the PI of E, where hopefully E has a more transparent structure. And same for the SI. So hopefully this gives us some sort of effability. Here I am just repeating what Brad said, I think. Sometimes people say narrow content is ineffable basically because it is hard to characterize the PI of a concept C in terms of the *SI* of another concept E. (E.g., the narrow content of "water" is the function that picks out such-and-such in all counterfactual worlds.) But as I say in the paper, that seems to be something of an unfair requirement. Just as we can characterize the SI of one concept in terms of the SI of others, we can characterize the PI of one concept in terms of the PI of others. Re Kripke on rule-following, Tim wrote: >(3) I'm worried about the implications Kripke's rule-following has for the >accessibility of notional content. Chalmers says that notional content is >always (in principle, at least) accessible. It is internal to the thinker. >But consider Kripke's plus/quus concerns. It's not exactly clear what >these are, but here's a stab: Do I know what I mean by 'plus' merely by >introspecting? No.
In fact, I only mean something by plus by 'how I go >on', that is, by the rule I follow *over time*. So how can I know what I >mean by 'plus' *now* if what I do in fact mean is in part fixed by the >rule that I follow? In a nutshell, the worry is that (certain) notional >contents are fixed by rules that I follow over time, and these may be >cognitively inaccessible to me *at a time*. Hmm, interesting. I certainly don't claim to have a full answer to the Kripke/Wittgenstein problem. But the sense in which I have access to a PI is not (or at least not obviously) the sense in which I can specify a PI all at once, or grasp it all at one time, or some such. Think again of the case of "knowledge". The sense in which we have access to a PI is the sense in which, *given* a description of a scenario (or ultimately a world), we can say how and whether the PI applies to it. It's best ultimately to do this with thoughts rather than concepts (because of problems with inscrutability of reference). The PI of a thought is a function from worlds to truth-values; we have access to the PI because given a qualitative description of a world, we can determine whether the PI of the thought is true. That's to say, we can evaluate the function in question at any world, and we can do this a priori. It seems to me that we have something like this in the "plus" case. For any given statement involving plus -- "56 + 65 = 121", for example -- we're in a position to know whether it is true or false (of course if it is true in one world it is true in any world). So we have access to the PI of these statements. It's true that in evaluating these statements, one is doing something new. And there is a deep philosophical puzzle about just what in my prior state determines that my answer to this question is "true" rather than "false". But I don't think we need to solve this puzzle here. We can just take it as an intuitive datum that there *is* a correct answer to give to these questions, upon rational reflection, and define the PI in terms of that. The philosophical problem of just how this determinate intensionality is grounded remains, as it does for any semantic theory, but we don't need to solve all our problems at once! For my part, I think this is part of the project of "naturalizing intentionality", one that I haven't talked much about in this seminar. Specifically, what "natural" facts make it the case that a thinker has a concept with one PI rather than another? Personally I think this will be grounded in a combination of functioning and phenomenology, by virtue of certain idealization principles which are built into our intentional concepts, but that's a long story. I suppose another way the K/W problem comes up is in yielding a determinate PI for "plus" thoughts involving billion-digit numbers, say, such that I will never in fact be able to give a determinate judgment. Here I need again to appeal to an idealization, going with what a less limited reasoner would say, or what would be the rationally correct thing to say. Again, the question of just what grounds that is tricky, but here I am simply appealing to the very strong intuition that there is indeed a rationally correct answer to these questions. If that intuition turned out to be wrong (perish the thought!), it might then turn out that the PIs of our concepts are less determinate than we think they are. --Dave.
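A schematic rendering of the C1/C2/C3 "articulation" point from the message above, again with toy worlds and made-up extensions standing in for the real intensions (all names here are assumptions for illustration):

WORLDS = ["Earth", "TwinEarth"]

def P(w):
    # stand-in for the "watery stuff" intension
    return "H2O" if w == "Earth" else "XYZ"

def S(w):
    # stand-in for the "H2O" intension
    return "H2O"

concepts = {
    "water": {"PI": P, "SI": S},         # C1: its two intensions come apart
    "watery stuff": {"PI": P, "SI": P},  # C2: both intensions mirror C1's PI
    "H2O": {"PI": S, "SI": S},           # C3: both intensions mirror C1's SI
}

def agree(f, g):
    # Two intensions agree iff they give the same extension at every toy world.
    return all(f(w) == g(w) for w in WORLDS)

print(agree(concepts["watery stuff"]["PI"], concepts["water"]["PI"]))  # True: C2 isolates C1's PI
print(agree(concepts["H2O"]["SI"], concepts["water"]["SI"]))           # True: C3 isolates C1's SI
print(agree(concepts["water"]["PI"], concepts["water"]["SI"]))         # False: C1 has the 2-D structure all by itself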
From owner-modality@LISTSERV.ARIZONA.EDU Mon May 3 13:05:52 1999 X-Accept-Language: en Date: Mon, 3 May 1999 12:22:41 -0700 Sender: "Philosophy 596B: Mind and Modality" From: Josh Cowley Subject: PIs as narrow content To: MODALITY@LISTSERV.ARIZONA.EDU Status: R I'm having a worry about PIs being something that is entirely in the head (i.e., internal).
In particular I'm not sure if they can do the work we want them to if they are restricted to the head. Here is a sketch of the argument. PIs are supposed to be functions from worlds to referents. Dave suggested that PIs might supervene on the physical/phenomenal state of the individual. The problem is combining 2D semantics with a reductive theory of semantics. A reductive theory of semantics is one which does not use any semantic terms in its terminology. Some of what people have been worried about in cashing out narrow content is how something that is *just* internal is going to pick out referents in the world. For example, if you are not allowed to use semantic terms it is hard to see how my concept "water" picks out water in the actual world. Since we are not allowed to use any sort of physical or causal chain to connect "water" to water, one wonders why it doesn't pick out toothpicks. People have attempted to give reductive semantic theories which are in the head. I want to get on to why this is a problem for 2D semantics so I'll just give a quick example to motivate the problem. Conceptual role semantics (CR) claims that a concept gets its meaning by the role that it plays in a person's conceptual structure. For example, the role it plays in inferences, belief generation etc. Now this is clearly all in the head, but the question is whether it is enough to get reference. There are two ways we can go about getting the referent of "water." The first is to show the relationship between water and other concepts. For instance, "water" is a clear liquid which quenches thirst. It isn't an animal. It makes up the bulk of the lakes and rivers etc. This fits well with 2D semantics, but it defines "water" by other terms whose meaning is fixed. So this isn't really a reductive theory. The second way is to look at all the relations between all concepts, then find a set of objects and relations in the world which has a 1to1 correspondence to the relations of the concepts. The problem with this method is that you can always find some set of relations holding between objects which will be in 1to1 correspondence with the conceptual role relations. That was a little sloppy, but others have argued for it more carefully and similar arguments have been raised against other internal semantic theories. So suppose it is true that no strictly internal theory can determine a referent. Why does this cause problems for saying that PIs supervene on the physical state of the brain? Well, a PI is a function from worlds to referents. In order to be a function it has to pick out only one referent per world. But internal theories don't seem able to do this. From owner-modality@LISTSERV.ARIZONA.EDU Mon May 3 13:25:00 1999 Date: Mon, 3 May 1999 13:23:05 -0700 Sender: "Philosophy 596B: Mind and Modality" From: Erik J Larson Subject: Re: PIs as narrow content To: MODALITY@LISTSERV.ARIZONA.EDU Status: R I may be missing Josh's point here, but I gather that he thinks that PIs are problematic because they are functions, and functions have to pick out unique referents, but PIs are committed to internal semantic theories, and these don't seem capable of pin-pointing unique referents. I have a couple of comments, although I'm not entirely sure they are on track with what Josh is getting at. First, on the 2D view, both primary and secondary intensions are functions, and hence both are committed to returning unique referents as outputs.
The problem, I suppose, is that only primary intensions are supposed to be confined to the head, and so are vulnerable to problems with internal theory semantics. It seems like one could argue that there isn't much of a distinction here however, and that primary and secondary intensions are functions that are either both vulnerable to problems with internal semantics, or both okay, for roughly the same reasons. However we pick out water, as the clear drinkable liquid of a PI, or the H2O or XYZ of a SI, it seems that the story of how we manage to refer to what we do is going to be pretty similar in both cases. So, we might have a problem with our semantics getting the unique referent of a function, but it will apply equally to watery stuff or H2O. More generally, I think that worrying about reference re functions is an interesting problem, but it probably applies to some pretty standard thought (eg arithmetic). I take it this is Kripke's point in his "Kripgenstein" argument. So, I suppose this means we're up against a puzzling situation, but it's no problem for PIs alone, as Josh seems to suggest. Erik On Mon, 3 May 1999, Josh Cowley wrote: > I'm having a worry about PIs being something that is entirely in the > head (i.e., internal). In particular I'm not sure if they can do the > work > we want them to if they are restricted to the head. Here is a sketch > of the argument. > > PIs are supposed to be functions from worlds to referents. Dave > suggested that PIs might supervene on the physical/phenomenal state of > the individual. The problem is combining 2D semantics with a > reductive theory of semantics. A reductive theory of semantics is one > which does not use any semantic terms in its terminology. Some of > what people have been worried about in cashing out narrow content is > how something that is *just* internal is going to pick out referents > in the world. For example, if you are not allowed to use semantic > terms it is hard to see how my concept "water" picks out water in the > actual world. Since we are not allowed to use any sort of physical or > causal chain to connect "water" to water, one wonders why it doesn't > pick out toothpicks. > > People have attempted to give reductive semantic theories which are in > the head. I want to get on to why this is a problem for 2D semantics > so I'll just give a quick example to motivate the problem. Conceptual > role semantics (CR) claims that a concept gets its meaning by the role > that it plays in a person's conceptual structure. For example, the > role it plays in inferences, belief generation etc. Now this is > clearly all in the head, but the question is whether it is enough to > get reference. There are two ways we can go about getting the > referent of "water." The first is to show the relationship between > water and other concepts. For instance, "water" is a clear liquid > which quenches thirst. It isn't an animal. It makes up the bulk of > the lakes and rivers etc. This fits well with 2D semantics, but it > defines "water" by other terms whose meaning is fixed. So this > isn't really a reductive theory. The second way is to look at all the > relations between all concepts, then find a set of objects and > relations in the world which has a 1to1 correspondence to the > relations of the concepts. The problem with this method is that you > can always find some set of relations holding between objects which > will be in 1to1 correspondence with the conceptual role relations.
> > That was a little sloppy, but others have argued for it more carefully > and similar arguments have been raised against other internal > semantic theories. So suppose it is true that no strictly internal > theory can determine a referent. Why does this cause problems for > saying that PIs supervene on the physical state of the brain? Well, a > PI is a function from worlds to referents. In order to be a function > it has to pick out only one referent per world. But internal theories > don't seem able to do this. > Some advice for drivers on the University of Arizona campus: "When passenger of foot heave in sight, tootle the horn. Trumpet him melodiously at first, but if he still obstacles your passage then tootle him with vigor." --From a brochure of a car rental firm in Tokyo ---------------------- Erik J Larson erikl@U.Arizona.EDU From owner-modality@LISTSERV.ARIZONA.EDU Tue May 4 10:47:25 1999 Date: Tue, 4 May 1999 10:43:42 -0700 Sender: "Philosophy 596B: Mind and Modality" From: Erik A Herman Subject: content To: MODALITY@LISTSERV.ARIZONA.EDU Status: R I think that only wide content exists. The subject's relation to a concept is what the concept consists in. For instance, there is nothing internal to Bert that distinguishes the two worlds. There is no "hard content" where there is a one to one correspondence between the content across worlds and the object itself. The mind just aims, shoots, and hopefully hits. It seems odd for me to be able to vary worlds and keep contents constant. Every subject is centered on their respective world--I don't get how the content could possibly hold across worlds or what such a concept could possibly mean. Where this is particularly clear to me is the case where Bert might say: "that is pretty" (and points). So I would guess that I agree with Loar if I understand his realization conditions correctly. In my view, contents are definitely context dependent and lose their meaning when extrapolated from their respective world. Erik H. From owner-modality@LISTSERV.ARIZONA.EDU Tue May 4 14:38:32 1999 Date: Tue, 4 May 1999 14:37:16 -0700 Sender: "Philosophy 596B: Mind and Modality" From: Anthony T Lane Subject: Re: narrow content and justification To: MODALITY@LISTSERV.ARIZONA.EDU Status: R Thony makes the case for the importance of narrow content and draws the following conclusion: > > I'm being unreasonable in the first case, and reasonable in the second. > In an important sense, my belief that the thing is bent is unjustified in > the first case. If all that matters in beliefs is their wide content, we > shouldn't have conflicting intuitions in these cases: there is no water > that my beliefs are about. We clearly have conflicting intuitions about > the cases, so there is some other kind of content that matters in > justification. > I guess I have some sense of conflicting intuitions in this case, but it seems that the sense of conflict goes away if one considers exactly what question one is asking. If we ask whether the envatted individuals in 1 and 2 are justified in the beliefs they form, then clearly it seems that 1's belief is not justified and 2's is. But it seems to be a different question when we look at them from an external perspective and pronounce that, in both cases, their beliefs are unjustified. It seems that the conflicting intuitions one has in this case just correspond to the different answers to the two questions.
Ultimately, the two questions seem to correspond to PIs and SIs, and, once again, the disagreement arises from our different inclinations as to which is more essential. Anthony From owner-modality@LISTSERV.ARIZONA.EDU Mon May 3 13:29:50 1999 x-sender: agillies@pop.u.arizona.edu Date: Mon, 3 May 1999 13:43:55 -0700 Sender: "Philosophy 596B: Mind and Modality" From: Anthony S Gillies Subject: narrow content and justification To: MODALITY@LISTSERV.ARIZONA.EDU Status: R Another reason why we need narrow content: although it looks like wide content underwrites belief ascription, narrow content is what underwrites ascription of justification w.r.t. beliefs. Here's a sketch of how. Consider the brain in a vat case. There is nothing in the world (outside the head) which justifies any of my beliefs, since my beliefs are about what I take to be the physical world made up of tables, chairs, water, books, cookies, and so on. But, by hypothesis, there are no such things outside my head; in reality it's just a big vat, with some liquid and electrodes. That's it. So my beliefs are not about (wide) e.g. water. But, borrowing a nice argument of Stew Cohen's, we still want to distinguish two different possible scenarios in the vat. 1. In the first scenario, suppose I have lots of beliefs about how water can refract light and how this influences the way things appear to me. So, I believe that I can't trust the shape of how things look to me when I believe they are partially immersed in water. Say I think that I see a bent stick, and draw the conclusion that it is bent. (Of course, the conclusion is false b/c I'm really just a brain in a vat.) Then suppose I also form the belief that the thing is partially immersed in water. Nevertheless (and without any other relevant beliefs) I continue to hold on to my belief that the thing is bent. 2. Now suppose everything is as in 1, except that upon forming the belief that the thing is partially immersed in water, I withdraw my belief that the thing is bent. I'm being unreasonable in the first case, and reasonable in the second. In an important sense, my belief that the thing is bent is unjustified in the first case. If all that matters in beliefs is their wide content, we shouldn't have conflicting intuitions in these cases: there is no water that my beliefs are about. We clearly have conflicting intuitions about the cases, so there is some other kind of content that matters in justification. Thony "Curious green ideas sleep furiously." From owner-modality@LISTSERV.ARIZONA.EDU Wed May 5 12:13:16 1999 X-Accept-Language: en Date: Wed, 5 May 1999 12:09:28 -0700 Sender: "Philosophy 596B: Mind and Modality" From: Josh Cowley Subject: More issues about a reductive theory To: MODALITY@LISTSERV.ARIZONA.EDU Status: R Some thoughts on yesterday's discussion. The PI of a concept is a function from worlds to referents. In order to give a reductive analysis of PIs we are going to have to find something internal that can act as a function from worlds to referents. It seems to me that nothing internal is going to really be a function from *worlds* to referents. Worlds are not psychological entities. Instead we are going to need some psychological entities which represent or perhaps are just in a 1to1 correspondence with worlds (for ease I'm going to suppose they represent worlds). So a reductive analysis of PIs is going to consist in finding a function from psychological representations to referents. Now my worry is what the referents are going to be in this case.
Now my worry is what the referents are going to be in this case. The referent either has to be something in the representation or something in the world the representation is of. For example, consider the reductive primary intension of water given a representation of Twin Earth as its input. Either it has to pick out the representation of XYZ in my Twin Earth representation or it needs to pick out XYZ in the Twin Earth world.

At the moment I don't have any argument that it is going to have to be one or the other. But I do have some general concerns. In order to make reference in the actual world work out, the range of the function had better be things in worlds and not representations of them. If the range of the function is representations, then in the actual world my term "Dave Chalmers" refers to that part of my actual world representation which represents Dave Chalmers. But we want "Dave Chalmers" to refer to Dave Chalmers himself, not a representation of him.

Now you might suggest that if you found a function from "Dave Chalmers" to the part of my world representation which represented Dave Chalmers, then you could simply extend the function by saying "Dave Chalmers" refers to whatever is represented by the part of my world representation which is picked out. But this ceases to be a reductive definition. Now we need to know what the Dave Chalmers part of the world representation represents. And that is just more semantics.
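Here is the extension move in toy form, just to make vivid where the non-reductive step sneaks in. The names and data structures are invented for the example; nothing here is a serious proposal:

# Toy illustration of the "extension" move (all names invented).
world_representation = {
    "Dave Chalmers": {"label": "rep-of-Dave",
                      "represents": "Dave Chalmers himself"},
}

def internal_lookup(term):
    """Step 1 (arguably in the head): map a term to a part of my world representation."""
    return world_representation[term]

def extended_reference(term):
    """Step 2 (the extension move): say the term refers to whatever that part represents.
    But 'represents' is itself a semantic relation between a representation and a thing
    in the world, so this step is no longer reductive."""
    return internal_lookup(term)["represents"]

print(internal_lookup("Dave Chalmers")["label"])  # rep-of-Dave
print(extended_reference("Dave Chalmers"))        # Dave Chalmers himself

The second function only delivers the right referent because we helped ourselves to the 'represents' relation, and saying what that relation comes to is the semantic job we were supposed to be doing in the first place.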
One last point; I think there are two assumptions underlying this worry. 1) PIs are internal (or supervene on what is in the head). 2) You (or your thought) don't have to be in a world in order to refer to something in the world. If you ditch both of these then you can use worlds themselves as the input to even a reductive function. Of course, this raises other problems.

Josh

From owner-modality@LISTSERV.ARIZONA.EDU Fri May 7 10:07:37 1999
x-sender: agillies@pop.u.arizona.edu
Date: Fri, 7 May 1999 10:21:52 -0700
Sender: "Philosophy 596B: Mind and Modality"
From: Anthony S Gillies
Subject: more on narrow content and justification
To: MODALITY@LISTSERV.ARIZONA.EDU
Status: R

In looking over the Block & Stalnaker and Stalnaker pieces again, I'm getting a bit puzzled why people are so certain that there ain't any such beast as narrow content. This dovetails a bit with what I wrote last time on justification, so I'll expand on that as well. Anthony responded to my call for narrow content this way:

>I guess I have some sense of conflicting intuitions in this case, but it
>seems that the sense of conflict goes away if one considers exactly what
>question one is asking. If we ask whether the envatted individuals in 1
>and 2 are justified in the beliefs they form, then clearly it seems that
>1's belief is not justified and 2's is. But it seems to be a different
>question when we look at them from an external perspective and pronounce
>that, in both cases, their beliefs are unjustified.

I think that there is probably an ambiguity on 'justification' lurking around here. In one sense, S's belief in P is justified if S's belief in P correlates with or tracks truth to some specified degree. And whether a belief tracks truth is, presumably, a fact external to the believer. This, of course, is the sort of justification that Alvin wants to talk about. Anthony is right that in this sense of 'justification' the envatted believer has unjustified beliefs in both cases. But Cohen's argument is that this is an uncomfortable consequence for externalists about justification. We have clear intuitions that go different ways in the two cases. And this is taken to suggest that there is another---perfectly respectable---notion of 'justification' which is essentially procedural in nature. Theories of justification are theories for rational belief acquisition, retention, and revision. And these are notions that can be explicated by appeal to stuff in the head (and nothing else). So there is an equally well-defined sense of 'justification' which relies on the mental states of the cognizer. And wide content cannot make sense of this sort of justification. And that, in turn, suggests that there is another sort of content that must do the job.

It's been a long time coming, but I think people in epistemology are starting to come around to recognizing that there are at least these two different senses of 'justification', and that one need not be taken as more basic than the other. (Historical note: arguably, the internal sense of 'justification' is the sense we find in Descartes and Hume.) If that's right, then if it's also right that narrow content is needed for internalist epistemology, then we need not think that wide content is basic and narrow content derived, or vice versa.

So, when Block & Stalnaker and Stalnaker complain that narrow content can't be cleanly derived from wide content, an answer (probably an answer that Dave wants to give anyway) is basically a "So what?" reply: So narrow can't be derived from wide. Big deal. Internalist justification can't be derived from externalist justification, direct realism can't be derived from reliabilism. That just points to there being *two* phenomena in need of explanation. There is a clear sense of narrow content, and everybody knows it: it's the content that is needed (whatever the theory turns out to look like) to undergird procedural justification. So there.

Cheers,
Thony

"Curious green ideas sleep furiously."

From owner-modality@LISTSERV.ARIZONA.EDU Wed May 5 22:40:17 1999
Date: Wed, 5 May 1999 22:38:57 -0700
Sender: "Philosophy 596B: Mind and Modality"
From: Erik J Larson
Subject: Frogs, flies, salamanders, and baseball hats
To: MODALITY@LISTSERV.ARIZONA.EDU
Status: R

On Wed, 5 May 1999, Josh Cowley wrote:

> Some thoughts on yesterday's discussion. The PI of a concept is a
> function from worlds to referents. In order to give a reductive
> analysis of PIs we are going to have to find something internal that
> can act as a function from worlds to referents. It seems to me that
> nothing internal is going to really be a function from *worlds* to
> referents. Worlds are not psychological entities. Instead we are
> going to need some psychological entities which represent or perhaps
> are just in a 1-to-1 correspondence with worlds (for ease I'm going to
> suppose they represent worlds). So a reductive analysis of PIs is
> going to consist in finding a function from psychological
> representations to referents.

My post starts here:

Some thoughts on primary intensions and reductive analysis. We want a reductive analysis of a primary intension that satisfies the following constraints: one, it is, as Josh says, "in the head" or psychological; and two, it is a reduction of the primary intension as a function from centered worlds to referents. So, what is a psychological reduction of a function from centered worlds to referents? That's an interesting question.
As Josh notes, "worlds" are not the sorts of objects that we can leave alone in a psychological reduction, because of course the brute notion of a possible world is not something that is likely to have a psychological realization simpliciter. So we need something more psychologically plausible, perhaps symbols in a LOT or at least something broadly representational. Now, I don't have any idea how this would work, and there are of course all the standard objections to reductive semantics of this sort. But it occurs to me that the characterization of a primary intension as a function is not problematic for psychological explanation per se, or if it is, it isn't any more puzzling than other functions that we employ in cognition.

A rational agent can do arithmetic. That's in the head. It takes inputs from a specified domain--natural numbers, not worlds--and produces a unique output for each, and the set of those outputs constitutes the range (a bunch more natural numbers in this case). But it seems that this is all in the head too. And so we have this miraculous, internalist function--arithmetic! And it needs some psychological reduction, but we can't just take the NATURAL NUMBERS as psychologically brute, because those things are not psychological entities. And you can see the rest.

So, my point is just this. With Josh, I agree that there is a deep puzzle about how we could get an internal, psychological reduction of something like a primary intension, but unlike Josh (perhaps) I don't see this as a problem for a primary intension alone, especially when considered as a function whose domain and range are not prima facie psychological entities. This is just what we expect from the functions we commonly employ in cognition, whether we are balancing our checkbook or working out what the referent of a term is in some world considered as actual.

Of course, there is a lot of ambiguity here about just what we mean by a reductive explanation in these cases, and just how far down it has to go. On a broad scale this sort of concern reflects the confusion over what to do with "rationality" and conditions of normativity generally in psychological reductions of "thinking". So, consider: 1) we have the ability to use functions with potentially infinite domains and ranges in cognition, and 2) by virtue of rational reflection, we have the ability to see whether the output of our functions is correct. It seems to me that there is some puzzle about 1) (e.g., to what extent do we have to represent all this stuff cognitively, and how exactly is it represented?) but that 2) is the crux of the puzzle about giving a reductive analysis of a primary intension. I can return a referent as the output from a centered world. How is this referent the right one? Well, what is the intuitive, rational thing to say about cases where primary intensions are evaluated? That question is just the question of how we can get a psychological account of rationality, and that is a tricky one indeed. How do we get it from causal or functional processes, or even more puzzling, from a bunch of grey stuff in our skulls? So perhaps the analysis of a primary intension is going to have to wait until we understand better how we do a priori reasoning, e.g., how we manage to be rational at all.
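To make the arithmetic analogy concrete, here is a toy bit of Python (nothing in it is 2-D machinery, and the function name is made up): a finite, head-sized procedure that computes a function over the natural numbers, which are not themselves psychological entities. Notice that the procedure actually manipulates numerals, i.e. representations of numbers, so Josh's representation issue shows up here too.

# Toy example: the schoolbook addition procedure over decimal numerals.
# A finite, internal routine that implements a function on an unbounded,
# non-psychological domain (the natural numbers).
def add_numerals(a, b):
    """Add two natural numbers given as decimal strings, digit by digit."""
    result, carry = [], 0
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    for da, db in zip(reversed(a), reversed(b)):
        carry, digit = divmod(int(da) + int(db) + carry, 10)
        result.append(str(digit))
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(add_numerals("4758", "367"))  # prints: 5125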
> ps Hey Rachael, did you get that paper turned in for Goldmann?

Erik L.

> Now my worry is what the referents are going to be in this case. The
> referent either has to be something in the representation or something
> in the world the representation is of. For example, consider the
> reductive primary intension of water given a representation of Twin
> Earth as its input. Either it has to pick out the representation of
> XYZ in my Twin Earth representation or it needs to pick out XYZ in the
> Twin Earth world.
>
> At the moment I don't have any argument that it is going to have to be
> one or the other. But I do have some general concerns. In order to
> make reference in the actual world work out, the range of the function
> had better be things in worlds and not representations of them. If the
> range of the function is representations, then in the actual world my
> term "Dave Chalmers" refers to that part of my actual world
> representation which represents Dave Chalmers. But we want "Dave
> Chalmers" to refer to Dave Chalmers himself, not a representation of
> him.
>
> Now you might suggest that if you found a function from "Dave
> Chalmers" to the part of my world representation which represented
> Dave Chalmers, then you could simply extend the function by saying
> "Dave Chalmers" refers to whatever is represented by the part of my
> world representation which is picked out. But this ceases to be a
> reductive definition. Now we need to know what the Dave Chalmers part
> of the world representation represents. And that is just more
> semantics.
>
> One last point; I think there are two assumptions underlying this
> worry. 1) PIs are internal (or supervene on what is in the head). 2)
> You (or your thought) don't have to be in a world in order to refer
> to something in the world. If you ditch both of these then you can
> use worlds themselves as the input to even a reductive function. Of
> course, this raises other problems.
>
> Josh
>

Some advice for drivers on the University of Arizona campus: "When passenger of foot heave in sight, tootle the horn. Trumpet him melodiously at first, but if he still obstacles your passage then tootle him with vigor."
--From a brochure of a car rental firm in Tokyo
----------------------
Erik J Larson
erikl@U.Arizona.EDU