Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.

2.5g. Collective Intentionality (Collective Intentionality on PhilPapers)

Becchio, Cristina & Bertone, Cesare (2004). Wittgenstein running: Neural mechanisms of collective intentionality and we-mode. Consciousness and Cognition 13 (1):123-133.   (Cited by 2 | Google)
Chant, Sara Rachel & Ernst, Zachary (2007). Group intentions as equilibria. Philosophical Studies 133 (1).   (Google)
Abstract: In this paper, we offer an analysis of ‘group intentions.’ On our proposal, group intentions should be understood as a state of equilibrium among the beliefs of the members of a group. Although the discussion in this paper is non-technical, the equilibrium concept is drawn from the formal theory of interactive epistemology due to Robert Aumann. The goal of this paper is to provide an analysis of group intentions that is informed by important work in economics and formal epistemology.
Chant, Sara Rachel (2007). Unintentional collective action. Philosophical Explorations 10 (3):245 – 256.   (Google)
Abstract: In this paper, I examine the manner in which analyses of the action of single agents have been pressed into service for constructing accounts of collective action. Specifically, I argue that the best analogy to collective action is a class of individual action that Carl Ginet has called 'aggregate action.' Furthermore, once we use aggregate action as a model of collective action, then we see that existing accounts of collective action have failed to accommodate an important class of (what I shall call) 'unintentional collective actions.'
Giere, Ronald N. (2004). The problem of agency in scientific distributed cognitive systems. Journal of Cognition and Culture 4 (3-4):759-774.   (Google)
Abstract: From the perspective of cognitive science, it is illuminating to think of much contemporary scientific research as taking place in distributed cognitive systems. This is particularly true of large-scale experimental and observational systems such as the Hubble Telescope. Clark, Hutchins, Knorr-Cetina, and Latour insist or imply that such a move requires expanding our notions of knowledge, mind, and even consciousness. Whether this is correct seems to me not a straightforward factual question. Rather, the issue seems to be how best to develop a theoretical understanding of such systems appropriate to the study of science and technology. I argue that there is no need to attribute to such systems as a whole any form of cognitive agency. We can well understand the importance of such systems while restricting agency to the human components. The implication is that we think of these large-scale distributed cognitive systems not so much as unified wholes, but as hybrid systems including both physical artifacts and ordinary humans.
Gureckis, Todd M. & Goldstone, Robert L. (2006). Thinking in groups. Pragmatics and Cognition 14 (2):293-311.   (Cited by 1 | Google | More links)
Harnad, Stevan (2005). Distributed processes, distributed cognizers and collaborative cognition. Pragmatics and Cognition 13 (3):501-514.   (Cited by 10 | Google | More links)
Abstract: Cognition is thinking; it feels like something to think, and only those who can feel can think. There are also things that thinkers can do. We know neither how thinkers can think nor how they are able to do what they can do. We are waiting for cognitive science to discover how. Cognitive science does this by testing hypotheses about what processes can generate what doing (“know-how”). This is called the Turing Test. It cannot test whether a process can generate feeling, hence thinking -- only whether it can generate doing. The processes that generate thinking and know-how are “distributed” within the heads of thinkers, but not across thinkers’ heads. Hence there is no such thing as distributed cognition, only collaborative cognition. Email and the Web have spawned a new form of collaborative cognition that draws upon individual brains’ real-time interactive potential in ways that were not possible in oral, written or print interactions.
Hornsby, Jennifer (1997). Collectives and intentionality. Philosophy and Phenomenological Research 57 (2):429-434.   (Cited by 9 | Google | More links)
Hutchins, Edwin (1995). Cognition in the Wild. MIT Press.   (Cited by 9 | Google | More links)
List, Christian (2003). Distributed cognition: A perspective from social choice theory. In M. Albert, D. Schmidtchen & S. Voigt (eds.), Scientific Competition: Theory and Policy, Conferences on New Political Economy. Mohr Siebeck.   (Cited by 1 | Google)
Abstract: Distributed cognition refers to processes which are (i) cognitive and (ii) distributed across multiple agents or devices rather than performed by a single agent. Distributed cognition has attracted interest in several fields ranging from sociology and law to computer science and the philosophy of science. In this paper, I discuss distributed cognition from a social-choice-theoretic perspective. Drawing on models of judgment aggregation, I address two questions. First, how can we model a group of individuals as a distributed cognitive system? Second, can a group acting as a distributed cognitive system be ‘rational’ and ‘track the truth’ in the outputs it produces? I argue that a group’s performance as a distributed cognitive system depends on its ‘aggregation procedure’ – its mechanism for aggregating the group members’ inputs into collective outputs – and I investigate the properties of an aggregation procedure that matter.
List, Christian & Pettit, Philip (2006). Group agency and supervenience. Southern Journal of Philosophy 44:85-105.   (Cited by 8 | Google)
Abstract: Can groups be rational agents over and above their individual members? We argue that group agents are distinguished by their capacity to mimic the way in which individual agents act and that this capacity must 'supervene' on the group members' contributions. But what is the nature of this supervenience relation? Focusing on group judgments, we argue that, for a group to be rational, its judgment on a particular proposition cannot generally be a function of the members' individual judgments on that proposition. Rather, it must be a function of their individual sets of judgments across many propositions. So, knowing what the group members individually think about some proposition does not generally tell us how the group collectively adjudicates that proposition: the supervenience relation must be 'set-wise', not 'proposition-wise'. Our account preserves the individualistic view that group agency is nothing mysterious, but also suggests that a group agent may hold judgments that are not directly continuous with its members' corresponding individual judgments.
Ludwig, Kirk (2007). Collective intentional behavior from the standpoint of semantics. Noûs 41 (3):355–393.   (Cited by 1 | Google | More links)
Abstract: The mutual dependence of men is so great in all societies that scarce any human action is entirely complete in itself, or is performed without some reference to the actions of others, which are requisite to make it answer fully the intention of the agent
Mathiesen, Kay (2006). The epistemic features of group belief. Episteme 2 (3):161-175.   (Google)
Moffatt, Barton & Giere, Ronald N. (2003). Distributed cognition: Where the cognitive and the social merge. Social Studies of Science 33 (2):301-310.   (Google)
Abstract: Among the many contested boundaries in science studies is that between the cognitive and the social. Here, we are concerned to question this boundary from a perspective within the cognitive sciences based on the notion of distributed cognition. We first present two of many contemporary sources of the notion of distributed cognition, one from the study of artificial neural networks and one from cognitive anthropology. We then proceed to reinterpret two well-known essays by Bruno Latour, ‘Visualization and Cognition: Thinking with Eyes and Hands’ and ‘Circulating Reference: Sampling the Soil in the Amazon Forest’. In both cases we find the cognitive and the social merged in a system of distributed cognition without any appeal to agonistic encounters. For us, results do not come to be regarded as veridical because they are widely accepted; they come to be widely accepted because, in the context of an appropriate distributed cognitive system, their apparent veracity can be made evident to anyone with the capacity to understand the workings of the system.
Pettit, Philip (1993). The Common Mind. Oxford University Press.   (Cited by 184 | Google | More links)
Abstract: What makes human beings intentional and thinking subjects? How does their intentionality and thought connect with their social nature and their communal experience? How do the answers to these questions shape the assumptions which it is legitimate to make in social explanation and political evaluation? These are the broad-ranging issues which Pettit addresses in this novel study. The Common Mind argues for an original way of marking off thinking subjects, in particular human beings, from other intentional systems, natural and artificial. It holds by the holistic view that human thought requires communal resources while denying that this social connection compromises the autonomy of individuals. And, in developing the significance of this view of social subjects--this holistic individualism--it outlines a novel framework for social and political theory. Within this framework, social theory is allowed to follow any of a number of paths: space is found for intentional interpretation and decision-theoretic reconstruction, for structural explanation and rational choice derivation. But political theory is treated less ecumenically. The framework raises serious questions about contractarian and atomistic modes of thought and it points the way to a republican rethinking of liberal commitments.
Poirier, Pierre & Chicoisne, Guillaume (2006). A framework for thinking about distributed cognition. Pragmatics and Cognition 14 (2):215-234.   (Cited by 3 | Google | More links)
Rakoczy, Hannes (2008). Pretence as individual and collective intentionality. Mind and Language 23 (5):499-517.   (Google)
Abstract: Focusing on early child pretend play from the perspective of developmental psychology, this article puts forward and presents evidence for two claims. First, such play constitutes an area of remarkable individual second-order intentionality (or 'theory of mind'): in pretence with others, young children grasp the basic intentional structure of pretending as a non-serious fictional form of action. Second, early social pretend play embodies shared or collective we-intentionality. Pretending with others is one of the ontogenetically primary instances of truly cooperative actions. And it is a, perhaps the, primordial form of cooperative action with rudimentary rule-governed, institutional structure: in joint pretence games, children are aware that objects collectively get assigned fictional status, 'count as' something, and that this creates a normative space of warranted moves in the game. Developmentally, pretend play might even be a cradle for institutional phenomena more generally.
Rupert, Robert D. (2005). Minding one's cognitive systems: When does a group of minds constitute a single cognitive unit? Episteme 1 (3):177-188.   (Cited by 2 | Google)
Abstract: The possibility of group minds or group mental states has been considered by a number of authors addressing issues in social epistemology and related areas (Goldman 2004, Pettit 2003, Gilbert 2004, Hutchins 1995). An appeal to group minds might, in the end, do indispensable explanatory work in the social or cognitive sciences. I am skeptical, though, and this essay lays out some of the reasons for my skepticism. The concerns raised herein constitute challenges to the advocates of group minds (or group mental states), challenges that might be overcome as theoretical and empirical work proceeds. Nevertheless, these hurdles are, I think, genuine and substantive, so much so that my tentative conclusion will not be optimistic. If a group mind is supposed to be a single mental system having two or more minds as proper parts, the prospects for group minds seem dim.
Saaristo, Antti (2006). There is no escape from philosophy: Collective intentionality and empirical social science. Philosophy of the Social Sciences 36 (1):40-66.   (Google | More links)
Abstract: This article examines two empirical research traditions—experimental economics and the social identity approach in social psychology—that may be seen as attempts to falsify and verify the theory of collective intentionality, respectively. The article argues that both approaches fail to settle the issue. However, this is not necessarily due to the alleged immaturity of the social sciences but, possibly, to the philosophical nature of intentionality and intentional action. The article shows how broadly Davidsonian action theory, including Hacking’s notion of the looping effect of the human sciences, can be developed into an argument for the view that there is no theory-independent true nature of intentional action. If the Davidsonian line of thought is correct, the theory of collective intentionality is, in a sense, true if we accept the theory.
Schmid, Hans B. (2003). Can brains in vats think as a team? Philosophical Explorations 6 (3):201-218.   (Cited by 3 | Google | More links)
Tollefsen, Deborah (online). Collective intentionality. Internet Encyclopedia of Philosophy.   (Google)
Tollefsen, Deborah Perron (2002). Collective intentionality and the social sciences. Philosophy of the Social Sciences 32 (1).   (Google)
Abstract: In everyday discourse and in the context of social scientific research we often attribute intentional states to groups. Contemporary approaches to group intentionality have either dismissed these attributions as metaphorical or provided an analysis of our attributions in terms of the intentional states of individuals in the group. In section 1, the author argues that these approaches are problematic. In sections 2 and 3, the author defends the view that certain groups are literally intentional agents. In section 4, the author argues that there are significant reasons for social scientists and philosophers of social science to acknowledge the adequacy of macro-level explanations that involve the attribution of intentional states to groups. In section 5, the author considers and responds to some criticisms of the thesis she defends.
Tomasello, Michael & Rakoczy, Hannes (2003). What makes human cognition unique? From individual to shared to collective intentionality. Mind and Language 18 (2):121-147.   (Cited by 54 | Google | More links)
Tuomela, Raimo (online). Collective intentionality and social agents.   (Cited by 4 | Google)
Abstract: In this paper I will discuss a certain philosophical and conceptual program -- that I have called philosophy of social action writ large -- and also show in detail how parts of the program have been, and are currently being, carried out. In current philosophical research the philosophy of social action can be understood in a broad sense to encompass such central research topics as action occurring in a social context (this includes multi-agent action); shared we-attitudes (such as we-intention, mutual belief) and other social attitudes expressing collective intentionality and needed for the explication and explanation of social action; social macro-notions, such as actions performed by social groups and properties of social groups such as their goals and beliefs; social practices, and institutions (see e.g. Tuomela, 1995, 2000a, 2001). The theory of social action understood analogously in a broad sense would then involve not only philosophical but all other relevant theorizing about social action. Thus, in this sense, such fields of Artificial Intelligence (AI) as Distributed AI (DAI) and the theory of Multi-Agent Systems (MAS) fall within the scope of the theory of social action. DAI studies the social side of computer systems and includes various well-known areas ranging from human-computer interaction, computer-supported cooperative work, organizational processing, and distributed problem solving to the simulation of social systems.
Tuomela, Raimo (1996). Philosophy and distributed artificial intelligence: The case of joint intention. In N. Jennings & G. O'Hare (eds.), Foundations of Distributed Artificial Intelligence. Wiley.   (Cited by 16 | Google | More links)
Abstract: In current philosophical research the term 'philosophy of social action' can be used - and has been used - in a broad sense to encompass the following central research topics: 1) action occurring in a social context; this includes multi-agent action; 2) joint attitudes (or "we-attitudes" such as joint intention, mutual belief) and other social attitudes needed for the explication and explanation of social action; 3) social macro-notions, such as actions performed by social groups and properties of social groups such as their goals and beliefs; 4) social norms and social institutions (see Tuomela, 1984, 1995). The theory of social action understood analogously in a broad sense would then involve not only philosophical but all other relevant theorizing about social action. Thus, in this sense, such fields of Artificial Intelligence (AI) as Distributed AI (DAI) and the theory of Multi-Agent Systems (MAS) fall within the scope of the theory of social action. DAI studies the social side of computer systems and includes various well-known areas ranging from Human-Computer Interaction, Computer-Supported Cooperative Work, Organizational Processing, and Distributed Problem Solving to the Simulation of Social Systems and Organizations. Even if I am a philosopher with low artificial intelligence, I will try below to say something about what the scope of DAI should be taken to be on conceptual and philosophical grounds. (In the later sections of the paper the central notion of joint intention will be the main topic - in order to illustrate how philosophers and DAI-researchers approach this issue.) Let us now consider the relationship between philosophy - especially philosophy of social action - and DAI. Both are concerned with social matters and in this sense seem to have a connection to social science proper. What kinds of questions should these areas of study be concerned with? In principle, ordinary social science should study all aspects of social life (in various societies and cultures), try to describe it and create general theories to explain it.
Vromen, Jack J. (2003). Collective intentionality, evolutionary biology and social reality. Philosophical Explorations 6 (3):251-265.   (Google | More links)
Abstract: The paper aims to clarify and scrutinize Searle's somewhat puzzling statement that collective intentionality is a biologically primitive phenomenon. It is argued that the statement is not only meant to bring out that "collective intentionality" is not further analyzable in terms of individual intentionality. It is also meant to convey that we have a biologically evolved innate capacity for collective intentionality. The paper points out that Searle's dedication to a strong notion of collective intentionality considerably delimits the scope of his endeavor. Furthermore, evolutionary theory does not vindicate that an innate capacity for collective intentionality is a necessary precondition for cooperative behavior.
Wilson, Robert A. (2001). Group-level cognition. Philosophy of Science 68 (3, September):S262-S273.   (Cited by 9 | Google | More links)
Zhang, Jiajie & Patel, Vimla L. (2006). Distributed cognition, representation, and affordance. Pragmatics and Cognition 14 (2):333-341.   (Cited by 6 | Google | More links)