This mode searches for entries containing all the entered words in their title, author, date, or comment fields, or in any of the many other fields shown on OPC pages.
This mode searches for entries containing the text string you entered in their author field. Note that the database does not have first names for all authors, so it is preferable to search by surname only. If you search for a full name or a name with an initial, enter it in the format used internally, namely the "Lastname, Firstname" or "Lastname, F." format.
This mode differs from the all fields mode in two respects. First, some information not publicly available on the site is searched, e.g., abstracts and excerpts gathered by the crawler, which are not always accurate but can help broaden one's search. Second, you may prefix any term with a '+' or '-' to narrow the search to entries containing it or not containing it, respectively. Terms not prefixed with a '+' are not mandatory; instead, they are weighted according to their frequency in order to determine the best search results. You may also search for a literal string composed of several words by enclosing it in double quotation marks (").
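For example, a query like the following (the search terms here are hypothetical) would require every result to contain "representation", exclude any result containing "qualia", treat the quoted phrase as a literal string, and weight the unprefixed term "perception" without requiring it:

```
+representation -qualia "neural correlates" perception
```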
Note that short and/or common words are ignored by the search engine.
Try PhilPapers to find published items which are available on a subscription basis.
Abstract: The dynamics of neuronal systems, briefly neurodynamics, has developed into an attractive and influential research branch within neuroscience. In this paper, we discuss a number of conceptual issues in neurodynamics that are important for an appropriate interpretation and evaluation of its results. We demonstrate their relevance for selected topics of theoretical and empirical work. In particular, we refer to the notions of determinacy and stochasticity in neurodynamics across levels of microscopic, mesoscopic and macroscopic descriptions. The issue of correlations between neural, mental and behavioral states is also addressed in some detail. We propose an informed discussion of conceptual foundations with respect to neurobiological results as a viable step toward a fruitful future philosophy of neuroscience.
Abstract: Stable neuronal assemblies are generally regarded as neural correlates of mental representations. Their temporal sequence corresponds to the experience of a direction of time, sometimes called the psychological time arrow. We show that the stability of particular, biophysically motivated models of neuronal assemblies, called coupled map lattices, is supported by causal interactions among neurons and obstructed by non-causal or anti-causal interactions among neurons. This surprising relation between causality and stability suggests that those neuronal assemblies that are stable due to causal neuronal interactions, and thus correlated with mental representations, generate a psychological time arrow. Yet this impact of causal interactions among neurons on the directed sequence of mental representations does not rule out the possibility of mentally less efficacious non-causal or anti-causal interactions among neurons.
Abstract: Philosophers have been talking about brain states for almost 50 years and as of yet no one has articulated a theoretical account of what one is. In fact this issue has received almost no attention, and cognitive scientists still use meaningless phrases like 'C-fiber firing' and 'neuronal activity' when theorizing about the relation of the mind to the brain. To date, when theorists do discuss brain states they usually do so in the context of making some other argument, with the result that any discussion of what brain states are has a distinct en passant flavor. In light of this, it is a goal of mine to make brain states the center of attention by providing some general discussion of them. I briefly look at the argument of Bechtel and Mundale, as I think that they expose a common misconception philosophers had about brain states early on. I then turn to briefly examining Polger's argument, as I think he offers an intuitive account of what we expect brain states to be, as well as a convincing argument against a common candidate for knowledge about brain states that is currently "on the scene." I then introduce a distinction between brain states and states of the brain: particular brain states occur against background states of the brain. I argue that brain states are patterns of synchronous neural firing, which reflect the electrical face of the brain; states of the brain are the gating and modulating of neural activity and reflect the chemical face of the brain.
Abstract: I defend a theory of mental representation that satisfies naturalistic constraints. Briefly, we begin by distinguishing (i) what makes something a representation from (ii) given that a thing is a representation, what determines what it represents. Representations are states of biological organisms, so we should expect a unified theoretical framework for explaining both what it is to be a representation as well as what it is to be a heart or a kidney. I follow Millikan in explaining (i) in terms of teleofunction, explicated in terms of natural selection.
To explain (ii), we begin by recognizing that representational states do not have content, that is, they are neither true nor false except insofar as they both “point to” or “refer” to something, as well as “say” something regarding whatever it is they are about. To distinguish veridical from false representations, there must be a way for these separate aspects to come apart; hence, we explain (ii) by providing independent theories of what I call f-reference and f-predication (the ‘f’ simply connotes ‘fundamental’, to distinguish these things from their natural language counterparts).
Causal theories of representation typically founder on error, or on what Fodor has called the disjunction problem. Resemblance or isomorphism theories typically founder on what I’ve called the non-uniqueness problem, which is that isomorphisms and resemblance are practically unconstrained and so representational content cannot be uniquely determined. These traditional problems provide the motivation for my theory, the structural preservation theory, as follows. F-reference, like reference, is a specific, asymmetric relation, as is causation. F-predication, like predication, is a non-specific relation, as predicates typically apply to many things, just as many relational systems can be isomorphic to any given relational system. Putting these observations together, a promising strategy is to explain f-reference via causal history and f-predication via something like isomorphism between relational systems.
This dissertation should be conceptualized as having three parts. After motivating and characterizing the problem in chapter 1, the first part is the negative project, where I review and critique Dretske’s, Fodor’s, and Millikan’s theories in chapters 2-4. Second, I construct my theory about the nature of representation in chapter 5 and defend it from objections in chapter 6. In chapters 7-8, which constitute the third and final part, I address the question of how representation is implemented in biological systems. In chapter 7 I argue that single-cell intracortical recordings taken from awake Macaque monkeys performing a cognitive task provide empirical evidence for structural preservation theory, and in chapter 8 I use the empirical results to illustrate, clarify, and refine the theory.
Abstract: Questions concerning the nature of representation and what representations are about have been a staple of Western philosophy since Aristotle. Recently, these same questions have begun to concern neuroscientists, who have developed new techniques and theories for understanding how the locus of neurobiological representation, the brain, operates. My dissertation draws on philosophy and neuroscience to develop a novel theory of representational content.
Abstract: The first use of the term “information” to describe the content of nervous impulse occurs in Edgar Adrian's The Basis of Sensation (1928). What concept of information does Adrian appeal to, and how can it be situated in relation to contemporary philosophical accounts of the notion of information in biology? The answer requires an explication of Adrian's use and an evaluation of its situation in relation to contemporary accounts of semantic information. I suggest that Adrian's concept of information can be used to derive a concept of arbitrariness or semioticity in representation. This in turn provides one way of resolving some of the challenges that confront recent attempts in the philosophy of biology to restrict the notion of information to those causal connections that can in some sense be referred to as arbitrary or semiotic.
Abstract: The first use of the term "information" to describe the content of nervous impulse occurs 20 years prior to Shannon's (1948) work, in Edgar Adrian's The Basis of Sensation (1928). Although, at least throughout the 1920s and early 30s, the term "information" does not appear in Adrian's scientific writings to describe the content of nervous impulse, the notion that the structure of nervous impulse constitutes a type of message subject to certain constraints plays an important role in all of his writings throughout the period. The appearance of the concept of information in Adrian's work raises at least two important questions: (i) what were the relevant factors that motivated Adrian's use of the concept of information? (ii) What concept of information does Adrian appeal to, and how can it be situated in relation to contemporary philosophical accounts of the notion of information in biology? The first question involves an account of the application of communications technology in neurobiology as well as the historical and scientific background of Adrian's major scientific achievement, which was the recording of the action potential of a single sensory neuron. The response to the second question involves an explication of Adrian's concept of information and an evaluation of how it may be situated in relation to more contemporary philosophical explications of a semantic concept of information. I suggest that Adrian's concept of information places limitations on the sorts of systems that are referred to as information carriers by causal and functional accounts of information.
Abstract: I argue against a growing radical trend in current theoretical cognitive science that moves from the premises of embedded cognition, embodied cognition, dynamical systems theory and/or situated robotics to conclusions either to the effect that the mind is not in the brain or that cognition does not require representation, or both. I unearth the considerations at the foundation of this view: Haugeland's bandwidth-component argument to the effect that the brain is not a component in cognitive activity, and arguments inspired by dynamical systems theory and situated robotics to the effect that cognitive activity does not involve representations. Both of these strands depend not only on a shift of emphasis from higher cognitive functions to things like sensorimotor processes, but also on a certain understanding of how sensorimotor processes are implemented - as closed-loop control systems. I describe a much more sophisticated model of sensorimotor processing that is not only more powerful and robust than simple closed-loop control, but for which there is strong evidence of implementation in the nervous system. This is the emulation theory of representation, according to which the brain constructs inner dynamical models, or emulators, of the body and environment, which are used in parallel with the body and environment to enhance motor control and perception and to provide faster feedback during motor processes, and which can be run off-line to produce imagery and evaluate sensorimotor counterfactuals. I then show that the emulation framework is immune to the radical arguments, and makes apparent why the brain is a component in cognitive activity, and exactly what the representations are in sensorimotor control.
Abstract: Attentional selection biases the processing of higher visual areas to particular parts of a scene. Recent experiments show how stimulation of neurons in the frontal eye fields can mimic this process.
Abstract: Often, sensory input underdetermines perception. One such example is the perception of illusory contours. In illusory contour perception, the content of the percept includes the presence of a contour that is absent from the informational content of the sensation. (By “sensation” I mean merely information-bearing events at the transducer level. I intend no further commitment such as the identification of sensations with qualia.) I call instances of perception underdetermined by sensation “underdetermined perception.” The perception of illusory contours is just one kind of underdetermined perception. The focus of this chapter is another kind of underdetermined perception: what I shall call "active perception". Active perception occurs in cases in which the percept, while underdetermined by sensation, is determined by a combination of sensation and action. The phenomenon of active perception has been used by several to argue against the positing of representations in explanations of sensory experience, either by arguing that no representations need be posited or that far fewer than previously thought need be posited. Such views include, but are not limited to, those of Gibson (1966, 1986), Churchland
Abstract: We explicate representational content by addressing how representations that explain intelligent behavior might be acquired through processes of Darwinian evolution. We present the results of computer simulations of evolved neural network controllers and discuss the similarity of the simulations to real-world examples of neural network control of animal behavior. We argue that focusing on the simplest cases of evolved intelligent behavior, in both simulated and real organisms, reveals that evolved representations must carry information about the creature’s environments and further can do so only if their neural states are appropriately isomorphic to environmental states. Further, these informational and isomorphism relations are what are tracked by content attributions in folk-psychological and cognitive scientific explanations of these intelligent behaviors.
Abstract: Computation and philosophy intersect three times in this essay. Computation is considered as an object, as a method, and as a model used in a certain line of philosophical inquiry concerning the relation of mind to matter. As object, the question considered is whether computation and related notions of mental representation constitute the best ways to conceive of how physical systems give rise to mental properties. As method and model, the computational techniques of artificial life and embodied evolutionary connectionism are used to conduct prosthetically enhanced thought experiments concerning the evolvability of mental representations. Central to this essay is a discussion of the computer simulation and evolution of three-dimensional synthetic animals with neural network controllers. The minimally cognitive behavior of finding food by exhibiting positive chemotaxis is simulated with swimming and walking creatures. These simulations form the basis of a discussion of the evolutionary and neurocomputational bases of the incremental emergence of more complex forms of cognition. Other related work has been used to attack computational and representational theories of cognition. In contrast, I argue that the proper understanding of the evolutionary emergence of minimally cognitive behaviors is computational and representational through and through.
Abstract: In this paper I discuss one of the key issues in the philosophy of neuroscience: neurosemantics. The project of neurosemantics involves explaining what it means for states of neurons and neural systems to have representational contents. Neurosemantics thus involves issues of common concern between the philosophy of neuroscience and philosophy of mind. I discuss a problem that arises for accounts of representational content that I call "the economy problem": the problem of showing that a candidate theory of mental representation can bear the work required within the causal economy of a mind and an organism. My approach in the current paper is to explore this and other key themes in neurosemantics through the use of computer models of neural networks embodied and evolved in virtual organisms. The models allow for the laying bare of the causal economies of entire yet simple artificial organisms, so that the relations between the neural bases of, for instance, representation in perception and memory can be regarded in the context of an entire organism. On the basis of these simulations, I argue for an account of neurosemantics adequate for the solution of the economy problem.
Abstract: The ability to predict is the most important ability of the brain. Somehow, the cortex is able to extract regularities from the environment and use those regularities as a basis for prediction. This is a most remarkable skill, considering that behaviourally significant environmental regularities are not easy to discern: they operate not only between pairs of simple environmental conditions, as traditional associationism has assumed, but among complex functions of conditions that are orders of complexity removed from raw sensory inputs. We propose that the brain's basic mechanism for discovering such complex regularities is implemented in the dendritic trees of individual pyramidal cells in the cerebral cortex. Pyramidal cells have 5–8 principal dendrites, each of which is capable of learning nonlinear input-to-output transfer functions. We propose that each dendrite is trained, in learning its transfer function, by all the other principal dendrites of the same cell. These dendrites teach each other to respond to their separate inputs with matching outputs. Exposed to different but related information about the sensory environment, principal dendrites of the same cell tune to functions over environmental conditions that, while different, are correlated. As a result, the cell as a whole tunes to the source of the regularities discovered by the cooperating dendrites, creating a new representation. When organized into feed-forward/feedback layers, pyramidal cells can build their discoveries on the discoveries of other cells, gradually uncovering nature's hidden order. The resulting associative network is powerful enough to meet a troubling traditional objection to associationism: that it is too simple an architecture to implement rational processes.