MindPapers is now part of PhilPapers: online research in philosophy, a new service with many more features.
Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.

8.8a. Cognitive Models of Consciousness

See also:
Aleksander, Igor & Morton, Helen (2007). Depictive architectures for synthetic phenomenology. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
Aleksander, Igor L. (2007). Why axiomatic models of being conscious? Journal of Consciousness Studies 14 (7):15-27.   (Google | More links)
Abstract: This paper looks closely at previously enunciated axioms that specifically include phenomenology as the sense of a self in a perceptual world. This, we suggest, is an appropriate way of doing science on a first-person phenomenon. The axioms break consciousness down into five key components: presence, imagination, attention, volition and emotions. The paper examines anew the mechanism of each and how they interact to give a single sensation. An abstract architecture, the Kernel Architecture, is introduced as a starting point for building computational models. The thrust of the paper is to relate the axioms to the kernel architecture and indicate that this opens a way of discussing some first-person issues: tests for consciousness, animal consciousness and Higher Order Thought
Baars, Bernard J. (1988). A Cognitive Theory of Consciousness. Cambridge University Press.   (Cited by 953 | Google | More links)
Abstract: Conscious experience is one of the most difficult and thorny problems in psychological science. Its study has been neglected for many years, either because it was thought to be too difficult, or because the relevant evidence was thought to be poor. Bernard Baars suggests a way to specify empirical constraints on a theory of consciousness by contrasting well-established conscious phenomena - such as stimulus representations known to be attended, perceptual, and informative - with closely comparable unconscious ones - such as stimulus representations known to be preperceptual, unattended, or habituated. Adducing data to show that consciousness is associated with a kind of global workspace in the nervous system, and that several brain structures are known to behave in accordance with his theory, Baars helps to clarify many difficult problems.
Baars, Bernard J.; Ramsoy, Thomas Zoega & Laureys, Steven (2003). Brain, conscious experience, and the observing self. Trends in Neurosciences 26 (12):671-5.   (Cited by 58 | Google | More links)
Baars, Bernard J.; Fehling, M. R.; LaPolla, M. & McGovern, Katharine A. (1997). Consciousness creates access: Conscious goal images recruit unconscious action routines, but goal competition serves to "liberate" such routines, causing predictable slips. In Jonathan D. Cohen & Jonathan W. Schooler (eds.), Scientific Approaches to Consciousness. Lawrence Erlbaum.   (Google)
Baars, Bernard J. (1983). Conscious contents provide the nervous system with coherent, global information. In Richard J. Davidson, Gary E. Schwartz & D. H. Shapiro (eds.), Consciousness and Self-Regulation. Plenum.   (Cited by 31 | Google)
Baars, Bernard J. & McGovern, Katharine A. (1996). Cognitive views of consciousness: What are the facts? How can we explain them? In Max Velmans (ed.), The Science of Consciousness. Routledge.   (Cited by 16 | Google | More links)
Abstract: At this instant you, the reader, are conscious of some aspects of the act of reading --- the color and texture of THIS PAGE, and perhaps the inner sound of THESE WORDS. But you are probably not aware of the touch of your chair at this instant; nor of a certain background taste in your mouth, nor that monotonous background noise, the soft sound of music, or the complex syntactic processes needed to understand THIS PHRASE; nor are you now aware of your feelings about a friend, the fleeting events of several seconds ago, or the multiple meanings of ambiguous words, as in THIS CASE. Even though you are not currently conscious of them, there is good evidence that such unconscious events are actively processed in your brain, every moment you are awake. When we try to understand conscious experience we aim to explain the differences between these two conditions: between the events in your nervous system that you can report, act upon, distinguish, and acknowledge as your own, and a great multitude of sophisticated and intelligent processes which are unconscious, and do not allow these operations.
Baars, Bernard J. (2006). Global workspace theory of consciousness: Toward a cognitive neuroscience of human experience? In Steven Laureys (ed.), Boundaries of Consciousness. Elsevier.   (Cited by 13 | Google)
Baars, Bernard J. (1997). In the Theater of Consciousness: The Workspace of the Mind. Oxford University Press.   (Cited by 2 | Google | More links)
Abstract: The study of conscious experience has seen remarkable strides in the last ten years, reflecting important technological breakthroughs and the enormous efforts of researchers in disciplines as varied as neuroscience, cognitive science, and philosophy. Although still embroiled in debate, scientists are now beginning to find common ground in their understanding of consciousness, which may pave the way for a unified explanation of how and why we experience and understand the world around us. Written by eminent psychologist Bernard J. Baars, In the Theater of Consciousness: The Workspace of the Mind brings us to the frontlines of this exciting discipline, offering the general reader a fascinating overview of how top scientists currently understand the processes underlying conscious experience. Combining psychology with brain science, Baars brilliantly brings his subject to life with a metaphor that has been used to understand consciousness since the time of Aristotle--the mind as theater. Here consciousness is seen as a "stage" on which our sensations, perceptions, thoughts, and feelings play to a vast, silent audience (the immensely complicated inner-workings of the brain's unconscious processes). Behind the scenes, silent context operators shape conscious experience; they include implicit expectations, self systems, and scene setters. Using this framework, Baars presents compelling evidence that human consciousness rides on top of biologically ancient mechanisms. In humans it manifests itself in inner speech, imagery, perception, and voluntary control of thought and action. Topics like hypnosis, absorbed states of mind, adaptation to trauma, and the human propensity to project expectations on uncertainty, all fit into the expanded theater metaphor. As Baars explores our present understanding of the mind, he takes us to the top laboratories around the world, where we witness some of the field's most exciting breakthroughs and discoveries.
(For instance, Baars recounts one extraordinary sequence of experiments, in which state-of-the-art PET scans--reproduced here in full color--capture in fascinating, graphic detail how brain activity changes as people learn how to play the computer game Tetris.) And throughout the book, Baars has sprinkled numerous and often highly amusing on-the-spot demonstrations that illuminate the ideas under discussion. Understanding consciousness is perhaps the most difficult puzzle facing the sciences today. In the Theater of Consciousness offers an invaluable introduction to the field, brilliantly weaving together the various theories that have emerged as scientists continue their quest to uncover the profound mysteries of the mind--and of human nature itself
Baars, Bernard J. (1997). In the theatre of consciousness: Global workspace theory, a rigorous scientific theory of consciousness. Journal of Consciousness Studies 4 (4):292-309.   (Cited by 21 | Google)
Baars, Bernard J. (1998). Metaphors of consciousness and attention in the brain. Trends in Neurosciences 21:58-62.   (Cited by 31 | Google | More links)
Baars, Bernard J. (2002). The conscious access hypothesis: Origins and recent evidence. Trends in Cognitive Sciences 6 (1):47-52.   (Cited by 88 | Google | More links)
Baars, Bernard J. (2007). The global workspace theory of consciousness. In Max Velmans & Susan Schneider (eds.), The Blackwell Companion to Consciousness. Blackwell.   (Cited by 13 | Google)
Barresi, John & Christie, John R. (2002). Consciousness and information processing: A reply to Durgin. Consciousness and Cognition 11 (2):372-374.   (Google)
Abstract: Durgin's (2002) commentary on our article provides us with an opportunity to look more closely at the relationship between information processing and consciousness. In our article we contrasted the information processing approach to interpreting our data, with our own 'scientific' approach to consciousness. However, we should point out that, on our view, information processing as a methodology is not by itself in conflict with the scientific study of consciousness - indeed, we have adopted this very methodology in our experiments, which we purport to use to investigate consciousness. Furthermore, Durgin's own review of the history of research on metacontrast (Lachter & Durgin, 1999) shows that some researchers investigating metacontrast also thought that they were in the business of evaluating the role of consciousness in accounting for their effects. Yet, there is no doubt that metacontrast research is a paradigm case of research generated from an information processing perspective. So, prima facie, investigating consciousness and using information processing methodology are compatible
Bechtel, William P. (1995). Consciousness: Perspectives from symbolic and connectionist AI. Neuropsychologia.   (Cited by 3 | Google | More links)
Browne, C.; Evans, Robert W.; Sales, N. & Aleksander, Igor L. (1997). Consciousness and neural cognizers: A review of some recent approaches. Neural Networks 10:1303-1316.   (Cited by 2 | Google | More links)
Brown, R. A. (1997). Consciousness in a self-learning, memory-controlled, compound machine. Neural Networks 10:1333-85.   (Google | More links)
Burks, Arthur W. (1986). An architectural theory of functional consciousness. In Nicholas Rescher (ed.), Current Issues in Teleology. University Press of America.   (Cited by 1 | Google)
Cabanac, M. (1996). On the origin of consciousness, a postulate, and its corollary. Neuroscience and Biobehavioral Reviews 20:33-40.   (Cited by 12 | Google | More links)
Calvin, William H. (1998). Competing for consciousness: A Darwinian mechanism at an appropriate level of explanation. Journal of Consciousness Studies 5 (4):389-404.   (Google | More links)
Abstract: Treating consciousness as awareness or attention greatly underestimates it, ignoring the temporary levels of organization associated with higher intellectual function (syntax, planning, logic, music). The tasks that require consciousness tend to be the ones that demand a lot of resources. Routine tasks can be handled on the back burner but dealing with ambiguity, groping around offline, generating creative choices, and performing precision movements may temporarily require substantial allocations of neocortex. Here I will attempt to clarify the appropriate levels of explanation (ranging from quantum aspects to association cortex dynamics) and then propose a specific mechanism (consciousness as the current winner of Darwinian copying competitions in cerebral cortex) that seems capable of encompassing the higher intellectual function aspects of consciousness as well as some of the attentional aspects. It includes features such as a coding space appropriate for analogies and a supervisory Darwinian process that can bias the operation of other Darwinian processes
Cam, Philip (1989). Notes toward a faculty theory of cognitive consciousness. In Peter Slezak (ed.), Computers, Brains and Minds. Kluwer.   (Google)
Carr, T. H. (1979). Consciousness in models of human information processing: Primary memory, executive control, and input regulation. In G. Underwood & R. Stevens (eds.), Aspects of Consciousness, Volume 1. Academic Press.   (Cited by 9 | Google)
Cardaci, Maurizio; D'Amico, Antonella & Caci, Barbara (2007). The social cognitive theory: A new framework for implementing artificial consciousness. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
Chang, Fu (online). A theory of consciousness.   (Google | More links)
Claxton, Guy (1996). Structure, strategy and self in the fabrication of conscious experience. Journal of Consciousness Studies 3 (2):98-111.   (Cited by 5 | Google)
Cook, N. D. (1999). Simulating consciousness in a bilateral neural network: ''Nuclear'' and ''fringe'' awareness. Consciousness and Cognition 8 (1):62-93.   (Cited by 10 | Google | More links)
Abstract: A technique for the bilateral activation of neural nets that leads to a functional asymmetry of two simulated ''cerebral hemispheres'' is described. The simulation is designed to perform object recognition, while exhibiting characteristics typical of human consciousness: specifically, the unitary nature of conscious attention, together with a dual awareness corresponding to the ''nucleus'' and ''fringe'' described by William James (1890). Sensory neural nets self-organize on the basis of five sensory features. The system is then taught arbitrary symbolic labels for a small number of similar stimuli. Finally, the trained network is exposed to nonverbal stimuli for object recognition, leading to Gaussian activation of the ''sensory'' maps, with a peak at the location most closely related to the features of the external stimulus. ''Verbal'' maps are activated most strongly at the labeled location that lies closest to the peak on homologous sensory maps. On the verbal maps activation is characterized by both excitatory and inhibitory Gaussians (a Mexican hat), the parameters of which are determined by the relative locations of the verbal labels. Mutual homotopic inhibition across the ''corpus callosum'' then produces functional cerebral asymmetries, i.e., complementary activation of homologous ''association'' and ''frontal'' maps within a common focus of attention: a nucleus in the left hemisphere and a fringe in the right hemisphere. An object is recognized as corresponding to a known label when the total activation of both hemispheres (nucleus plus fringe) is strongest for that label. The functional dualities of the cerebral hemispheres are discussed in light of the nucleus/fringe asymmetry.
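The "Mexican hat" profile Cook's abstract mentions is a standard difference-of-Gaussians: a narrow excitatory Gaussian minus a broader, weaker inhibitory one. A minimal illustrative sketch (all amplitudes and widths are made-up values, not parameters from Cook's simulation):

```python
import math

def mexican_hat(x, center, a_exc=1.0, s_exc=1.0, a_inh=0.5, s_inh=2.0):
    """Difference-of-Gaussians ("Mexican hat") activation profile.
    Amplitudes and widths here are illustrative defaults only."""
    d2 = (x - center) ** 2
    exc = a_exc * math.exp(-d2 / (2 * s_exc ** 2))  # narrow excitatory peak
    inh = a_inh * math.exp(-d2 / (2 * s_inh ** 2))  # broad inhibitory surround
    return exc - inh

# Activation across a 1-D map of unit positions, peaked at the
# location best matching the stimulus (here, position 5).
profile = [mexican_hat(x, center=5) for x in range(11)]
```

Units near the peak are excited while units at intermediate distances are driven negative, which is what lets nearby verbal labels suppress one another in a map of this kind.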
Cotterill, Rodney M. J. (1997). Navigation, consciousness and the body/mind "problem". Psyke and Logos 18:337-341.   (Cited by 2 | Google)
Cotterill, Rodney M. J. (1997). On the mechanism of consciousness. Journal of Consciousness Studies 4 (3):231-48.   (Cited by 15 | Google)
Cotterill, Rodney M. J. (1996). Prediction and internal feedback in conscious perception. Journal of Consciousness Studies 3:245-66.   (Google)
Coward, L. Andrew & Sun, Ron (2004). Criteria for an effective theory of consciousness and some preliminary attempts. Consciousness and Cognition 13 (2):268-301.   (Cited by 1 | Google | More links)
Abstract: In the physical sciences a rigorous theory is a hierarchy of descriptions in which causal relationships between many general types of entity at a phenomenological level can be derived from causal relationships between smaller numbers of simpler entities at more detailed levels. The hierarchy of descriptions resembles the modular hierarchy created in electronic systems in order to be able to modify a complex functionality without excessive side effects. Such a hierarchy would make it possible to establish a rigorous scientific theory of consciousness. The causal relationships implicit in definitions of access consciousness and phenomenal consciousness are made explicit, and the corresponding causal relationships at the more detailed levels of perception, memory, and skill learning described. Extension of these causal relationships to physiological and neural levels is discussed. The general capability of a range of current consciousness models to support a modular hierarchy which could generate these causal relationships is reviewed, and the specific capabilities of two models with good general capabilities are compared in some detail.
Coward, L. Andrew & Sun, Ron (2002). Explaining consciousness at multiple levels. In Serge P. Shohov (ed.), Advances in Psychology Research. Nova Science Publishers.   (Google)
Díaz, José-Luis (1997). A patterned process approach to brain, consciousness, and behavior. Philosophical Psychology 10 (2):179-195.   (Google)
Abstract: The architecture of brain, consciousness, and behavioral processes is shown to be formally similar in that all three may be conceived and depicted as Petri net patterned processes structured by a series of elements occurring or becoming active in stochastic succession, in parallel, with different rhythms of temporal iteration, and with a distinct qualitative manifestation in the spatiotemporal domain. A patterned process theory is derived from the isomorphic features of the models and contrasted with connectionist, dynamic system notions. This empirically derived formulation is considered to be optimally compatible with the dual aspect theory in that the foundation of the diverse aspects would be a highly structured and dynamic process, the psychophysical neutral “ground” of mind and matter posed (but not properly determined) by dual aspect and neutral monist theories. It is methodologically sound to approach each one of these processes with specific tools and to establish concurrences in real time between them at the organismic level of analysis. Such intra-level and inter-perspective correlations could eventually constitute psychophysical bridge-laws. A mature psychology of consciousness is necessary to situate and verify the bridges required by a genuine mind-body science
Dehaene, Stanislas; Kerszberg, Michel & Changeux, Jean-Pierre (1998). A neuronal model of a global workspace in effortful cognitive tasks. PNAS 95 (24):14529-14534.   (Cited by 140 | Google | More links)
Dennett, D. C. & Westbury, C. F. (1999). Stability is not intrinsic. Behavioral and Brain Sciences 22 (1):153-154.   (Google | More links)
Abstract: A pure vehicle theory of the contents of consciousness is not possible. While it is true that hard-wired tacit representations are insufficient as content-vehicles, not all tacit representations are hard-wired. The definition of stability offered for patterns of neural activation is not well-motivated, and too simplistic. We disagree in particular with the assumption that stability within a network is purely intrinsic to that network. Many complex forms of stability within a network are apparent only when interpreted by something external to that network. The requirement for interpretation introduces a necessary functional element into the theory of the contents of consciousness, suggesting that a pure vehicle theory of those contents will not succeed
Dorrell, Philip (ms). Computation vs. feelings and the production/judgment model.   (Google)
Abstract (section headings): Functional versus Subjective Consciousness; The Example of Pain; Dieting and Free Will; The Production/Judgement Model; Judgement is not Reward; Feelings are Judgements; Low-Bandwidth Channels; Candidate Neural Control Channels; Timing of Intention and Action; Conclusion; References.
d'Ydewalle, Géry (2000). The case against a single consciousness center: Much ado about nothing? European Psychologist 5 (1):12-13.   (Google)
Franklin, Stan (online). Action selection and language generation in "conscious" software agents.   (Cited by 4 | Google | More links)
Franklin, Stan & Graesser, Art (1999). A software agent model of consciousness. Consciousness and Cognition 8 (3):285-301.   (Cited by 31 | Google | More links)
Abstract: Baars (1988, 1997) has proposed a psychological theory of consciousness, called global workspace theory. The present study describes a software agent implementation of that theory, called ''Conscious'' Mattie (CMattie). CMattie operates in a clerical domain from within a UNIX operating system, sending messages and interpreting messages in natural language that organize seminars at a university. CMattie fleshes out global workspace theory with a detailed computational model that integrates contemporary architectures in cognitive science and artificial intelligence. Baars (1997) lists the psychological ''facts that any complete theory of consciousness must explain'' in his appendix to In the Theater of Consciousness; global workspace theory was designed to explain these ''facts.'' The present article discusses how the design of CMattie accounts for these facts and thereby the extent to which it implements global workspace theory
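The core cycle of a global workspace architecture of the kind CMattie fleshes out (specialist processors compete, and the winner's content is broadcast to all) can be caricatured in a few lines. This is a toy sketch under loose assumptions; the class names and the keyword-matching "bid" are invented for illustration and are not CMattie's actual mechanisms:

```python
# Toy global-workspace cycle: specialists compete for the workspace,
# and the winner's content is broadcast to every processor.

class Processor:
    def __init__(self, name):
        self.name = name
        self.received = []  # broadcasts seen by this specialist

    def bid(self, stimulus):
        # Illustrative salience measure: how many of this specialist's
        # keywords (taken from its hyphenated name) occur in the stimulus.
        return sum(word in stimulus for word in self.name.split("-"))

    def receive(self, content):
        self.received.append(content)

def workspace_cycle(processors, stimulus):
    # Competition: the most strongly activated specialist wins...
    winner = max(processors, key=lambda p: p.bid(stimulus))
    # ...and its content is broadcast globally, reaching even the losers.
    for p in processors:
        p.receive((winner.name, stimulus))
    return winner

procs = [Processor("seminar-organizer"), Processor("email-parser")]
winner = workspace_cycle(procs, "organize seminar room")
```

The broadcast step is the theory's signature move: unconscious specialists work in parallel, but only the winning content becomes globally available to all of them.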
Franklin, Stan (ms). Conscious software: A computational view of mind.   (Cited by 16 | Google | More links)
Fuentes, Luis J. (2000). Dissociating components in conscious experience. European Psychologist 5 (1):13-15.   (Google)
Gregory, Richard L. (1984). Is consciousness sensational inferences? Perception 13:641-6.   (Google)
Gupta, G. C. (2005). Mathematics and consciousness. Psychological Studies 50 (2):255-258.   (Google)
Hardcastle, Valerie Gray (1995). A critique of information processing theories of consciousness. Minds and Machines 5 (1):89-107.   (Cited by 1 | Google | More links)
Abstract: Information processing theories in psychology give rise to executive theories of consciousness. Roughly speaking, these theories maintain that consciousness is a centralized processor that we use when processing novel or complex stimuli. The computational assumptions driving the executive theories are closely tied to the computer metaphor. However, those who take the metaphor seriously — as I believe psychologists who advocate the executive theories do — end up accepting too particular a notion of a computing device. In this essay, I examine the arguments from theoretical computational considerations that cognitive psychologists use to support their general approach in order to show that they make unwarranted assumptions about the processing attributes of consciousness. I then go on to examine the assumptions behind executive theories which grow out of the computer metaphor of cognitive psychology and conclude that we may not be the sort of computational machine cognitive psychology assumes and that cognitive psychology's approach in itself does not buy us anything in developing theories of consciousness. Hence, the state space in which we may locate consciousness is vast, even within an information processing framework.
Harnad, Stevan (1982). Consciousness: An afterthought. Cognition and Brain Theory 5:29-47.   (Cited by 53 | Google | More links)
Abstract: There are many possible approaches to the mind/brain problem. One of the most prominent, and perhaps the most practical, is to ignore it
Harth, E. (1996). Self-referent mechanisms as the neuronal basis of consciousness. In Stuart R. Hameroff, Alfred W. Kaszniak & A. C. Scott (eds.), Toward a Science of Consciousness. MIT Press.   (Google)
Harth, E. (1993). The Creative Loop: How the Brain Makes a Mind. Addison Wesley.   (Cited by 72 | Google)
Harth, E. (1995). The sketchpad model: A theory of consciousness, perception, and imagery. Consciousness and Cognition 4:346-68.   (Cited by 8 | Google)
Ieshima, Takeshi & Tokosumi, Akifumi (2002). Modularity and hierarchy: A theory of consciousness based on the fractal neural network. In Kunio Yasue, Marj Jibu & Tarcisio Della Senta (eds.), No Matter, Never Mind: Proceedings of Toward a Science of Consciousness: Fundamental Approaches (Tokyo '99). John Benjamins.   (Cited by 1 | Google)
Jackendoff, Ray S. (1987). Consciousness and the Computational Mind. MIT Press.   (Cited by 612 | Annotation | Google | More links)
Johnson-Laird, Philip N. (1983). A computational analysis of consciousness. Cognition and Brain Theory 6:499-508.   (Cited by 30 | Google)
John, E. Roy (1976). A model of consciousness. In Gary E. Schwartz & D. H. Shapiro (eds.), Consciousness and Self-Regulation. Plenum Press.   (Cited by 16 | Google)
Joseph, Michael H. & Joseph, Samuel R. H. (2001). The contents of consciousness: From C to shining C++. Behavioral and Brain Sciences 24 (1):188-189.   (Google | More links)
Abstract: We suggest that consciousness (C) should be addressed as a multilevel concept. We can provisionally identify at least three, rather than two, levels: Gray's system should relate at least to the lowest of these three levels. Although it is unlikely to be possible to develop a behavioural test for C, it is possible to speculate as to the evolutionary advantages offered by C and how C evolved through succeeding levels. Disturbances in the relationships between the levels of C could underlie mental illness, especially schizophrenia
Kawato, M. (1997). Bidirectional theory approach to consciousness. In M. Ito, Y. Miyashita & Edmund T. Rolls (eds.), Cognition, Computation, and Consciousness. Oxford University Press.   (Cited by 9 | Google)
Khromov, Andrei G. (2001). Logical self-reference as a model for conscious experience. Journal of Mathematical Psychology 45 (5):720-731.   (Google | More links)
Lauro-Grotto, R.; Reich, S. & Virasoro, M. A. (1997). The computational role of conscious processing in a model of semantic memory. In M. Ito, Y. Miyashita & Edmund T. Rolls (eds.), Cognition, Computation, and Consciousness. Oxford University Press.   (Cited by 12 | Google)
Lesley, Joan (2006). Awareness is relative: Dissociation as the organisation of meaning. Consciousness and Cognition 15 (3):593-604.   (Google)
Lloyd, Dan (1995). Consciousness: A connectionist manifesto. Minds and Machines 5 (2):161-85.   (Cited by 16 | Google | More links)
Abstract: Connectionism and phenomenology can mutually inform and mutually constrain each other. In this manifesto I outline an approach to consciousness based on distinctions developed by connectionists. Two core identities are central to a connectionist theory of consciousness: conscious states of mind are identical to occurrent activation patterns of processing units; and the variable dispositional strengths on connections between units store latent and unconscious information. Within this broad framework, a connectionist model of consciousness succeeds according to the degree of correspondence between the content of human consciousness (the world as it is experienced) and the interpreted content of the network. Constitutive self-awareness and reflective self-awareness can be captured in a model through its ability to respond to self-reflexive information, identify self-referential categories, and process information in the absence of simultaneous input. The qualitative feel of sensation appears in a model as states of activation that are not fully discriminated by later processing. Connectionism also uniquely explains several specific features of experience. The most important of these is the superposition of information in consciousness — our ability to perceive more than meets the eye, and to apprehend complex categorical and temporal information in a single highly-cognized glance. This superposition in experience matches a superposition of representational content in distributed representations.
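The superposition Lloyd emphasizes (several contents carried at once by one distributed activation pattern) can be illustrated with orthogonal pattern vectors: adding two patterns into a single state leaves each separately detectable by its dot product. A minimal sketch with made-up four-unit vectors:

```python
# Superposition in distributed representations: two patterns stored
# in one activation vector remain separately detectable via dot
# products. The vectors below are arbitrary illustrations.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

cat = [1.0, -1.0, 1.0, -1.0]   # distributed pattern for a category
now = [1.0, 1.0, -1.0, -1.0]   # pattern for a temporal context
dog = [1.0, 1.0, 1.0, 1.0]     # an unrelated pattern

# A single activation state superposes category and context.
state = [c + n for c, n in zip(cat, now)]

# Both superposed patterns "light up"; the absent one does not.
match_cat = dot(state, cat)
match_now = dot(state, now)
match_dog = dot(state, dog)
```

Because the three patterns are mutually orthogonal, the summed state matches `cat` and `now` strongly and `dog` not at all, which is one concrete sense in which a single glance can carry categorical and temporal information at once.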
Lloyd, Dan (1996). Consciousness, connectionism, and cognitive neuroscience: A meeting of the minds. Philosophical Psychology 9 (1):61-78.   (Cited by 8 | Google)
Abstract: Accounting for phenomenal structure—the forms, aspects, and features of conscious experience—poses a deep challenge for the scientific study of consciousness, but rather than abandon hope I propose a way forward. Connectionism, I argue, offers a bi-directional analogy, with its oft-noted “neural inspiration” on the one hand, and its largely unnoticed capacity to illuminate our phenomenology on the other. Specifically, distributed representations in a recurrent network enable networks to superpose categorical, contextual, and temporal information on a specific input representation, much as our own experience does. Artificial neural networks also suggest analogues of four salient distinctions between sensory and nonsensory consciousness. The paper concludes with speculative proposals for discharging the connectionist heuristics to leave a robust, detailed empirical theory of consciousness.
Maia, Tiago V. & Cleeremans, Axel (2005). Consciousness: Converging insights from connectionist modeling and neuroscience. Trends in Cognitive Sciences 9 (8):397-404.   (Cited by 8 | Google | More links)
Mathis, D. W. & Mozer, M. (1995). On the computational utility of consciousness. In G. Tesauro, D. Touretzky & T. Leen (eds.), Advances in Neural Information Processing Systems 7. MIT Press.   (Cited by 13 | Google | More links)
McDermott, Josh (1995). Global workspace theory: Consciousness explained? Harvard Brain 2 (1).   (Google | More links)
Abstract: The subject of consciousness, long shunned by mainstream psychology and the scientific community, has over the last two decades become a legitimate topic of scientific research. One of the most thorough attempts to formulate a theory of consciousness has come from Bernard Baars, a psychologist working at the Wright Institute. Baars proposes that consciousness is the result of a Global Workspace in the brain that distributes information to the huge number of parallel unconscious processors that form the rest of the brain. This paper critiques the central hypothesis of Baars' theory of consciousness
McGovern, Katherine & Baars, Bernard J. (2007). Cognitive theories of consciousness. In Philip David Zelazo, Morris Moscovitch & Evan Thompson (eds.), The Cambridge Handbook of Consciousness. Cambridge.   (Google)
McKee, George (online). The engine of awareness: Autonomous synchronous representations.   (Google | More links)
Abstract: Objections to functional explanations of awareness assert that although functional systems may be adequate to explain behavior, including verbal behavior consisting of assertions of awareness by an individual, they cannot provide for the existence of phenomenal awareness. In this paper, a theory of awareness is proposed that counters this assertion by incorporating two advances: (1) a formal definition of representation, expressed in a functional notation: Newell's Representation Law, and (2) the introduction of real time into the analysis of awareness. This leads to the definition of phenomenal awareness as existing whenever an object contains an autonomously updated configuration satisfying the Representation Law with respect to some aspects of its environment. The relational aspect of the Representation Law permits the development of multiple levels of awareness, which provides for the existence of illusions and hallucinations, and permits the identification of a new measure: accuracy of awareness. The relational perspective also permits the incorporation of referential concepts into the framework. Qualia can then be identified with referentially opaque elements of awareness. The functional form of the Representation Law is linked to neurophysiology and the underlying phenomena of chemistry and physics by phenomena involving activity-dependent connectivity.
Michie, Donald (1994). Consciousness as an engineering issue (parts 1 and 2). Journal of Consciousness Studies 1 (1):192-95.   (Google)
Michie, Donald (1995). Consciousness as an engineering issue, part. Journal of Consciousness Studies 2 (1):52-66.   (Cited by 9 | Google | More links)
Morsella, Ezequiel (2005). The function of phenomenal states: Supramodular interaction theory. Psychological Review 112 (4):1000-1021.   (Google)
Moura, Ivan (2006). A model of agent consciousness and its implementation. Neurocomputing 69 (16-18):1984-1995.   (Google)
Negatu, Aregahegn S. & Franklin, Stan (2002). An action selection mechanism for "conscious" software agents. Cognitive Science Quarterly. Special Issue 2 (3):362-384.   (Cited by 15 | Google)
Norretranders, T. (1991). The User Illusion: Cutting Consciousness Down to Size. Viking Penguin.   (Cited by 4 | Google)
Oatley, Keith (1981). Representing ourselves: Mental schemata, computational metaphors, and the nature of consciousness. In G. Underwood & R. Stevens (eds.), Aspects of Consciousness, Volume 2. Academic Press.   (Cited by 2 | Google)
O'Brien, Gerard & Opie, Jonathan (1999). A connectionist theory of phenomenal experience. Behavioral and Brain Sciences 22 (1):127-48.   (Google | More links)
Abstract: When cognitive scientists apply computational theory to the problem of phenomenal consciousness, as many of them have been doing recently, there are two fundamentally distinct approaches available. Either consciousness is to be explained in terms of the nature of the representational vehicles the brain deploys; or it is to be explained in terms of the computational processes defined over these vehicles. We call versions of these two approaches vehicle and process theories of consciousness, respectively. However, while there may be space for vehicle theories of consciousness in cognitive science, they are relatively rare. This is because of the influence exerted, on the one hand, by a large body of research which purports to show that the explicit representation of information in the brain and conscious experience are dissociable, and on the other, by the classical computational theory of mind – the theory that takes human cognition to be a species of symbol manipulation. But two recent developments in cognitive science combine to suggest that a reappraisal of this situation is in order. First, a number of theorists have recently been highly critical of the experimental methodologies employed in the dissociation studies – so critical, in fact, it’s no longer reasonable to assume that the dissociability of conscious experience and explicit representation has been adequately demonstrated. Second, classicism, as a theory of human cognition, is no longer as dominant in cognitive science as it once was. It now has a lively competitor in the form of connectionism; and connectionism, unlike classicism, does have the computational resources to support a robust vehicle theory of consciousness. In this paper we develop and defend this connectionist vehicle theory of consciousness. It takes the form of the following simple empirical hypothesis: phenomenal experience consists in the explicit representation of information in neurally realized PDP networks. 
This hypothesis leads us to re-assess some common wisdom about consciousness, but, we will argue, in fruitful and ultimately plausible ways
O'Brien, Gerard & Opie, Jonathan (2001). Connectionist vehicles, structural resemblance, and the phenomenal mind. Communication and Cognition (Special Issue) 34 (1-2):13-38.   (Google)
Abstract: We think the best prospect for a naturalistic explanation of phenomenal consciousness is to be found at the confluence of two influential ideas about the mind. The first is the _computational theory of mind_: the theory that treats human cognitive processes as disciplined operations over neurally realised representing vehicles. The second is the _representationalist theory of consciousness_: the theory that takes the phenomenal character of conscious experiences (the “what-it-is-likeness”) to be constituted by their representational content. Together these two theories suggest that phenomenal consciousness might be explicable in terms of the representational content of the neurally realised representing vehicles that are generated and manipulated in the course of cognition. The simplest and most elegant hypothesis that one might entertain in this regard is that conscious experiences are identical to (i.e., are one and the same as) the brain’s representing vehicles
O'Brien, Gerard & Opie, Jonathan (1999). Putting content into a vehicle theory of consciousness. Behavioral and Brain Sciences 22 (1):175-196.   (Cited by 7 | Google | More links)
Abstract: The connectionist vehicle theory of phenomenal experience in the target article identifies consciousness with the brain’s explicit representation of information in the form of stable patterns of neural activity. Commentators raise concerns about both the conceptual and empirical adequacy of this proposal. On the former front they worry about our reliance on vehicles, on representation, on stable patterns of activity, and on our identity claim. On the latter front their concerns range from the general plausibility of a vehicle theory to our specific attempts to deal with the dissociation studies. We address these concerns, and then finish by considering whether the vehicle theory we have defended has a coherent story to tell about the active, unified subject to whom conscious experiences belong
Opie, Jonathan (2000). Consciousness in the loops. Review of Cotterill, Enchanted Looms: Conscious Networks in Brains and Computers. Metascience 9 (2):277-82.   (Google)
Abstract: Consciousness is a pretty sexy topic right now, as the plethora of recent books on the subject demonstrates. Everyone is having a go at it: philosophers, psychologists, neuroscientists and physicists, to mention just a few. And for every discipline or sub-discipline that pretends to some insight on the matter we find not only a different explanatory strategy, but a different take on the explanandum – there is widespread disagreement about _what_ a theory of consciousness should actually explain. However, one thing seems to be agreed by all concerned: consciousness, whatever it is, is deeply mysterious
Parsons, T. (1953). Consciousness and symbolic processes. In H. A. Abramson (ed.), Problems of Consciousness: Transactions of the Fourth Conference. Josiah Macy Foundation.   (Google)
Parsell, Mitch (2005). Review of P. O. Haikonen's The Cognitive Approach to Conscious Machines. Psyche 11 (2).   (Google)
Abstract: Haikonen (2003) is an attempt to explicate a platform for modelling consciousness. The book sets out the foundational concepts behind Haikonen’s work in the area and proposes a particular modelling environment. This is developed in three parts: part 1 offers a brief analysis of the state of play in cognitive modelling; part 2 an extended treatment of the phenomena to be explained; part 3 promises a synthesis of the two preceding discussions to provide the necessary background and detail for the proposed modelling environment. This final part covers a broad range of technical details from the nature of the representational-computational economy instantiated, to the control of motor output, to the means of implementing emotions in artefacts. Haikonen proposes an environment based on a distributed representational economy, instantiated in a neural network architecture and trained using associative learning regimes, but which also has symbolic processing abilities to handle the critical task of generating inner language
Peters, Frederic (2010). Consciousness as Recursive, Spatiotemporal Self Location. Psychological Research.   (Google)
Abstract: At the phenomenal level, consciousness can be described as a singular, unified field of recursive self-awareness, consistently coherent in a particular way; that of a subject located both spatially and temporally in an egocentrically-extended domain, such that conscious self-awareness is explicitly characterized by I-ness, now-ness and here-ness. The psychological mechanism underwriting this spatiotemporal self-locatedness and its recursive processing style involves an evolutionary elaboration of the basic orientative reference frame which consistently structures ongoing spatiotemporal self-location computations as i-here-now. Cognition computes action-output in the midst of ongoing movement, and consequently requires a constant self-locating spatiotemporal reference frame as basis for these computations. Over time, constant evolutionary pressures for energy efficiency have encouraged both the proliferation of anticipative feedforward processing mechanisms, and the elaboration, at the apex of the sensorimotor processing hierarchy, of self-activating, highly attenuated recursively-feedforward circuitry processing the basic orientational schema independent of external action output. As the primary reference frame of active waking cognition, this recursive i-here-now processing generates a zone of subjective self-awareness in terms of which it feels like something to be oneself here and now. This is consciousness.
Phaf, R. H. & Wolters, G. (1997). A constructivist and connectionist view on conscious and nonconscious processes. Philosophical Psychology 10 (3):287-307.   (Cited by 7 | Google | More links)
Abstract: Recent experimental findings reveal dissociations of conscious and nonconscious performance in many fields of psychological research, suggesting that conscious and nonconscious effects result from qualitatively different processes. A connectionist view of these processes is put forward in which consciousness is the consequence of construction processes taking place in three types of working memory in a specific type of recurrent neural network. The recurrences arise by feeding back output to the input of a central (representational) network. They are assumed to be internalizations of motor-sensory feedback through the environment. In this manner, a subvocal-phonological, a visuo-spatial, and a somatosensory working memory may have developed. Representations in the central network, which constitutes long-term memory, can be kept active by rehearsal in the feedback loops. The sequentially recurrent architecture allows for recursive symbolic operations and the formation of (auditory, visual, or somatic) models of the external world which can be maintained, transformed and temporarily combined with other information in working memory. Moreover, the quasi-input from the loop directs subsequent attentional processing. The view may contribute to a formal framework to accommodate findings from disparate fields such as working memory, sequential reasoning, and conscious and nonconscious processes in memory and emotion. In theory, but probably not very soon in practice, such connectionist models might simulate aspects of consciousness
Prinz, Jesse J. (2007). The intermediate level theory of consciousness. In Max Velmans & Susan Schneider (eds.), The Blackwell Companion to Consciousness. Blackwell.   (Cited by 2 | Google)
Restian, A. (1981). Informational analysis of consciousness. International Journal of Neuroscience 13:229-37.   (Google)
Revonsuo, Antti (1993). Cognitive models of consciousness. In Matti Kamppinen (ed.), Consciousness, Cognitive Schemata, and Relativism. Kluwer.   (Cited by 5 | Google)
Roberts, Hugh M. (1968). Consciousness in animals and automata. Psychological Reports 22:1226-28.   (Google)
Rockwell, W. Teed (1997). Global workspace or pandemonium? Journal of Consciousness Studies 4 (4):334-337.   (Google)
Rolls, Edmund T. (1997). Consciousness in neural networks? Neural Networks 10:1227-1303.   (Cited by 8 | Google | More links)
Rosenthal, David M. (1997). Perceptual and cognitive models of consciousness. Journal of the American Psychoanalytic Association 45.   (Cited by 1 | Google)
Sanz, Ricardo; Lopez, Ignacio; Rodriguez, Manuel & Hernandez, Carlos (2007). Principles for consciousness in integrated cognitive control. Neural Networks 20 (9):938-946.   (Google | More links)
Abstract: In this article we will argue that given certain conditions for the evolution of biological controllers, these will necessarily evolve in the direction of incorporating consciousness capabilities. We will also see what the necessary mechanics are for the provision of these capabilities, and extrapolate this vision to the world of artificial systems, postulating seven design principles for conscious systems. This article was published in the journal Neural Networks special issue on brain and consciousness
Schneider, Walter E. & Pimm-Smith, M. (1997). Consciousness as a message-aware control mechanism to modulate cognitive processing. In Jonathan D. Cohen & Jonathan W. Schooler (eds.), Scientific Approaches to Consciousness. Lawrence Erlbaum.   (Cited by 3 | Google)
Schacter, Daniel L. (1989). On the relation between memory and consciousness: Dissociable interactions and conscious experience. In Henry L. Roediger III & Fergus I. M. Craik (eds.), Varieties of Memory and Consciousness.   (Cited by 99 | Google)
Shanahan, Murray (2006). A cognitive architecture that combines internal simulation with a global workspace. Consciousness and Cognition 15 (2):433-449.   (Cited by 2 | Google)
Shanon, Benny (2001). Against the spotlight model of consciousness. New Ideas in Psychology 19 (1):77-84.   (Cited by 2 | Google)
Shallice, T. (1972). Dual functions of consciousness. Psychological Review 79:383-93.   (Cited by 67 | Google)
Shanahan, Murray (2005). Global access, embodiment, and the conscious subject. Journal of Consciousness Studies 12 (12):46-66.   (Cited by 2 | Google)
Shallice, T. (1988). Information-processing models of consciousness: Possibilities and problems. In Anthony J. Marcel & E. Bisiach (eds.), Consciousness in Contemporary Science. Oxford University Press.   (Cited by 36 | Google)
Shallice, T. (1978). The dominant action system: An information-processing approach to consciousness. In K. S. Pope & Jerome L. Singer (eds.), The Stream of Consciousness: Scientific Investigation Into the Flow of Experience. Plenum.   (Cited by 25 | Google)
Sommerhoff, G. & MacDorman, Karl F. (1994). An account of consciousness in physical and functional terms: A target for research in the neurosciences. Integrative Physiological and Behavioral Science 29:151-81.   (Cited by 3 | Google)
Sommerhoff, G. (1996). Consciousness as an internal integrating system. Journal of Consciousness Studies 3:139-57.   (Cited by 2 | Google)
Strehler, B. L. (1989). Monitors: Key mechanisms and roles in the development and aging of the consciousness and self. Mechanisms of Ageing and Development 47:85-132.   (Google | More links)
Sun, Ron (1999). Accounting for the computational basis of consciousness: A connectionist approach. Consciousness and Cognition 8 (4):529-565.   (Cited by 14 | Google | More links)
Abstract: This paper argues for an explanation of the mechanistic (computational) basis of consciousness that is based on the distinction between localist (symbolic) representation and distributed representation, the ideas of which have been put forth in the connectionist literature. A model is developed to substantiate and test this approach. The paper also explores the issue of the functional roles of consciousness, in relation to the proposed mechanistic explanation of consciousness. The model, embodying the representational difference, is able to account for the functional role of consciousness, in the form of the synergy between the conscious and the unconscious. The fit between the model and various cognitive phenomena and data (documented in the psychological literatures) is discussed to accentuate the plausibility of the model and its explanation of consciousness. Comparisons with existing models of consciousness are made in the end
Sun, Ron (2004). Criteria for an effective theory of consciousness and some preliminary attempts. Consciousness and Cognition 13 (2):268-301.   (Google | More links)
Abstract: In the physical sciences a rigorous theory is a hierarchy of descriptions in which causal relationships between many general types of entity at a phenomenological level can be derived from causal relationships between smaller numbers of simpler entities at more detailed levels. The hierarchy of descriptions resembles the modular hierarchy created in electronic systems in order to be able to modify a complex functionality without excessive side effects. Such a hierarchy would make it possible to establish a rigorous scientific theory of consciousness. The causal relationships implicit in definitions of access consciousness and phenomenal consciousness are made explicit, and the corresponding causal relationships at the more detailed levels of perception, memory, and skill learning described. Extension of these causal relationships to physiological and neural levels is discussed. The general capability of a range of current consciousness models to support a modular hierarchy which could generate these causal relationships is reviewed, and the specific capabilities of two models with good general capabilities are compared in some detail
Sun, Ron & Franklin, Stan (2007). Computational models of consciousness: A taxonomy and some examples. In Philip David Zelazo, Morris Moscovitch & Evan Thompson (eds.), The Cambridge Handbook of Consciousness. Cambridge University Press.   (Google)
Sun, Ron (2001). Computation, reduction, and teleology of consciousness. Cognitive Systems Research 1 (1):241-249.   (Cited by 5 | Google | More links)
Abstract: This paper aims to explore mechanistic and teleological explanations of consciousness. In terms of mechanistic explanations, it critiques various existing views, especially those embodied by existing computational cognitive models. In this regard, the paper argues in favor of the explanation based on the distinction between localist (symbolic) representation and distributed representation (as formulated in the connectionist literature), which reduces the phenomenological difference to a mechanistic difference. Furthermore, to establish a teleological explanation of consciousness, the paper discusses the issue of the functional role of consciousness on the basis of the aforementioned mechanistic explanation. A proposal based on synergistic interaction between the conscious and the unconscious is advanced that encompasses various existing views concerning the functional role of consciousness. This two-step deepening explanation has some empirical support, in the form of a cognitive model and various cognitive data that it captures
Sun, Ron (1997). Learning, action, and consciousness: A hybrid approach toward modeling consciousness. Neural Networks 10:1317-33.   (Cited by 45 | Google | More links)
Abstract: …role, especially in learning, and through devising hybrid neural network models that (in a qualitative manner) approximate characteristics of human consciousness. In doing so, the paper examines explicit and implicit learning in a variety of psychological experiments and delineates the conscious/unconscious distinction in terms of the two types of learning and their respective products. The distinctions are captured in a two-level action-based model Clarion. Some fundamental theoretical issues are also clarified with the help of the model. Comparisons with existing models of consciousness…
Sviderskaya, N. E. (1991). Consciousness and information selection. Neuroscience and Behavioral Physiology 21:526-31.   (Google | More links)
Taylor, John G. (1996). A competition for consciousness? Neurocomputing 11:271-96.   (Cited by 9 | Google | More links)
Taylor, Kenneth A. (2001). Applying continuous modelling to consciousness. Journal of Consciousness Studies 8 (2):45-60.   (Cited by 3 | Google)
Taylor, John G. (1998). Constructing the relational mind. Psyche 4 (10).   (Cited by 6 | Google)
Taylor, John G. (ms). Modeling consciousness.   (Google)
Taylor, John G. (1996). Modeling what it is like to be. In Stuart R. Hameroff, Alfred W. Kaszniak & A. C. Scott (eds.), Toward a Science of Consciousness. MIT Press.   (Cited by 3 | Google)
Taylor, John G. & Mueller-Gaertner, H. (1997). Non-invasive analysis of awareness. Neural Networks 10:1185-1194.   (Cited by 4 | Google | More links)
Taylor, John G. (1997). Neural networks for consciousness. Neural Networks 10:1207-27.   (Cited by 24 | Google | More links)
Taylor, John G. (1997). The emergence of mind. Communication and Cognition 30 (3-4):301-343.   (Cited by 3 | Google)
Taylor, John G. (2007). Through machine attention to machine consciousness. In Antonio Chella & Riccardo Manzotti (eds.), Artificial Consciousness. Imprint Academic.   (Google)
Toates, Frederick (2006). A model of the hierarchy of behaviour, cognition, and consciousness. Consciousness and Cognition 15 (1):75-118.   (Cited by 2 | Google)
Tononi, Giulio (2006). Consciousness, information integration and the brain. In Steven Laureys (ed.), Boundaries of Consciousness. Elsevier.   (Cited by 6 | Google)
Tononi, Giulio (2007). The information integration theory of consciousness. In Max Velmans & Susan Schneider (eds.), The Blackwell Companion to Consciousness. Blackwell.   (Cited by 36 | Google | More links)
van Leeuwen, Cees (2007). What needs to emerge to make you conscious? Journal of Consciousness Studies 14 (1):115-136.   (Cited by 1 | Google)
Abstract: Perceptual experience can be explained by contextualized brain dynamics. An inner loop of ongoing activity within the brain produces dynamic patterns of synchronization and de- synchronization that are necessary, but not sufficient, for visual experience. This inner loop is controlled by evolution, development, socialization, learning, task and perception- action contingencies, which constitute an outer loop. This outer loop is sufficient, but not necessary, for visual experience. Jointly, the inner and outer loop may offer sufficient and necessary conditions for the emergence of visual experience. This hypothesis has methodological, empirical, theoretical, and philosophical implications
von der Malsburg, Christoph (1997). The coherence definition of consciousness. In M. Ito, Y. Miyashita & Edmund T. Rolls (eds.), Cognition, Computation, and Consciousness. Oxford University Press.   (Cited by 16 | Google | More links)
Abstract: I will focus in this essay on a riddle that in my view is central to the consciousness issue: How does the mind or brain create the unity we perceive out of the diversity that we know is there? I contend this is a technical issue, not a philosophical one, although its resolution will have profound philosophical repercussions, and although we have at present little more than the philosophical method to attack it
Wallace, Rodrick (ms). A modular network treatment of Baars' global workspace consciousness model.   (Cited by 4 | Google)
Abstract: Network theory provides an alternative to the renormalization and phase transition methods used in Wallace's (2005a) treatment of Baars' Global Workspace model. Like the earlier study, the new analysis produces the workspace itself, the tunable threshold of consciousness, and the essential role for embedding contexts, in an explicitly analytic 'necessary conditions' manner which suffers neither the mereological fallacy inherent to brain-only theories nor the sufficiency indeterminacy of neural network or agent-based simulations. This suggests that the new approach, and the earlier, represent different analytically solvable limits in a broad continuum of possible models, analogous to the differences between bond and site percolation or between the two and many-body limits of classical mechanics. The development significantly extends the theoretical foundations for an empirical general cognitive model (GCM) based on the Shannon-McMillan Theorem. Patterned after the general linear model which reflects the Central Limit Theorem, the proposed technique should be both useful for the reduction of experimental data on consciousness and in the design of devices with capacities which may transcend those of conventional machines and provide new perspectives on the varieties of biological consciousness
Wallace, Rodrick (ms). Entering the blackboard jungle: Canonical dysfunction in conscious machines.   (Google | More links)
Abstract: The central paradigm of Artificial Intelligence is rapidly shifting toward biological models for both robotic devices and systems performing such critical tasks as network management and process control. Here we apply recent mathematical analysis of the necessary conditions for consciousness in humans in an attempt to gain some understanding of the likely canonical failure modes inherent to a broad class of global workspace/blackboard machines designed to emulate biological functions. Similar problems are likely to confront other possible architectures, although their mathematical description may be far less straightforward
Werbos, P. (1997). Optimization: A foundation for understanding consciousness. In D. Levine & W. Elsberry (eds.), Optimality in Biological and Artificial Networks? Lawrence Erlbaum.   (Cited by 6 | Google)
Zeman, J. (1971). Consciousness as information channel. Teorie a Metoda 3:97-100.   (Google)