Compiled by David Chalmers (Editor) & David Bourget (Assistant Editor), Australian National University.

6.5a. Computation and Physical Systems

Bishop, Michael A. (2002). Counterfactuals cannot count: A rejoinder to David Chalmers. Consciousness and Cognition 11:642-52.   (Cited by 1 | Google | More links)
Boucher, Andrew (1997). Parallel machines. Minds and Machines 7 (4):543-551.   (Cited by 3 | Google | More links)
Abstract:   Because it is time-dependent, parallel computation is fundamentally different from sequential computation. Parallel programs are non-deterministic and are not effective procedures. Given that the brain operates in parallel, this casts doubt on AI's attempt to make sequential computers intelligent
Boyle, C. F. (1994). Computation as an intrinsic property. Minds and Machines 4 (4):451-67.   (Cited by 2 | Google | More links)
Abstract:   In an effort to uncover fundamental differences between computers and brains, this paper identifies computation with a particular kind of physical process, in contrast to interpreting the behaviors of physical systems as one or more abstract computations. That is, whether or not a system is computing depends on how those aspects of the system we consider to be informational physically cause change rather than on our capacity to describe its behaviors in computational terms. A physical framework based on the notion of causal mechanism is used to distinguish different kinds of information processing in a physically-principled way; each information processing type is associated with a particular causal mechanism. The causal mechanism associated with computation is pattern matching, which is physically defined as the fitting of physical structures such that they cause a simple change. It is argued that information processing in the brain is based on a causal mechanism different than pattern matching so defined, implying that brains do not compute, at least not in the physical sense that digital computers do. This causal difference may also mean that computers cannot have mental states
Brown, Curtis (2004). Implementation and indeterminacy. Conferences in Research and Practice in Information Technology 37.   (Google | More links)
Abstract: David Chalmers has defended an account of what it is for a physical system to implement a computation. The account appeals to the idea of a “combinatorial-state automaton” or CSA. It is unclear whether Chalmers intends the CSA to be a computational model in the usual sense, or merely a convenient formalism into which instances of other models can be translated. I argue that the CSA is not a computational model in the usual sense because CSAs do not perspicuously represent algorithms, are too powerful both in that they can perform any computation in a single step and in that without so far unspecified restrictions they can “compute” the uncomputable, and are too loosely related to physical implementations
Broderick, Paul Bohan (2004). On communication and computation. Minds and Machines 14 (1).   (Cited by 4 | Google | More links)
Abstract: Comparing technical notions of communication and computation leads to a surprising result: these notions are often not conceptually distinguishable. This paper will show how the two notions may fail to be clearly distinguished from each other. The most famous models of computation and communication, Turing Machines and (Shannon-style) information sources, are considered. The most significant difference lies in the types of state-transitions allowed in each sort of model. This difference does not correspond to the difference that would be expected after considering the ordinary usage of these terms. However, the natural usages of these terms are surprisingly difficult to distinguish from each other. The two notions may be kept distinct if computation is limited to actions within a system and communication is an interaction between a system and its environment. Unfortunately, this decision requires giving up much of the nuance associated with natural language versions of these important terms
Chalmers, David J. (1996). Does a rock implement every finite-state automaton? Synthese 108 (3):309-33.   (Cited by 38 | Annotation | Google | More links)
Abstract:   Hilary Putnam has argued that computational functionalism cannot serve as a foundation for the study of the mind, as every ordinary open physical system implements every finite-state automaton. I argue that Putnam's argument fails, but that it points out the need for a better understanding of the bridge between the theory of computation and the theory of physical systems: the relation of implementation. It also raises questions about the class of automata that can serve as a basis for understanding the mind. I develop an account of implementation, linked to an appropriate class of automata, such that the requirement that a system implement a given automaton places a very strong constraint on the system. This clears the way for computation to play a central role in the analysis of mind
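The triviality worry at issue can be made concrete with a short sketch. The Python fragment below (an illustration only; the toy automaton and all names are mine, not Putnam's or Chalmers's) pairs the successive states of an arbitrary physical run with the state trajectory of an inputless finite-state automaton, which is roughly the kind of gerrymandered mapping a counterfactual-sensitive implementation condition is meant to rule out.

    # Illustrative sketch of the "trivial mapping" worry (hypothetical example).
    # Any run of distinct physical states can be paired with the state
    # trajectory of an arbitrary inputless finite-state automaton.

    def fsa_trajectory(transitions, start, steps):
        """Unfold an inputless FSA for a fixed number of steps."""
        state, traj = start, [start]
        for _ in range(steps):
            state = transitions[state]
            traj.append(state)
        return traj

    # An arbitrary three-state automaton: A -> B -> C -> A -> ...
    transitions = {"A": "B", "B": "C", "C": "A"}

    # "Physical states" of a rock at successive times t0, t1, ... (all distinct).
    physical_run = [("rock", t) for t in range(7)]

    # Gerrymandered mapping: the i-th physical state is assigned to the i-th FSA state.
    fsa_run = fsa_trajectory(transitions, "A", len(physical_run) - 1)
    mapping = dict(zip(physical_run, fsa_run))
    for phys, comp in mapping.items():
        print(phys, "->", comp)

    # The actual run "mirrors" the automaton, but only for this one trajectory;
    # a counterfactual/causal condition asks what would happen on runs that did
    # not occur, which this construction has no resources to answer.
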
Chalmers, David J. (1994). On implementing a computation. Minds and Machines 4 (4):391-402.   (Cited by 26 | Annotation | Google | More links)
Abstract:   To clarify the notion of computation and its role in cognitive science, we need an account of implementation, the nexus between abstract computations and physical systems. I provide such an account, based on the idea that a physical system implements a computation if the causal structure of the system mirrors the formal structure of the computation. The account is developed for the class of combinatorial-state automata, but is sufficiently general to cover all other discrete computational formalisms. The implementation relation is non-vacuous, so that criticisms by Searle and others fail. This account of computation can be extended to justify the foundational role of computation in artificial intelligence and cognitive science
Chrisley, Ronald L. (1994). The ontological status of computational states. In European Review of Philosophy, Volume 1: Philosophy of Mind. Stanford: CSLI Publications.   (Google)
Chrisley, Ronald L. (1994). Why everything doesn't realize every computation. Minds and Machines 4 (4):403-20.   (Cited by 14 | Google | More links)
Abstract:   Some have suggested that there is no fact of the matter as to whether or not a particular physical system realizes a particular computational description. This suggestion has been taken to imply that computational states are not real, and cannot, for example, provide a foundation for the cognitive sciences. In particular, Putnam has argued that every ordinary open physical system realizes every abstract finite automaton, implying that the fact that a particular computational characterization applies to a physical system does not tell one anything about the nature of that system. Putnam's argument is scrutinized, and found inadequate because, among other things, it employs a notion of causation that is too weak. I argue that if one's view of computation involves embeddedness (inputs and outputs) and full causality, one can avoid the universal realizability results. Therefore, the fact that a particular system realizes a particular automaton is not a vacuous one, and is often explanatory. Furthermore, I claim that computation would not necessarily be an explanatorily vacuous notion even if it were universally realizable
Cleland, Carol E. (1995). Effective procedures and computable functions. Minds and Machines 5 (1):9-23.   (Cited by 23 | Google | More links)
Abstract:   Horsten and Roelants have raised a number of important questions about my analysis of effective procedures and my evaluation of the Church-Turing thesis. They suggest that, on my account, effective procedures cannot enter the mathematical world because they have a built-in component of causality, and, hence, that my arguments against the Church-Turing thesis miss the mark. Unfortunately, however, their reasoning is based upon a number of misunderstandings. Effective mundane procedures do not, on my view, provide an analysis of our general concept of an effective procedure; mundane procedures and Turing machine procedures are different kinds of procedure. Moreover, the same sequence of particular physical actions can realize both a mundane procedure and a Turing machine procedure; it is sequences of particular physical actions, not mundane procedures, which enter the world of mathematics. I conclude by discussing whether genuinely continuous physical processes can enter the world of real numbers and compute real-valued functions. I argue that the same kind of correspondence assumptions that are made between non-numerical structures and the natural numbers, in the case of Turing machines and personal computers, can be made in the case of genuinely continuous, physical processes and the real numbers
Cleland, Carol E. (1993). Is the church-Turing thesis true? Minds and Machines 3 (3):283-312.   (Cited by 58 | Annotation | Google | More links)
Abstract:   The Church-Turing thesis makes a bold claim about the theoretical limits to computation. It is based upon independent analyses of the general notion of an effective procedure proposed by Alan Turing and Alonzo Church in the 1930s. As originally construed, the thesis applied only to the number theoretic functions; it amounted to the claim that there were no number theoretic functions which couldn't be computed by a Turing machine but could be computed by means of some other kind of effective procedure. Since that time, however, other interpretations of the thesis have appeared in the literature. In this paper I identify three domains of application which have been claimed for the thesis: (1) the number theoretic functions; (2) all functions; (3) mental and/or physical phenomena. Subsequently, I provide an analysis of our intuitive concept of a procedure which, unlike Turing's, is based upon ordinary, everyday procedures such as recipes, directions and methods; I call them mundane procedures. I argue that mundane procedures can be said to be effective in the same sense in which Turing machine procedures can be said to be effective. I also argue that mundane procedures differ from Turing machine procedures in a fundamental way, viz., the former, but not the latter, generate causal processes. I apply my analysis to all three of the above mentioned interpretations of the Church-Turing thesis, arguing that the thesis is (i) clearly false under interpretation (3), (ii) false in at least some possible worlds (perhaps even in the actual world) under interpretation (2), and (iii) very much open to question under interpretation (1)
Cleland, Carol E. (2001). Recipes, algorithms, and programs. Minds and Machines 11 (2):219-237.   (Cited by 11 | Google | More links)
Abstract:   In the technical literature of computer science, the concept of an effective procedure is closely associated with the notion of an instruction that precisely specifies an action. Turing machine instructions are held up as providing paragons of instructions that "precisely describe" or "well define" the actions they prescribe. Numerical algorithms and computer programs are judged effective just insofar as they are thought to be translatable into Turing machine programs. Nontechnical procedures (e.g., recipes, methods) are summarily dismissed as ineffective on the grounds that their instructions lack the requisite precision. But despite the pivotal role played by the notion of a precisely specified instruction in classifying procedures as effective and ineffective, little attention has been paid to the manner in which instructions "precisely specify" the actions they prescribe. It is the purpose of this paper to remedy this defect. The results are startling. The reputed exemplary precision of Turing machine instructions turns out to be a myth. Indeed, the most precise specifications of action are provided not by the procedures of theoretical computer science and mathematics (algorithms) but rather by the nontechnical procedures of everyday life. I close with a discussion of some of the ramifications of these conclusions for understanding and designing concrete computers and their programming languages
Cocos, Cristian (2002). Computational processes: A reply to Chalmers and Copeland. Sats 3 (2):25-49.   (Google | More links)
Copeland, Jack (1999). Beyond the universal Turing machine. Australasian Journal of Philosophy 77 (1):46-67.   (Cited by 44 | Google | More links)
Abstract: We describe an emerging field, that of nonclassical computability and nonclassical computing machinery. According to the nonclassicist, the set of well-defined computations is not exhausted by the computations that can be carried out by a Turing machine. We provide an overview of the field and a philosophical defence of its foundations
Copeland, Jack (1998). Super Turing-machines. Complexity 4:30-32.   (Cited by 31 | Google | More links)
Abstract: The tape is divided into squares, each square bearing a single symbol—'0' or '1', for example. This tape is the machine's general-purpose storage medium: the machine is set in motion with its input inscribed on the tape, output is written onto the tape by the head, and the tape serves as a short-term working memory for the results of intermediate steps of the computation. The program governing the particular computation that the machine is to perform is also stored on the tape. A small, fixed program that is 'hard-wired' into the head enables the head to read and execute the instructions of whatever program is on the tape. The machine's atomic operations are very simple—for example, 'move left one square', 'move right one square', 'identify the symbol currently beneath the head', 'write 1 on the square that is beneath the head', and 'write 0 on the square that is beneath the head'. Complexity of operation is achieved by the chaining together of large numbers of these simple atoms. Any universal Turing machine can be programmed to carry out any calculation that can be performed by a human mathematician working with paper and pencil in accordance with some algorithmic method. This is what is meant by calling these machines 'universal'
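To make the description above concrete, here is a minimal Turing machine simulator (a sketch under my own simplifying conventions, not Copeland's formalism): the tape is a sparse map of squares, and the machine's atomic operations are exactly those listed, namely reading the scanned square, writing a symbol, and moving one square left or right.

    # Minimal Turing machine simulator (illustrative sketch).
    # A program is a table: (state, symbol) -> (symbol_to_write, move, next_state).

    def run_tm(program, tape_input, start_state="q0", halt_state="halt", blank="_"):
        tape = {i: s for i, s in enumerate(tape_input)}   # the tape's squares
        head, state = 0, start_state
        while state != halt_state:
            symbol = tape.get(head, blank)                 # read the scanned square
            write, move, state = program[(state, symbol)]
            tape[head] = write                             # write a symbol
            head += 1 if move == "R" else -1               # move one square
        return "".join(tape[i] for i in sorted(tape))

    # Example program: flip every bit of the input, halt on the first blank.
    flip = {
        ("q0", "1"): ("0", "R", "q0"),
        ("q0", "0"): ("1", "R", "q0"),
        ("q0", "_"): ("_", "R", "halt"),
    }
    print(run_tm(flip, "1011"))   # prints "0100_"
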
Copeland, Jack (1997). The broad conception of computation. American Behavioral Scientist 40 (6):690-716.   (Cited by 56 | Google | More links)
Abstract: A myth has arisen concerning Turing's paper of 1936, namely that Turing set forth a fundamental principle concerning the limits of what can be computed by machine - a myth that has passed into cognitive science and the philosophy of mind, to wide and pernicious effect. This supposed principle, sometimes incorrectly termed the 'Church-Turing thesis', is the claim that the class of functions that can be computed by machines is identical to the class of functions that can be computed by Turing machines. In point of fact Turing himself nowhere endorses, nor even states, this claim (nor does Church). I describe a number of notional machines, both analogue and digital, that can compute more than a universal Turing machine. These machines are exemplars of the class of nonclassical computing machines. Nothing known at present rules out the possibility that machines in this class will one day be built, nor that the brain itself is such a machine. These theoretical considerations undercut a number of foundational arguments that are commonly rehearsed in cognitive science, and gesture towards a new class of cognitive models
Copeland, B. Jack (1996). What is computation? Synthese 108 (3):335-59.   (Cited by 41 | Google | More links)
Abstract:   To compute is to execute an algorithm. More precisely, to say that a device or organ computes is to say that there exists a modelling relationship of a certain kind between it and a formal specification of an algorithm and supporting architecture. The key issue is to delimit the phrase 'of a certain kind'. I call this the problem of distinguishing between standard and nonstandard models of computation. The successful drawing of this distinction guards Turing's 1936 analysis of computation against a difficulty that has persistently been raised against it, and undercuts various objections that have been made to the computational theory of mind
Endicott, Ronald P. (1996). Searle, syntax, and observer-relativity. Canadian Journal of Philosophy 26 (1):101-22.   (Cited by 3 | Google)
Abstract: I critically examine some provocative arguments that John Searle presents in his book The Rediscovery of Mind to support the claim that the syntactic states of a classical computational system are "observer relative" or "mind dependent" or otherwise less than fully and objectively real. I begin by explaining how this claim differs from Searle's earlier and more well-known claim that the physical states of a machine, including the syntactic states, are insufficient to determine its semantics. In contrast, his more recent claim concerns the syntax, in particular, whether a machine actually has symbols to underlie its semantics. I then present and respond to a number of arguments that Searle offers to support this claim, including whether machine symbols are observer relative because the assignment of syntax is arbitrary, or linked to universal realizability, or linked to the sub-personal interpretive acts of a homunculus, or linked to a person's consciousness. I conclude that a realist about the computational model need not be troubled by such arguments. Their key premises need further support.
Foster, C. (1990). Algorithms, Abstraction and Implementation. Academic Press.   (Cited by 13 | Annotation | Google)
Goel, Vinod (1991). Notationality and the information processing mind. Minds and Machines 1 (2):129-166.   (Cited by 14 | Annotation | Google | More links)
Abstract:   Cognitive science uses the notion of computational information processing to explain cognitive information processing. Some philosophers have argued that anything can be described as doing computational information processing; if so, it is a vacuous notion for explanatory purposes. An attempt is made to explicate the notions of cognitive information processing and computational information processing and to specify the relationship between them. It is demonstrated that the resulting notion of computational information processing can only be realized in a restrictive class of dynamical systems called physical notational systems (after Goodman's theory of notationality), and that the systems generally appealed to by cognitive science - physical symbol systems - are indeed such systems. Furthermore, it turns out that other alternative conceptions of computational information processing, Fodor's (1975) Language of Thought and Cummins' (1989) Interpretational Semantics, appeal to substantially the same restrictive class of systems
Hardcastle, Valerie Gray (1995). Computationalism. Synthese 105 (3):303-17.   (Cited by 8 | Annotation | Google | More links)
Abstract:   What counts as a computation and how it relates to cognitive function are important questions for scientists interested in understanding how the mind thinks. This paper argues that pragmatic aspects of explanation ultimately determine how we answer those questions by examining what is needed to make rigorous the notion of computation used in the (cognitive) sciences. It (1) outlines the connection between the Church-Turing Thesis and computational theories of physical systems, (2) differentiates merely satisfying a computational function from true computation, and finally (3) relates how we determine a true computation to the functional methodology in cognitive science. All of the discussion will be directed toward showing that the only way to connect formal notions of computation to empirical theory will be in virtue of the pragmatic aspects of explanation
Haugeland, John (2003). Syntax, semantics, physics. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Cited by 8 | Google)
Horsten, Leon (1995). The church-Turing thesis and effective mundane procedures. Minds and Machines 5 (1):1-8.   (Cited by 4 | Google | More links)
Abstract:   We critically discuss Cleland's analysis of effective procedures as mundane effective procedures. She argues that Turing machines cannot carry out mundane procedures, since Turing machines are abstract entities and therefore cannot generate the causal processes that are generated by mundane procedures. We argue that if Turing machines cannot enter the physical world, then it is hard to see how Cleland's mundane procedures can enter the world of numbers. Hence her arguments against versions of the Church-Turing thesis for number theoretic functions miss the mark
Ingarden, Roman Stanisław (2002). Open systems and consciousness: A philosophical discussion. Open Systems and Information Dynamics 9:125-151.   (Google | More links)
Kentridge, Robert W. (1995). Symbols, neurons, soap-bubbles and the neural computation underlying cognition. Minds and Machines 4 (4).   (Cited by 3 | Google | More links)
Abstract: A wide range of systems appear to perform computation: what common features do they share? I consider three examples, a digital computer, a neural network and an analogue route finding system based on soap-bubbles. The common feature of these systems is that they have autonomous dynamics — their states will change over time without additional external influence. We can take advantage of these dynamics if we understand them well enough to map a problem we want to solve onto them. Programming consists of arranging the starting state of a system so that the effects of the system's dynamics on some of its variables correspond to the effects of the equations which describe the problem to be solved on their variables. The measured dynamics of a system, and hence the computation it may be performing, depend on the variables of the system we choose to attend to. Although we cannot determine which are the appropriate variables to measure in a system whose computation basis is unknown to us, I go on to discuss how grammatical classifications of computational tasks and symbolic machine reconstruction techniques may allow us to rule out some measurements of a system from contributing to computation of particular tasks. Finally I suggest that these arguments and techniques imply that symbolic descriptions of the computation underlying cognition should be stochastic and that symbols in these descriptions may not be atomic but may have contents in alternative descriptions
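The idea that programming such a system amounts to arranging its starting state can be illustrated with a toy example (mine, not Kentridge's): the autonomous dynamics dx/dt = a - x^2 settles on the square root of a, so fixing the parameter and initial state and then simply letting the system run carries out the computation.

    # Illustrative sketch: "programming" an autonomous dynamical system.
    # dx/dt = a - x**2 has a stable fixed point at sqrt(a) for x > 0, so the
    # system's own dynamics, once started appropriately, computes a square root.

    def settle(a, x0=1.0, dt=0.01, steps=5000):
        x = x0
        for _ in range(steps):
            x += dt * (a - x * x)   # let the dynamics run without intervention
        return x

    print(settle(2.0))   # ~1.41421, i.e. sqrt(2), read off the attended variable
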
Klein, Colin (web). Dispositional implementation solves the superfluous structure problem. Synthese 165 (1).   (Google | More links)
Abstract: Consciousness supervenes on activity; computation supervenes on structure. Because of this, some argue, conscious states cannot supervene on computational ones. If true, this would present serious difficulties for computationalist analyses of consciousness (or, indeed, of any domain with properties that supervene on actual activity). I argue that the computationalist can avoid the Superfluous Structure Problem by moving to a dispositional theory of implementation. On a dispositional theory, the activity of computation depends entirely on changes in the intrinsic properties of implementing material. As extraneous structure is not required for computation, a system can implement a program running on some but not all possible inputs. Dispositional computationalism thus permits episodes of computational activity that correspond to potential episodes of conscious awareness. The Superfluous Structure Problem cannot be motivated against this account, and so computationalism may be preserved
Klein, Colin (ms). Maudlin on computation.   (Cited by 1 | Google)
Abstract: I argue that computationalism is compatible with a plausible supervenience thesis about conscious states. The most plausible way of making it compatible, however, involves abandoning counterfactual conditions on implementation
MacLennan, Bruce J. (1993). Grounding analog computers. [Journal (Paginated)] 2:8-51.   (Cited by 16 | Google | More links)
Abstract: In this commentary on Harnad's "Grounding Symbols in the Analog World with Neural Nets: A Hybrid Model," the issues of symbol grounding and analog (continuous) computation are separated, and it is argued that symbol grounding is as important an issue for analog cognitive models as for digital (discrete) models. The similarities and differences between continuous and discrete computation are discussed, as well as the grounding of continuous representations. A continuous analog of the Chinese Room is presented
MacLennan, B. (2003). Transcending Turing computability. Minds and Machines 13 (1):3-22.   (Cited by 12 | Google | More links)
Abstract:   It has been argued that neural networks and other forms of analog computation may transcend the limits of Turing-machine computation; proofs have been offered on both sides, subject to differing assumptions. In this article I argue that the important comparisons between the two models of computation are not so much mathematical as epistemological. The Turing-machine model makes assumptions about information representation and processing that are badly matched to the realities of natural computation (information representation and processing in or inspired by natural systems). This points to the need for new models of computation addressing issues orthogonal to those that have occupied the traditional theory of computation
Mallah, Jacques (ms). The many computations interpretation (MCI) of quantum mechanics.   (Google | More links)
Abstract: Computationalism provides a framework for understanding how a mathematically describable physical world could give rise to conscious observations without the need for dualism. A criterion is proposed for the implementation of computations by physical systems, which has been a problem for computationalism. Together with an independence criterion for implementations this would allow, in principle, prediction of probabilities for various observations based on counting implementations. Applied to quantum mechanics, this results in a Many Computations Interpretation (MCI), which is an explicit form of the Everett style Many Worlds Interpretation (MWI). Derivation of the Born Rule emerges as the central problem for most realist interpretations of quantum mechanics. If the Born Rule is derived based on computationalism and the wavefunction it would provide strong support for the MWI; but if the Born Rule is shown not to follow from these to an experimentally falsified extent, it would indicate the necessity for either new physics or (more radically) new philosophy of mind.
Miłkowski, Marcin (2009). Is Evolution Algorithmic? Minds and Machines 19 (4):465-475.   (Google | More links)
Abstract: In Darwin’s Dangerous Idea, Daniel Dennett claims that evolution is algorithmic. On Dennett’s analysis, evolutionary processes are trivially algorithmic because he assumes that all natural processes are algorithmic. I will argue that there are more robust ways to understand algorithmic processes that make the claim that evolution is algorithmic empirical and not conceptual. While laws of nature can be seen as compression algorithms of information about the world, it does not follow logically that they are implemented as algorithms by physical processes. For that to be true, the processes have to be part of computational systems. The basic difference between mere simulation and real computing is having proper causal structure. I will show what kind of requirements this poses for natural evolutionary processes if they are to be computational.
Miłkowski, Marcin (2007). Is computationalism trivial? In Gordana Dodig Crnkovic & Susan Stuart (eds.), Computation, Information, Cognition: The Nexus and the Liminal. Cambridge Scholars Press.   (Google)
Abstract: In this paper, I want to deal with the triviality threat to computationalism. On one hand, the controversial and vague claim that cognition involves computation is still denied. On the other, contemporary physicists and philosophers alike claim that all physical processes are indeed computational or algorithmic. This claim would justify the computationalism claim by making it utterly trivial. I will show that even if these two claims were true, computationalism would not have to be trivial.
Miscevic, Nenad (1996). Computationalism and the Kripke-Wittgenstein paradox. Proceedings of the Aristotelian Society 96:215-29.   (Cited by 1 | Google)
Piccinini, Gualtiero (2007). Computing mechanisms. Philosophy of Science 74 (4):501-526.   (Cited by 3 | Google | More links)
Abstract: This paper offers an account of what it is for a physical system to be a computing mechanism—a system that performs computations. A computing mechanism is a mechanism whose function is to generate output strings from input strings and (possibly) internal states, in accordance with a general rule that applies to all relevant strings and depends on the input strings and (possibly) internal states for its application. This account is motivated by reasons endogenous to the philosophy of computing, namely, doing justice to the practices of computer scientists and computability theorists. It is also an application of recent literature on mechanisms, because it assimilates computational explanation to mechanistic explanation. The account can be used to individuate computing mechanisms and the functions they compute and to taxonomize computing mechanisms based on their computing power.
Piccinini, Gualtiero (2008). Computation without representation. Philosophical Studies 137 (2):205-241.   (Cited by 7 | Google | More links)
Abstract: The received view is that computational states are individuated at least in part by their semantic properties. I offer an alternative, according to which computational states are individuated by their functional properties. Functional properties are specified by a mechanistic explanation without appealing to any semantic properties. The primary purpose of this paper is to formulate the alternative view of computational individuation, point out that it supports a robust notion of computational explanation, and defend it on the grounds of how computational states are individuated within computability theory and computer science. A secondary purpose is to show that existing arguments for the semantic view are defective.
Schweizer, Paul (2002). Consciousness and computation. Minds and Machines 12 (1).   (Google | More links)
Scheutz, Matthias (1998). Implementation: Computationalism's weak spot. Conceptus JG 31 (79):229-239.   (Cited by 3 | Google)
Scheutz, Matthias (1999). When physical systems realize functions. Minds and Machines 9 (2):161-196.   (Cited by 20 | Google | More links)
Abstract:   After briefly discussing the relevance of the notions of computation and implementation for cognitive science, I summarize some of the problems that have been found in their most common interpretations. In particular, I argue that standard notions of computation together with a state-to-state correspondence view of implementation cannot overcome difficulties posed by Putnam's Realization Theorem and that, therefore, a different approach to implementation is required. The notion of the realization of a function, developed out of physical theories, is then introduced as a replacement for the notional pair computation-implementation. After gradual refinement, taking practical constraints into account, this notion gives rise to the notion of a digital system, which singles out physical systems that could be actually used, and possibly even built
Searle, John R. (1990). Is the brain a digital computer? Proceedings and Addresses of the American Philosophical Association 64 (November):21-37.   (Cited by 86 | Annotation | Google | More links)
Abstract: There are different ways to present a Presidential Address to the APA; the one I have chosen is simply to report on work that I am doing right now, on work in progress. I am going to present some of my further explorations into the computational model of the mind.
Sekanina, Lukáš (forthcoming). Evolved computing devices and the implementation problem. Minds and Machines.   (Google)
Abstract: The evolutionary circuit design is an approach allowing engineers to realize computational devices. The evolved computational devices represent a distinctive class of devices that exhibits a specific combination of properties, not visible and studied in the scope of all computational devices up till now. Devices that belong to this class show the required behavior; however, in general, we do not understand how and why they perform the required computation. The reason is that the evolution can utilize, in addition to the “understandable composition of elementary components”, material-dependent constructions and properties of environment (such as temperature, electromagnetic field etc.) and, furthermore, unknown physical behaviors to establish the required functionality. Therefore, nothing is known about the mapping between an abstract computational model and its physical implementation. The standard notion of computation and implementation developed in computer science as well as in cognitive science has become very problematic with the existence of evolved computational devices. According to the common understanding, the evolved devices cannot be classified as computing mechanisms
Shagrir, Oron (1997). Two dogmas of computationalism. Minds and Machines 7 (3):321-44.   (Cited by 11 | Google | More links)
Abstract:   This paper challenges two orthodox theses: (a) that computational processes must be algorithmic; and (b) that all computed functions must be Turing-computable. Section 2 advances the claim that the works in computability theory, including Turing's analysis of the effective computable functions, do not substantiate the two theses. It is then shown (Section 3) that we can describe a system that computes a number-theoretic function which is not Turing-computable. The argument against the first thesis proceeds in two stages. It is first shown (Section 4) that whether a process is algorithmic depends on the way we describe the process. It is then argued (Section 5) that systems compute even if their processes are not described as algorithmic. The paper concludes with a suggestion for a semantic approach to computation
Shagrir, Oron (1999). What is computer science about? The Monist 82 (1):131-149.   (Cited by 7 | Google)
Sloman, Aaron (ms). Supervenience and implementation.   (Cited by 5 | Google | More links)
Abstract: How can a virtual machine X be implemented in a physical machine Y? We know the answer as far as compilers, editors, theorem-provers, operating systems are concerned, at least insofar as we know how to produce these implemented virtual machines, and no mysteries are involved. This paper is about extrapolating from that knowledge to the implementation of minds in brains. By linking the philosopher's concept of supervenience to the engineer's concept of implementation, we can illuminate both. In particular, by showing how virtual machines can be implemented in causally complete physical machines, and still have causal powers, we remove some philosophical problems about how mental processes can be real and can have real effects in the world even if the underlying physical implementation has no causal gaps. This requires a theory of ontological levels
Sloman, Aaron (online). What are virtual machines? Are they real?   (Cited by 4 | Google | More links)
Stabler, Edward P. (1987). Kripke on functionalism and automata. Synthese 70 (January):1-22.   (Cited by 3 | Annotation | Google | More links)
Abstract:   Saul Kripke has proposed an argument to show that there is a serious problem with many computational accounts of physical systems and with functionalist theories in the philosophy of mind. The problem with computational accounts is roughly that they provide no noncircular way to maintain that any particular function with an infinite domain is realized by any physical system, and functionalism has the similar problem because of the character of the functional systems that are supposed to be realized by organisms. This paper shows that the standard account of what it is for a physical system to compute a function can avoid Kripke's criticisms without being reduced to circularity; a very minor and natural elaboration of the standard account suffices to save both functionalist theories and computational accounts generally
Suber, Peter (1988). What is software? Journal of Speculative Philosophy 2:89-119.   (Cited by 4 | Google)
Welch, Philip D. (2004). On the possibility, or otherwise, of hypercomputation. British Journal for the Philosophy of Science 55 (4):739-746.   (Cited by 8 | Google | More links)
Abstract: We claim that a recent article of P. Cotogno ([2003]) in this journal is based on an incorrect argument concerning the non-computability of diagonal functions. The point is that whilst diagonal functions are not computable by any function of the class over which they diagonalise, there is no 'logical incomputability' in their being computed over a wider class. Hence this 'logical incomputability' regrettably cannot be used in his argument that no hypercomputation can compute the Halting problem. This seems to lead him into a further error in his analysis of the supposed conventional status of the infinite time Turing machines of Hamkins and Lewis ([2000]). Theorem 1 refutes this directly
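The diagonalisation point can be stated in a few lines; the following is the standard textbook construction, included only to make the abstract's claim concrete.

    Let $f_0, f_1, f_2, \dots$ enumerate the total functions computable by the
    machines of some class $C$, and define the diagonal function
    \[
        d(n) = f_n(n) + 1 .
    \]
    Then $d(n) \neq f_n(n)$ for every $n$, so $d$ is computed by no member of $C$.
    Nothing in the construction forbids $d$ from being computed by a device drawn
    from a wider class; the incomputability is relative to $C$, not logical.
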

6.5a.1 Computation and Physical Systems, Misc

Moura, Ivan (2006). A model of agent consciousness and its implementation. Neurocomputing 69 (16-18):1984-1995.   (Google)
Piccinini, Gualtiero (2008). Some Neural Networks Compute, Others Don't. Neural Networks 21 (2-3):311-321.   (Google)
Abstract: I address whether neural networks perform computations in the sense of computability theory and computer science. I explicate and defend the following theses. (1) Many neural networks compute—they perform computations. (2) Some neural networks compute in a classical way. Ordinary digital computers, which are very large networks of logic gates, belong in this class of neural networks. (3) Other neural networks compute in a non-classical way. (4) Yet other neural networks do not perform computations. Brains may well fall into this last class.
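Thesis (2) can be illustrated with a toy example (my own, not Piccinini's): a single threshold unit with suitably chosen weights behaves exactly like a Boolean logic gate, which is the sense in which ordinary digital computers can be viewed as very large networks of such units.

    # Illustrative sketch: a threshold (McCulloch-Pitts style) unit as a logic gate.

    def threshold_unit(weights, bias):
        return lambda *xs: int(sum(w * x for w, x in zip(weights, xs)) + bias > 0)

    AND  = threshold_unit((1, 1), -1.5)
    OR   = threshold_unit((1, 1), -0.5)
    NAND = threshold_unit((-1, -1), 1.5)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NAND:", NAND(a, b))

    # NAND is functionally complete, so layered networks of such units can
    # implement any Boolean circuit -- i.e., compute in the classical way.
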

6.5a.2 Analog and Digital Computation

Aihara, Kazuyuki & Ryeu, Jun Kyung (2001). Chaotic neurons and analog computation. Behavioral and Brain Sciences 24 (5):810-811.   (Google)
Abstract: Chaotic dynamics can be related to analog computation. A possibility of electronically implementing the chaos-driven contracting system in the target article is explored with an analog electronic circuit with inevitable noise from the viewpoint of analog computation with chaotic neurons
Chalmers, David J. (ms). Analog vs. digital computation.   (Google)
Abstract: It is fairly well-known that certain hard computational problems (that is, 'difficult' problems for a digital processor to solve) can in fact be solved much more easily with an analog machine. This raises questions about the true nature of the distinction between analog and digital computation (if such a distinction exists). I try to analyze the source of the observed difference in terms of (1) expanding parallelism and (2) more generally, infinite-state Turing machines. The issue of discreteness vs continuity will also be touched upon, although it is not so important for analyzing these particular problems
Demopoulos, William (1987). On some fundamental distinctions of computationalism. Synthese 70 (January):79-96.   (Cited by 9 | Annotation | Google | More links)
Abstract:   The following paper presents a characterization of three distinctions fundamental to computationalism, viz., the distinction between analog and digital machines, representation and nonrepresentation-using systems, and direct and indirect perceptual processes. Each distinction is shown to rest on nothing more than the methodological principles which justify the explanatory framework of the special sciences
Eliasmith, Chris (2000). Is the brain analog or digital? Cognitive Science Quarterly 1 (2):147-170.   (Google | More links)
Abstract: It will always remain a remarkable phenomenon in the history of philosophy, that there was a time, when even mathematicians, who at the same time were philosophers, began to doubt, not of the accuracy of their geometrical propositions so far as they concerned space, but of their objective validity and the applicability of this concept itself, and of all its corollaries, to nature. They showed much concern whether a line in nature might not consist of physical points, and consequently that true space in the object might consist of simple [discrete] parts, while the space which the geometer has in his mind [being continuous] cannot be such
Haugeland, John (1981). Analog and analog. Philosophical Topics 12 (1):213-226.   (Cited by 21 | Google)
Katz, Matthew (2008). Analog and digital representation. Minds and Machines 18 (3).   (Google)
Abstract: In this paper, I argue for three claims. The first is that the difference between analog and digital representation lies in the format and not the medium of representation. The second is that whether a given system is analog or digital will sometimes depend on facts about the user of that system. The third is that the first two claims are implicit in Haugeland's (1998) account of the distinction
Lewis, David (1971). Analog and digital. Noûs 5 (3):321-327.   (Google | More links)
MacLennan, Bruce J. (1994). Words lie in our way. Minds and Machines 4 (4):421-37.   (Cited by 10 | Google | More links)
Abstract:   The central claim of computationalism is generally taken to be that the brain is a computer, and that any computer implementing the appropriate program would ipso facto have a mind. In this paper I argue for the following propositions: (1) The central claim of computationalism is not about computers, a concept too imprecise for a scientific claim of this sort, but is about physical calculi (instantiated discrete formal systems). (2) In matters of formality, interpretability, and so forth, analog computation and digital computation are not essentially different, and so arguments such as Searle's hold or not as well for one as for the other. (3) Whether or not a biological system (such as the brain) is computational is a scientific matter of fact. (4) A substantive scientific question for cognitive science is whether cognition is better modeled by discrete representations or by continuous representations. (5) Cognitive science and AI need a theoretical construct that is the continuous analog of a calculus. The discussion of these propositions will illuminate several terminology traps, in which it's all too easy to become ensnared
Maley, Corey J. (forthcoming). Analog and digital, continuous and discrete. Philosophical Studies.   (Google | More links)
Abstract: Representation is central to contemporary theorizing about the mind/brain. But the nature of representation--both in the mind/brain and more generally--is a source of ongoing controversy. One way of categorizing representational types is to distinguish between the analog and the digital: the received view is that analog representations vary smoothly, while digital representations vary in a step-wise manner. I argue that this characterization is inadequate to account for the ways in which representation is used in cognitive science; in its place, I suggest an alternative taxonomy. I will defend and extend David Lewis's account of analog and digital representation, distinguishing analog from continuous representation, as well as digital from discrete representation. I will argue that the distinctions available in this four-fold account accord with representational features of theoretical interest in cognitive science more usefully than the received analog/digital dichotomy
McDermott, Drew (2001). The digital computer as red Herring. Psycoloquy 12 (54).   (Cited by 1 | Google | More links)
Abstract: Stevan Harnad correctly perceives a deep problem in computationalism, the hypothesis that cognition is computation, namely, that the symbols manipulated by a computational entity do not automatically mean anything. Perhaps, he proposes, transducers and neural nets will not have this problem. His analysis goes wrong from the start, because computationalism is not as rigid a set of theories as he thinks. Transducers and neural nets are just two kinds of computational system, among many, and any solution to the semantic problem that works for them will work for most other computational systems
Piccinini, Gualtiero (2008). Computers. Pacific Philosophical Quarterly 89 (1):32–73.   (Cited by 2 | Google | More links)
Abstract: I offer an explication of the notion of computer, grounded in the practices of computability theorists and computer scientists. I begin by explaining what distinguishes computers from calculators. Then, I offer a systematic taxonomy of kinds of computer, including hard-wired versus programmable, general-purpose versus special-purpose, analog versus digital, and serial versus parallel, giving explicit criteria for each kind. My account is mechanistic: which class a system belongs in, and which functions are computable by which system, depends on the system's mechanistic properties. Finally, I briefly illustrate how my account sheds light on some issues in the history and philosophy of computing as well as the philosophy of mind.
Rubel, Lee A. (1989). Digital simulation of analog computation and church's thesis. Journal of Symbolic Logic 54 (3):1011-1017.   (Google | More links)
Sarpeshkar, Rahul (1998). Analog versus digital: Extrapolating from electronics to neurobiology. Neural Computation 10 (7):1601--1638.   (Google)
Abstract: We review the pros and cons of analog and digital computation. We propose that computation that is most efficient in its use of resources is neither analog computation nor digital computation but, rather, a mixture of the two forms. For maximum efficiency, the information and information-processing resources of the hybrid form must be distributed over many wires, with an optimal signal-to-noise ratio per wire. Our results suggest that it is likely that the brain computes in a hybrid fashion and that an underappreciated and important reason for the efficiency of the human brain, which consumes only 12 W, is the hybrid and distributed nature of its architecture.
Siegelmann, Hava T. (2003). Neural and super-Turing computing. Minds and Machines 13 (1):103-114.   (Cited by 10 | Google | More links)
Abstract:   "Neural computing" is a research field based on perceiving the human brain as an information system. This system reads its input continuously via the different senses, encodes data into various biophysical variables such as membrane potentials or neural firing rates, stores information using different kinds of memories (e.g., short-term memory, long-term memory, associative memory), performs some operations called "computation", and outputs onto various channels, including motor control commands, decisions, thoughts, and feelings. We show a natural model of neural computing that gives rise to hyper-computation. Rigorous mathematical analysis is applied, explicating our model's exact computational power and how it changes with the change of parameters. Our analog neural network allows for supra-Turing power while keeping track of computational constraints, and thus embeds a possible answer to the superiority of the biological intelligence within the framework of classical computer science. We further propose it as standard in the field of analog computation, functioning in a role similar to that of the universal Turing machine in digital computation. In particular an analog of the Church-Turing thesis of digital computation is stated where the neural network takes the place of the Turing machine
Trenholme, Russell (1994). Analog simulation. Philosophy of Science 61 (1):115-131.   (Cited by 4 | Google | More links)
Abstract: The distinction between analog and digital representation is reexamined; it emerges that a more fundamental distinction is that between symbolic and analog simulation. Analog simulation is analyzed in terms of a (near) isomorphism of causal structures between a simulating and a simulated process. It is then argued that a core concept, naturalistic analog simulation, may play a role in a bottom-up theory of adaptive behavior which provides an alternative to representational analyses. The appendix discusses some formal conditions for naturalistic analog simulation.
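A minimal instance of the causal-structure (near-)isomorphism described here (my illustration, not the paper's): a discharging RC circuit and exponential radioactive decay obey the same first-order law, so the trajectory of one can be mapped onto the trajectory of the other.

    \[
        \frac{dV}{dt} = -\frac{1}{RC}\,V, \qquad \frac{dN}{dt} = -\lambda N .
    \]
    Under the correspondence $V \leftrightarrow N$ and $1/RC \leftrightarrow \lambda$,
    the circuit's voltage trace is an analog simulation of the decay process: the
    causal structures match, while the physical media differ.
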

6.5a.3 Computers

Piccinini, Gualtiero (2008). Some Neural Networks Compute, Others Don't. Neural Networks 21 (2-3):311-321.   (Google)
Abstract: I address whether neural networks perform computations in the sense of computability theory and computer science. I explicate and defend the following theses. (1) Many neural networks compute—they perform computations. (2) Some neural networks compute in a classical way. Ordinary digital computers, which are very large networks of logic gates, belong in this class of neural networks. (3) Other neural networks compute in a non-classical way. (4) Yet other neural networks do not perform computations. Brains may well fall into this last class.

6.5a.4 Implementing Computations

Chalmers, David J. (ms). A computational foundation for the study of cognition.   (Google | More links)
Abstract: Computation is central to the foundations of modern cognitive science, but its role is controversial. Questions about computation abound: What is it for a physical system to implement a computation? Is computation sufficient for thought? What is the role of computation in a theory of cognition? What is the relation between different sorts of computational theory, such as connectionism and symbolic computation? In this paper I develop a systematic framework that addresses all of these questions. Justifying the role of computation requires analysis of implementation, the nexus between abstract computations and concrete physical systems. I give such an analysis, based on the idea that a system implements a computation if the causal structure of the system mirrors the formal structure of the computation. This account can be used to justify the central commitments of artificial intelligence and computational cognitive science: the thesis of computational sufficiency, which holds that the right kind of computational structure suffices for the possession of a mind, and the thesis of computational explanation, which holds that computation provides a general framework for the explanation of cognitive processes. The theses are consequences of the facts that (a) computation can specify general patterns of causal organization, and (b) mentality is an organizational invariant, rooted in such patterns. Along the way I answer various challenges to the computationalist position, such as those put forward by Searle. I close by advocating a kind of minimal computationalism, compatible with a very wide variety of empirical approaches to the mind. This allows computation to serve as a true foundation for cognitive science
Mallah, Jacques (ms). The partial brain thought experiment: Partial consciousness and its implications.   (Google | More links)
Abstract: The ‘Fading Qualia' thought experiment of Chalmers purports to show that computationalism is very probably true even if dualism is true by considering a series of brains, with biological parts increasingly substituted for by artificial but functionally analogous parts in small steps, and arguing that consciousness would not plausibly vanish in either a gradual or sudden way. This defense of computationalism inspired an attack on computationalism by Bishop, who argued that a similar series of substitutions by parts that have the correct physical activity but not the correct causal relationships must likewise preserve consciousness, purportedly showing that ‘Counterfactuals Cannot Count' and if so ruining a necessary condition for computation to meaningfully distinguish between physical systems. In this paper, the case in which a series of parts are simply removed and substituted for only by imposing the correct boundary conditions to exactly preserve the functioning of the remaining partial brain is described. It is argued that consciousness must gradually vanish in this case, not by fading but by becoming more and more partial. This supports the non-centralized nature of consciousness, tends to support the plausibility of physicalism against dualism, and provides the proper counterargument to Bishop's contention. It also provides an avenue of attack against the "Fading Qualia" argument for those who remain dualists
Moura, Ivan (2006). A model of agent consciousness and its implementation. Neurocomputing 69 (16-18):1984-1995.   (Google)
Sergeyev, Yaroslav D. (2008). A new applied approach for executing computations with infinite and infinitesimal quantities. Informatica 19 (4):567-596.   (Google | More links)
Abstract: A new computational methodology for executing calculations with infinite and infinitesimal quantities is described in this paper. It is based on the principle ‘The part is less than the whole’ introduced by Ancient Greeks and applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers by a finite number of symbols as particular cases of a unique framework. The new methodology has allowed us to introduce the Infinity Computer working with such numbers (its simulator has already been realized). Examples dealing with divergent series, infinite sets, and limits are given.
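For orientation, Sergeyev's methodology introduces an infinite unit of measure written ① (the 'grossone') and expresses finite, infinite, and infinitesimal quantities as finite combinations of its powers; a numeral of the following shape is an illustrative example in the spirit of the paper, not taken from it:

    \[
        3\cdot\text{①}^{2} \;+\; 2\cdot\text{①} \;+\; 5 \;+\; 4\cdot\text{①}^{-1},
    \]
    where the positive powers of ① are infinite, the constant term is finite, and the
    negative power is infinitesimal; arithmetic on such numerals follows the familiar
    rules for sums of powers.
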

6.5a.5 Noncomputable Processes

Black, Robert (2000). Proving church's thesis. Philosophia Mathematica 8 (3).   (Google)
Abstract: Arguments to the effect that Church's thesis is intrinsically unprovable because proof cannot relate an informal, intuitive concept to a mathematically defined one are unconvincing, since other 'theses' of this kind have indeed been proved, and Church's thesis has been proved in one direction. However, though evidence for the truth of the thesis in the other direction is overwhelming, it does not yet amount to proof

6.5a.6 Pancomputationalism

Bishop, Michael A. (2003). Dancing with pixies: Strong artificial intelligence and panpsychism. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.   (Google)
Dodig-Crnkovic, Gordana (2008). Empirical Modeling and Information Semantics. Mind & Society 7 (2):157.   (Google)
Abstract: This paper investigates the relationship between reality and model, information and truth. It will argue that meaningful data need not be true in order to constitute information. Information to which truth-value cannot be ascribed, partially true information or even false information can lead to an interesting outcome such as technological innovation or scientific breakthrough. In the research process, during the transition between two theoretical frameworks, there is a dynamic mixture of old and new concepts in which truth is not well defined. Instead of veridicity, correctness of a model and its appropriateness within a context are commonly required. Despite empirical models being in general only truthlike, they are nevertheless capable of producing results from which conclusions can be drawn and adequate decisions made.
Dodig-Crnkovic, Gordana (2008). Knowledge Generation as Natural Computation. Journal of Systemics, Cybernetics and Informatics 6 (2).   (Google)
Abstract: Knowledge generation can be naturalized by adopting computational model of cognition and evolutionary approach. In this framework knowledge is seen as a result of the structuring of input data (data → information → knowledge) by an interactive computational process going on in the agent during the adaptive interplay with the environment, which clearly presents developmental advantage by increasing agent’s ability to cope with the situation dynamics. This paper addresses the mechanism of knowledge generation, a process that may be modeled as natural computation in order to be better understood and improved
Dodig-Crnkovic, Gordana (online). Semantics of Information as Interactive Computation. Proceedings of the Fifth International Workshop on Philosophy and Informatics.   (Google)
Abstract: Computers today are not only calculation tools - they directly (inter)act in the physical world, which itself may be conceived of as a universal computer (Zuse, Fredkin, Wolfram, Chaitin, Lloyd). In expanding its domain from abstract logical symbol manipulation to physical, embedded and networked devices, computing goes beyond the Church-Turing limit (Copeland, Siegelmann, Burgin, Schachter). Computational processes are distributed, reactive, interactive, agent-based and concurrent. The main criterion of success of a computation is not its termination, but the adequacy of its response: its speed, generality, flexibility, adaptability, and tolerance to noise, error, faults, and damage. Interactive computing is a generalization of Turing computing, and it calls for new conceptualizations (Goldin, Wegner). In the info-computationalist framework, with computation seen as information processing, natural computation appears as the most suitable paradigm of computation, and information semantics requires logical pluralism.
Dodig-Crnkovic, Gordana (2003). Shifting the paradigm of philosophy of science: Philosophy of information and a new renaissance. Minds and Machines 13 (4).   (Google)
Abstract: Computing is changing the traditional field of Philosophy of Science in a very profound way. First, as a methodological tool, computing makes possible an 'experimental philosophy' that can provide practical tests for different philosophical ideas. At the same time, the ideal object of investigation of the Philosophy of Science is changing. For a long period of time the ideal science was Physics (e.g., Popper, Carnap, Kuhn, and Chalmers). Now the focus is shifting to the field of Computing/Informatics. There are many good reasons for this paradigm shift, one of them being a long-standing need for a new meeting between the sciences and the humanities, for which the new discipline of Computing/Informatics offers innumerable possibilities. Contrary to Physics, Computing/Informatics is very much human-centered. It brings a potential for a new Renaissance, in which Science and the Humanities, Arts and Engineering can reach a new synthesis, so very much needed in our intellectually split culture. This paper investigates contemporary trends and the relation between the Philosophy of Science and the Philosophy of Computing and Information, which is equivalent to the present relation between the Philosophy of Science and the Philosophy of Physics
Grigg, Rowan (ms). A case for lattice schemes in fundamental physics.   (Google)
Abstract: A synthesis of trending topics in pancomputationalism. I introduce the notion that "strange loops" engender the most atomic levels of physical reality, and introduce a mechanism for global non-locality. Written in a simple and accessible style, it seeks to draw research in fundamental physics back to realism, and to have a bit of fun in the process.
Grigg, Rowan (ms). The Universal Lattice.   (Google)

6.5a.7 Quantum Computation

Bub, Jeffrey (2008). Quantum computation and pseudotelepathic games. Philosophy of Science 75 (4).   (Google)
Abstract: A quantum algorithm succeeds not because the superposition principle allows ‘the computation of all values of a function at once' via ‘quantum parallelism', but rather because the structure of a quantum state space allows new sorts of correlations associated with entanglement, with new possibilities for information-processing transformations between correlations, that are not possible in a classical state space. I illustrate this with an elementary example of a problem for which a quantum algorithm is more efficient than any classical algorithm. I also introduce the notion of ‘pseudotelepathic' games and show how the difference between classical and quantum correlations plays a similar role here for games that can be won by quantum players exploiting entanglement, but not by classical players whose only allowed common resource consists of shared strings of random numbers (common causes of the players' correlated responses in a game).
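As a concrete illustration of the kind of game Bub discusses, consider the standard GHZ (Mermin) pseudotelepathy game (the usual example in this literature; the abstract does not specify which game the paper uses). A short Python enumeration, sketched below, verifies that classical players restricted to deterministic strategies win at most 3 of the 4 allowed questions, whereas players sharing a GHZ state can win all of them.

    from itertools import product

    # Allowed questions (x, y, z) satisfy x XOR y XOR z = 0; the players win
    # iff their answer bits satisfy a XOR b XOR c = x OR y OR z.
    questions = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

    def wins(answers, question):
        a, b, c = answers
        x, y, z = question
        return (a ^ b ^ c) == (x | y | z)

    best = 0
    # A deterministic classical strategy fixes, for each player, one answer to
    # input 0 and one to input 1; shared random strings cannot do better than
    # the best such deterministic strategy.
    for s in product(product((0, 1), repeat=2), repeat=3):
        score = sum(wins((s[0][x], s[1][y], s[2][z]), (x, y, z))
                    for (x, y, z) in questions)
        best = max(best, score)

    print(best, "out of", len(questions))   # 3 out of 4: no classical strategy wins every question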
Duwell, Armond (2007). The many-worlds interpretation and quantum computation. Philosophy of Science 74 (5).   (Google)
Abstract: David Deutsch and others have suggested that the Many-Worlds Interpretation of quantum mechanics is the only interpretation capable of explaining the special efficiency quantum computers seem to enjoy over classical ones. I argue that this view is not tenable. Using a toy algorithm I show that the Many-Worlds Interpretation must crucially use the ontological status of the universal state vector to explain quantum computational efficiency, as opposed to the particular ontology of the MWI, that is, the computational histories of worlds. As such, any other interpretation that treats the state vector as representing real ontological features of a system can explain quantum speedup too.
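For concreteness, here is a minimal state-vector simulation of Deutsch's algorithm, the standard toy example in this literature (the abstract does not say whether it is the toy algorithm Duwell uses): a single oracle call decides whether f: {0,1} → {0,1} is constant or balanced, and the decision is read off the full two-qubit state vector.

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    I2 = np.eye(2)

    def oracle(f):
        # U_f |x>|y> = |x>|y XOR f(x)>, written as a 4x4 permutation matrix
        U = np.zeros((4, 4))
        for x in (0, 1):
            for y in (0, 1):
                U[2 * x + (y ^ f(x)), 2 * x + y] = 1
        return U

    def deutsch(f):
        state = np.kron(np.array([1.0, 0.0]), np.array([0.0, 1.0]))  # |0>|1>
        state = np.kron(H, H) @ state
        state = oracle(f) @ state           # the single oracle call
        state = np.kron(H, I2) @ state
        p0 = state[0] ** 2 + state[1] ** 2  # probability the first qubit reads 0
        return "constant" if p0 > 0.5 else "balanced"

    print(deutsch(lambda x: 0))   # constant
    print(deutsch(lambda x: x))   # balanced

The simulation tracks nothing but the state vector, which is the feature Duwell's argument turns on: the explanation of the speedup appeals to the state vector's ontological status rather than to world-histories.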
Duwell, A. (2003). The physics of quantum information: Quantum cryptography, quantum teleportation, quantum computation - D. Bouwmeester, A. Ekert and A. Zeilinger (eds.); Germany, 2000, 314pp, US$ 54, ISBN 3-540-66778-. Studies in History and Philosophy of Science Part B 34 (2):331-334.   (Google)
Fernández, Eliseo (2008). A triadic theory of elementary particle interactions and quantum computation (review). Transactions of the Charles S. Peirce Society 44 (2):384-389.   (Google)
Grigg, Rowan (ms). The Universal Lattice.   (Google)
Hameroff, Stuart R. (online). Consciousness, Whitehead and quantum computation in the brain: Panprotopsychism meets the physics of fundamental spacetime geometry.   (Cited by 2 | Google)
Abstract: Approaches to the problem of subjective experience include dualism (consciousness lies outside knowable science), emergence (consciousness arises as a novel property from complex computational dynamics in the brain), and some form of panpsychism, pan-protopsychism, or pan-experientialism (essential features or precursors of consciousness are fundamental components of reality which are accessed by brain processes). In addition to 1) the problem of subjective experience, other related enigmatic features of consciousness persist, defying technological and philosophical inroads. These include 2) the "binding problem": how disparate brain activities give rise to a unified sense of "self" or unified conscious content. Temporal synchrony, brain-wide coherence of neural membrane electrical activities, is often assumed to accomplish binding, but what is being synchronized? What is being coherently bound? Another enigmatic feature is 3) the transition from pre-conscious processes to consciousness itself. Most neuroscientists agree that consciousness is the "tip of an iceberg", that the vast majority of brain activities is
Hameroff, Stuart R. (2002). Quantum computation in brain microtubules. Physical Review E 65 (6).   (Cited by 11 | Google)
Abstract: Proposals for quantum computation rely on superposed states implementing multiple computations simultaneously, in parallel, according to quantum linear superposition (e.g., Benioff, 1982; Feynman, 1986; Deutsch, 1985; Deutsch and Jozsa, 1992). In principle, quantum computation is capable of specific applications beyond the reach of classical computing (e.g., Shor, 1994). A number of technological systems aimed at realizing these proposals have been suggested and are being evaluated as possible substrates for quantum computers (e.g., trapped ions, electron spins, quantum dots, nuclear spins, etc.; see Table 1; Bennett, 1995; and Barenco, 1995). The main obstacle to realization of quantum computation is the problem of interfacing to the system (input, output) while also protecting the quantum state from environmental decoherence. If this problem can be overcome, then present day classical computers may evolve to quantum computers
J., M. (2001). On bits and quanta - Hoi-Kwong Lo, Sandu Popescu and Tim Spiller (eds), Introduction to Quantum Computation and Information (Singapore: World Scientific, 1998), XI+348 pp., ISBN 981-02-3399-X, £35, US$52. Studies in History and Philosophy of Science Part B 32 (1):143-150.   (Google)
Ledda, Antonio; Konig, Martinvaldo; Paoli, Francesco & Giuntini, Roberto (2006). MV-Algebras and quantum computation. Studia Logica 82 (2).   (Google)
Abstract: We introduce a generalization of MV algebras motivated by the investigations into the structure of quantum logical gates. After laying down the foundations of the structure theory for such quasi-MV algebras, we show that every quasi-MV algebra is embeddable into the direct product of an MV algebra and a “flat” quasi-MV algebra, and prove a completeness result w.r.t. a standard quasi-MV algebra over the complex numbers
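For orientation only (the paper's standard quasi-MV algebra is defined over the complex numbers and is not reproduced here), the standard MV-algebra is the real unit interval [0, 1] equipped with the Łukasiewicz operations:

\[
x \oplus y = \min(1,\; x + y), \qquad \neg x = 1 - x, \qquad x \odot y = \neg(\neg x \oplus \neg y) = \max(0,\; x + y - 1).
\]

Chang's completeness theorem says that the equations valid in this single algebra axiomatize the whole variety of MV-algebras; by the abstract, the standard quasi-MV algebra over the complex numbers plays the analogous role for quasi-MV algebras.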