Analog vs. Digital Computation

David J. Chalmers (February 1989)

It is fairly well-known that certain hard computational problems (that is, 'difficult' problems for a digital processor to solve) can in fact be solved much more easily with an analog machine. This raises questions about the true nature of the distinction between analog and digital computation (if such a distinction exists). I try to analyze the source of the observed difference in terms of (1) expanding parallelism and (2) more generally, infinite-state Turing machines. The issue of discreteness vs continuity will also be touched upon, although it is not so important for analyzing these particular problems.

We find that to truly explain these problems we have to answer hard questions about the nature of the universe.

Analog solutions to hard computational problems (e.g. NP-complete ones).

Examples: the spaghetti sorter (to sort n numbers, cut n sticks of dry spaghetti to proportional lengths, bunch them upright on a table, and repeatedly pick out the tallest), and similar physical devices.

    These solutions all cheat by using PARALLEL computation, inherent in nature. But, you say: these problems can be shown to be just as hard on parallel machines as on serial machines (things just get speeded up by a constant factor).

    But aha - there is something special going on here. Nature is not just providing us with an ordinary parallel computer. It is an EXPANDING parallel computer whose parallelism expands to meet our needs. In the spaghetti sorter, for instance, we essentially have n parallel processors, one for each stick of spaghetti. As n increases, so does the number of processors. This is unlike any typical digital computer in our experience.
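
    To make the counting concrete, here is a rough sketch (my own illustration, with made-up names, in Python) in which the analog "press the bundle and grab the tallest stick" operation is charged as a single unit of time, since the n sticks do the comparison work in parallel:

        # Spaghetti-sorter sketch: each press-and-grab is one analog step,
        # so sorting n values takes about n analog steps instead of the
        # ~n log n comparisons a serial algorithm needs.
        def spaghetti_sort(values):
            sticks = list(values)          # cut one stick per value
            ordered = []
            analog_steps = 0
            while sticks:
                tallest = max(sticks)      # the hand 'feels' the tallest stick at once
                sticks.remove(tallest)
                ordered.append(tallest)
                analog_steps += 1          # one press-and-grab per output element
            return ordered, analog_steps

        print(spaghetti_sort([3, 1, 4, 1, 5, 9, 2, 6]))
        # -> ([9, 6, 5, 4, 3, 2, 1, 1], 8): tallest first, n analog steps in all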

    Using natural parallelism, here is another, digital, solution: building n digital processors will only take time proportional to n (with a rather large constant factor). Once this is done, many of these hard problems become easier. With a self-replicating machine, humans could even be taken out of the loop altogether!

    Obviously both of these solutions are cheating mildly when compared with the original formulation. The original formulation only applied to nice, Turing-Machine-like algorithmic digital devices. The first solution uses "arbitrary" properties of the real world; and even the second "jumps out of the system" by physically fabricating new machines. But still... does this not mean that perhaps the original definition of 'algorithm' was a little restricted (Church and Turing turn in their graves)?

    Anyway: the essence of the trick lies in EXPANDING PARALLELISM. We could give a formal definition of such machines and see what theorems apply to them.

    Question: although we talk of (indefinitely) expanding parallelism, is it not the case that we start out with one huge parallel machine, namely the physical universe? We are not so much expanding our machine as making better and better use of its resources. If the universe is infinite, this is OK - improvement can go on indefinitely. But if the universe is huge but finite, then "expanding parallelism" is only an illusion. In the end we are going to exhaust all the spaghetti sticks in existence! This would be analogous to having a computer which was hard-wired to solve a problem extremely fast for n < 100, but which failed for all n >= 100.

    (General principle for all these analog 'solutions': think hard about what happens for very large n.)


    Basic question: is the universe an infinite-state machine or a finite-state machine? If the latter, then it is WEAKER than a Turing machine, so these analog solutions are theoretically weak. If the former, is it true that it is STRONGER than a Turing machine? At the very least, it seems to obey different theorems of computational complexity.

    The only way a Turing machine can accommodate 'parallelism' is through its states. But since a TM has by definition only a finite number of states, indefinitely expanding parallelism is out of the question. (Of course, such parallelism can be 'simulated' serially by use of the tape; this is good enough for questions of computability, but not helpful for questions of computational complexity.)
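
    A quick sketch of this serial simulation (my own illustration): k 'processors' are interleaved one step at a time, so whatever they compute is still computed, but every parallel time-step now costs about k serial steps.

        # Round-robin simulation of k parallel processes on one serial machine.
        def run_parallel_serially(processes):
            serial_steps = 0
            live = list(processes)
            while live:
                still_live = []
                for p in live:             # one 'parallel' step = len(live) serial steps
                    try:
                        next(p)
                        still_live.append(p)
                    except StopIteration:
                        pass
                    serial_steps += 1
                live = still_live
            return serial_steps

        def worker(n_steps):               # a trivial stand-in for one processor
            for _ in range(n_steps):
                yield

        # 100 processors, 10 parallel steps each: about 100 x 10 serial steps.
        print(run_parallel_serially([worker(10) for _ in range(100)]))   # -> 1100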

    If we have an infinite number of states, it seems that indefinitely expanding parallelism can be accommodated. As we've seen, this leads to better results for questions of computational complexity. It is an interesting question whether such a machine would be more powerful computationally (in terms of the problems it could solve, independent of resource use).

    Presumably we have to specify our definition a little better. First: do we allow an uncountable number of states? This might behave even better, but presumably it is outside the spirit of computation, which has it that algorithms must be well-defined and in a sense 'constructible'. The well-known inconstructibility of uncountable sets (or at least of most of their members) would seem to mean that such a machine is inherently "impractical".

    OK, given a countable number of states - we'll index them by the positive integers. Next question: what do we allow as our transition functions? Do they have to be computable functions, or do we allow any function from integers to integers to be 'hard-wired'? This question does not arise for finite-state machines, because there every function (on a finite domain) is computable. My intuition here is that again, to be computationally 'reasonable', such functions must be computable and maybe even primitive recursive.
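
    As a sketch of the kind of object I have in mind (the names are my own), such a machine is just a start state together with a computable - here even primitive recursive - transition function on the integers, rather than a finite hard-wired table:

        # Infinite-state machine: states indexed by positive integers,
        # transition given by an ordinary computable function.
        class InfiniteStateMachine:
            def __init__(self, transition, start_state=1):
                self.transition = transition   # (state, symbol) -> state
                self.state = start_state

            def run(self, inputs):
                for symbol in inputs:
                    self.state = self.transition(self.state, symbol)
                return self.state

        # Example: an unbounded counter, which no finite-state machine can
        # realise exactly, but which is trivial with infinitely many states.
        counter = InfiniteStateMachine(lambda state, symbol: state + symbol)
        print(counter.run([1, 1, 1, 1]))       # -> 5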

    Incidentally, where does the physical universe fall among these classes? Assuming it is (a) infinite and (b) discrete (this is arguable), I believe that, although the number of 'possible' states is uncountable (from a calculation like 2 raised to a countable infinity), the number of 'reachable' states (from constructible initial situations) is countable: there are only countably many constructible initial situations, and from each only countably many configurations are reachable in finite time. And given that the laws of physics are reasonable, the transition function is computable and indeed primitive recursive (though it may be non-deterministic).

    ----

    My basic idea of an infinite-state machine is like this. As we already have an infinite number of states, we have no need for a tape - we can simply incorporate it into the states. One obvious way of specifying a state is by an infinite string of 0's and 1's. (Note, however, that this gives an uncountable number of states.) We simply require that every string at time t determines another string at time t+1.

    (In imagining this, we imagine ourselves 'outside' the system and determining its initial state, or at least a relevant portion of it. Of course in the case of the physical universe we are in fact INSIDE the system; but given the 'illusion' of free will we can treat it as if we were manipulating things from the outside).

    The idea of having "reasonable" laws of physics corresponds to the idea of having a computable transition function. So I require that the digit at position P in the string be a primitive recursive function of the previous string. (To make this totally meaningful, we may have to make it determined LOCALLY: the digit at position P can depend only on its N nearest neighbours.)
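
    A sketch of this picture (my own, purely illustrative): a state is a function from positions to bits, and one time-step applies a local rule under which the digit at P depends only on its immediate neighbours.

        # One time-step of the 'infinite string' machine: the new digit at p
        # is a local (and clearly primitive recursive) function of the old
        # digits at p-1, p, p+1.
        def step(state, rule):
            def next_state(p):
                return rule(state(p - 1), state(p), state(p + 1))
            return next_state

        xor_rule = lambda left, centre, right: left ^ right   # a sample local rule

        state = lambda p: 1 if p == 0 else 0   # a single 1 at the origin
        for _ in range(3):
            state = step(state, xor_rule)
        print([state(p) for p in range(-4, 5)])   # -> [0, 1, 0, 1, 0, 1, 0, 1, 0]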

    This definition corresponds fairly closely to an infinite discrete universe. Another possibility would be the 'potentially infinite universe'. Here we require that all initial configurations consist of 0's outside some finite range. The 'local' nature of the laws dictates that information is always contained within a finite range, but this finite range can keep expanding. Perhaps, given the Big Bang theory, this corresponds closest of all to our universe. The computational power here is not obvious. It seems that indefinitely expanding parallelism can still be accommodated in such a universe, but my intuition is that it should be computationally weaker.
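
    Here is a corresponding sketch (again my own illustration): because the configuration is 0 outside a finite range, it can be stored as the finite set of positions holding a 1, and a local rule lets that range grow by at most one cell per side per step - expanding without bound, but always finite.

        # Potentially infinite universe: finite support, locally updated.
        def step_sparse(ones, rule):
            if not ones:
                return set()
            candidates = range(min(ones) - 1, max(ones) + 2)  # only here can a 1 appear
            return {p for p in candidates
                    if rule(int(p - 1 in ones), int(p in ones), int(p + 1 in ones))}

        xor_rule = lambda left, centre, right: left ^ right   # maps (0,0,0) to 0,
                                                              # so the support stays finite
        config = {0}                          # a single 1 at the origin
        for t in range(4):
            config = step_sparse(config, xor_rule)
            print(t + 1, sorted(config))      # the populated range widens by one cell per side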

    Incidentally, the potentially infinite universe is also nice because it only requires a countable number of states. I believe this can be shown to be equivalent to the infinite-state machine whose states are indexed by the integers (a few paragraphs above).


    If in fact it turns out that the universe is non-discrete, it would take a strong argument to convince me that this leads to inherently different results from those of a discrete universe. Everything we have seen indicates that all interesting properties of non-discreteness can be simulated using low-level discreteness, especially with the help of non-determinism. But I may be wrong here. A finite but non-discrete universe might be worth analysing. As we've seen, a finite discrete universe is equivalent only to a (rather big) finite-state automaton, so despite our initial illusion of power it is in fact weaker than a Turing machine. If the universe were finite and non-discrete (as may well in fact be the case), then on the face of it we have a powerful infinite-state machine, more powerful than a Turing machine. The question is whether the continuity at smaller and smaller scales could even in principle be exploited by us computationally, or whether (for instance) randomness, 'chaos' and noise would make this system no more powerful than the discrete case.


    Summing up, we have a few cases.

    Infinite discrete universe: this is an infinite-state machine, with an uncountable number of states. It seems that it can accommodate indefinitely expanding parallelism. Results in computational complexity here are less restrictive than those for ordinary Turing machines. It is not initially obvious whether its power in the computability sense is greater than or merely equivalent to a Turing machine's.

    Potentially infinite discrete universe: this is an infinite-state machine with a countable number of states. It can probably accommodate indefinitely expanding parallelism, although its computational power may be a little weaker than in the case above.

    Finite discrete universe: this is equivalent to a finite-state automaton, so it is weaker than a Turing machine despite the initial impression of computational power.

    Finite continuous universe: harder to analyze. Perhaps equivalent to the finite discrete case, perhaps more powerful.

    Infinite continuous universe: could this be more powerful than the infinite discrete case? This is hard to say. The discrete case was extremely powerful already. Perhaps once we have infinitude, continuity can give us nothing extra.

    The question of whether continuity is inherently more powerful than discreteness is a fascinating one which needs to be analyzed. Here, however, I have mainly been concerned with issues of finite vs. infinite and serial vs. parallel - these are the issues which underlie the 'powerful' analog machines mentioned at the start. Discreteness vs. continuity may be another important aspect of the analog vs. digital issue, but there is no hard evidence for this as yet.