How to Make a Soul

By Andrew Brown


Among the professionals, philosophical argument is a martial art. Nervous spectators pull their beer bottles out of the way in case a backhand logic chop should sweep them off the table. This is personal.

The young man defending himself in the Empire Bar is dressed like a jobbing rock star: t-shirt, tight jeans around skinny legs, great curly masses of brown hair reaching down his back. But he moves like a fighter; every point launched against him is blocked, deftly, with his outspread palms.

His opponent is older, still wiry, with black hair and moustache. The more the young man blocks, the harder the older man's arms throw new points. Soon he is arguing from his shoulders, like a boxer. Finally, he seizes a beer bottle and thrusts it in front of his opponent. "Look," he shouts. "There's only one thing I want to know. Do you think this beer bottle has consciousness?"

There's a pause. "Well, it might have," says David Chalmers. A ripple of appreciative relaxation runs round the audience. The bout is over. One of the spectators takes the bottle from Bruce Mangan's hand and carefully tears a strip from the label. He waves it in front of Chalmers. "So what happens now?" he asks. "Where has the consciousness gone in the paper? Has the label got its own little consciousness?"

It's a game. It is also extremely serious. The crowd around Chalmers and Mangan at the Empire Bar is part of a congregation of neuroscientists, philosophers, psychologists, quantum physicists and AI gurus that has come in pilgrimage to Tucson for a conference called Towards a Science of Consciousness. There are almost a thousand people attending, and each participant worth talking to has at least half a dozen good theories, mostly incompatible. What holds all the theories and disciplines together is simple. These people think that the nature of consciousness is the most exciting intellectual frontier in the world today. It is here that science seems to be closing in on the essence of what makes us human.

Yet there is a paradox at the heart of this work. The more the researchers learn, the further the true subject recedes. The more is learned about how the brain works, the less this knowledge can be tied up with how it feels to have a brain - to feel oneself living and thinking. Science has always discussed the world from the outside. Can it ever encompass how the world feels from the inside?

There is no doubt that the process of learning from the outside is getting better and better. You can use a brain scanner to look inside someone's head while he learns a poem or recites one back. You can see which parts of the brain are working, and how hard. But even if these scanners could track each signal in each of the brain's billions of cells, they would still not explain why that activity should feel the way it does. People may, someday, be able to find which pattern of neurons firing makes up a thought. But what does that tell you about consciousness, about the feeling of the thought?

Chalmers calls the discovery of the patterns in the brain that correspond to thinking the easy problem. By calling it easy, he does not mean to deny that it is also fiendishly difficult. It would mark the culmination of centuries of work in neurology, in biochemistry, in experimental psychology. Answer that easy problem and you have the mechanics of memory, learning and sensation laid out before you.

But easy is a relative term. To Chalmers, such questions are easy because people have some idea of how, in principle, they might be answered. This contrasts with the question of why consciousness arises from a physical process. That "why" is what Chalmers calls the hard problem. As he outlines its hardness, he speaks in a light, hurried voice. His words come in quick flurries, like snow.

Watching Chalmers talk, you realise you can see less and less; you're stuck deeper and deeper in a world of improbable contradictions. "When it comes to the hard problem, my feeling is that you need something that goes beyond physical theory, because everything in physical theory is compatible with the absence of consciousness; my feeling is that you have to take consciousness as axiomatic, like time and space. The problem comes with constructing a theory that will link them all together. You want to get down to something that's deep and fundamental - a set of laws simple enough that you can write them on the front of a t-shirt." The scientific study of consciousness, Chalmers says, "is like physics before Isaac Newton came along. No one knows what is really happening."

A new way of looking at the world, expressible in some practical, useful axioms. A call for a new Newton. No wonder that the brilliant and ambitious flocked to the Tucson conference, the second of its ilk and the foremost gathering of its type in the world, to test their mettle.

All You Zombies

Go back a few decades and you will be hard put to find any inkling of a science of consciousness. Behaviourists treated animals and humans as black boxes; neurologists were still puzzled by the synapse. Even the study of perception was academically suspect. But fashions change, and new tools become available, and those two things shape all science. When Francis Crick, one of the discoverers of the structure of DNA, announced in the '80s that he was going to spend the rest of his life studying consciousness, he was responding to, and amplifying, a growing sense that consciousness would be the next big thing.

The idea that some animals are conscious is now almost as common among the scientists who long rejected it as it always has been with everyone else. "Most people have no difficulty in seeing consciousness in cats or dogs, though once you get down to flies it's more difficult," says Chalmers. "But there may be some very simple form of consciousness, experience, without much in the way of thought or activity - something about consciousness that is pre-intellectual." Then, as an afterthought: "Some of our machines may have that now."

With this remark he leaps across one of the great chasms that divide the field. Nothing has done more to sharpen the issues involved in consciousness research than the promise, or the spectre, of artificial intelligence (AI). As Chalmers, who studied for two years under AI guru Douglas Hofstadter, puts it, "The deep question is why any physical system, whether machine or animal, is associated with consciousness. But brains did it, so why shouldn't machines, too?" In other words, when the hard problem is solved, artificial intelligence in its truest form is thrown in as an added bonus. The problem of making a computer that knows itself to be capable of thought, one with an inner life, joins the ranks of the easy questions.

Some people think it already belongs there. Dan Dennett is a large, long-legged man, with a great rounded skull like an ostrich egg, and a beard like God (in whom he does not believe). Wherever he stepped in the corridors or anterooms of the Tucson conference he became the immediate focus of a ragged, admiring ellipse of students and disputants upon whom he beamed with sharp benevolence. He is the inventor of one of the classical thought experiments of AI: the replacement of a grey porridgy brain - yours, for the sake of argument - with a shiny new one made of something utterly different.

Cell by cell, bit by bit, your neurons would be turned to silicon. But each piece of silicon would have exactly the same connections and behaviour as the cell it replaced. The neighbouring cells would get exactly the same responses as they would if the delicate electric and chemical feelers of their dendrites were still brushing against other cells, rather than just being plugged into the wiring. Would you notice? Would you care? And if you did, why?

No one has yet found anything magic about the way the neurons in the brain signal to one another. It is immensely complicated, but it's only chemicals and electricity. It can be measured and mimicked. In the outer suburbs of the brain it already has been. There is a treatment for deafness which replaces a defective nerve with circuitry. Treatments for blindness that would replace the eye with a television camera are already thinkable. Why not replace a whole brain, and thus show that silicon can support consciousness as well as carbon does?

This sort of reasoning by equivalence has a long pedigree in AI. If the discipline has a founding principle, it is that of the Turing test: the idea that a computer which could converse as convincingly as a human could be said to have artificially captured the human intellect. By taking a similar line of thought rather further, Dennett concludes that the hard problem, as defined by Chalmers, is no more than a mirage. When all the "easy" problems about how the brain processes information have been solved, we will discover that the hard problem has simply disappeared.

Indeed, one of the things that makes the hard problem so difficult at the moment is that lots of people can't see that it is a problem at all. People like Dennett - believers in "Strong AI", also known, for obvious reasons, as zombies - are sure that consciousness will turn out to be no more than the sum of meaningless mathematical manipulations of information.

Dennett defends these views with enormous clarity, force and charm. Nonetheless, he seems to have been drawing back in recent years from the particularly strong position with which he made his name. Though he still says that, in principle, any chunk of silicon (or anything else) that can perform the same functions as a brain would by definition be conscious, he now admits that some of these functions seem much more specialised than they did even ten years ago. The more we know about the human brain, the less likely it seems that we can ever reproduce anything like it artificially. Among those who still believe it can be done, the methods which were fashionable when Dennett first started work are long forgotten. Danny Hillis, for example - the founder of Thinking Machines, one of the first successful parallel processing companies, and now vice-president of R&D for Disney - believes that the only way to get a machine complicated enough to have a possibility of consciousness is to breed it:

"Imagine something like the Internet, multiplied times 100, and imagine all the machines on it exchanging programs, and imagine using those programs to design a system which would run not on one machine but on the whole network - then I think you have the image of something that might be complicated enough to be conscious," Hillis told the conference. "Once you do that, it becomes easier to accept the idea of something like a conscious machine. I think people who have a strong intuition that machines can't be conscious have that feeling not because they overestimate the wonders of consciousness but because they underestimate the powers of machinery. A lot of arguments against machines thinking are made from exaggeration and distortion."

Hillis's foremost target is Sir Roger Penrose, a brilliant Oxford mathematician who is partly responsible for the conference's being in Tucson. Penrose believes that consciousness is fundamentally not like the operations of a computer, and cannot be recreated by means of those operations. Rather, he believes it has to be tied to a new set of natural laws - laws that combine the apparent randomness of the quantum world with Einstein's general theory of relativity, and that would explain the fundamental sympathy for mathematical form which Penrose, among other things a gifted geometer, sees in the human mind.

After putting this set of ideas into a demanding, beautifully written and, say the strong AI types, deeply wrong-headed book - The Emperor's New Mind - Penrose came into contact with another non-traditional consciousness researcher, Stuart Hameroff. Hameroff's fascination with consciousness comes from his professional need to extinguish it; he is an anaesthesiologist at the University of Arizona's hospital. And he's fascinated by the fact that consciousness, whatever it may be, can be removed and restored with a subtle mix of simple gases.

Hameroff came up with the idea that microtubules, tiny pipes that stiffen the insides of cells, were crucial to the whole story. Penrose got interested and the two began to collaborate. The first Tucson conference, two years ago, was one of the fruits of that collaboration.

The proteins that form the walls of the microtubules can flip between two different shapes. They can also exist, for an instant or two, in a state of quantum superposition, like Schrödinger's cat; it is the flickering through that third state that Penrose and Hameroff now believe constitutes conscious events. Their theory is resonant, especially when Hameroff explains that psychedelic drugs promote superposition in the microtubules, or when he adds that the consciousness event is a blister in space-time, and so must be quickly reduced to normal physics. "If not reduced, a blister in space-time would shear off into multiple universes - and we hate it when that happens." If ambition were all, the Penrose-Hameroff theory and its implication that consciousness is built into the very fabric of nature would score highly. However, most people reckon its chances of success are small enough to slip between a couple of electron shells. And even if the microtubules function as they are supposed to in this theory, the difficulty pointed out by Patricia Churchland, a philosopher, remains: that there is no clear account of how these quantum events should cause consciousness. The hard problem remains untouched.

Cogito Ergo Something

You don't have to believe Penrose and Hameroff to disagree with the strong AI view that a machine could be conscious. When Hillis put the question to his audience, much of it professed itself agnostic; the 60% who did express a view were pretty evenly split between pros and antis. But it has to be said that the antis had all the best tunes. Literally so, in Jaron Lanier's case. He opened the proceedings with a self-composed piano piece that started off strange and brilliant, and as it went on became progressively less strange. This formed a counterpoint to his reasoning, which was unflaggingly brilliant but grew stranger and stranger as he argued that talk of machine consciousness veers between the futile and the really dangerous.

As well as being a musician, Lanier is a code god: he invented the term "virtual reality", and then built the gadgetry to make it real. He has a towering physical presence, being well over six feet tall and about four feet round the waist, with a great shock of maize-coloured dreadlocks reaching almost to his waist and a beard that gives his face a perfectly triangular jaw like an Egyptian painting's. His eyes are a bright, pale blue.

He's a genius, of course, and knows it in the same sort of way he knows he's tall. Talking to him, I felt as if I was talking to the spirit of the prairies: a restless, inexhaustible wind running on forever. America, he says, loves frontiers, and always peoples them with abstractions, like freedom, or computing. But it is all a mistake. The abstractions do not exist; the frontiers are only a trick of perspective.

First he disposes of the Turing test. Only a fucked-up gay Englishman being tortured with hormone injections could possibly have supposed that consciousness was some kind of social exam you had to pass, he says. There are two logically possible ways for a machine to pass the Turing test. Either it gets smarter, or we get dumber. He is in no doubt that the likelier reason would be that we have got stupider and adapted our reasoning to the machine's expectations.

Then he demolishes the idea of computation. It is an arbitrary process, he says. It is not built into the structure of the universe, but only into the way we understand it. Anything can be measured in numbers, and any set of numbers can constitute a program for some thinkable computer. Even a meteor shower could be read as numbers, and so be read as a program on some computer somewhere. Would we therefore say that the meteor shower was computing, or conscious?

Then he has a go at the physical reality of the universe. We were sitting outside as he said this, facing each other across a concrete table which was one of the most solid objects you could hope to find. But, he insisted, on the level of quantum electrodynamics (QED), the most precisely confirmed of all physical theories, it turns out to be an illusion.

"I hate to tell you this, but QED does not acknowledge the existence of gross objects. From the point of view of physics, there are particles in positions, but no gross objects; the particles don't constitute a table." He gestures at the stone table where we sit in the dusty evening, scents of sage-brush and diesel borne round us on a gentle wind. "Gross objects aren't needed by physics. They're actually extraneous. Ask a neutrino."

I feel as if the prairie wind has whisked me off to Oz. "This is genuinely hard stuff," he says. "Hard to think about and hard to talk about." Then he returns to the more mundane objections to a mechanistic view of consciousness. It narrows our vision of the world. The danger of believing that consciousness is computation is not that it causes us to misunderstand machines, but that it makes us misunderstand our own nature.

Lanier places the hopes that some people have for consciousness research firmly in the American tradition of faith in some apocalyptic fix to all life's problems. Indeed, the programmatic atheism of the main conference was thrown into high relief by the simple faith of some of the attendees that technology could perform all the functions once expected of God. This irrational longing gives the field of consciousness research a lot of its excitement, yet it is only the very naive who will admit to it.

I had supper one night with a man who had made enough money in mainframes never to need to work again. He could travel the world gratifying his curiosity - and the miracle he expected from consciousness research was immortality. Humans, he felt, should be migrated onto better hardware when the old stuff wears out. This is the natural development of the classic Dennett thought experiment where you replace each part of my brain by a bit of silicon, thought by byte, until I wake up one day to find myself running on entirely different hardware, as if I, a mere human, were as clever and well designed as System 7.5.

Love and DNA

That was far from the strangest thing I heard, even if it was one of the more poignant. There were over 500 papers delivered at Tucson, and many of them were deliciously cranky. One Californian paper displayed in a poster session, where anyone could pin their ideas up on a notice-board, claimed that "Pioneering scientific research has demonstrated the experience of love to facilitate coherent harmonic patterns in the EKG, to improve both immune function and hormonal regulation, and enable focused attention to modify the conformation of human DNA. Love has also been shown to be causal in modulating the structure of water to provide a matrix for information storage and transformation."

If love doesn't suit you as an explanation, try evolution. Physiologists and philosophers argue about what consciousness is, but there is also a raft of questions about why it is, what function it might fulfil. Here the analytical tools come easily to hand. If the question is how did a biological phenomenon come about, the answer has to include evolution. Consciousness needs to pay its evolutionary way.

It might seem obvious that an animal that deals with the world in an intelligent, coordinated fashion has an advantage. But that does not translate into a need for consciousness. Most of the intelligent, coordinated decisions crucial for an animal's survival are made far faster than the conscious mind can process things. Jeffrey Gray of the Institute of Psychiatry in Denmark Hill, South London, serves up an example from Centre Court. In the time that it takes a tennis player at Wimbledon to become consciously aware that his opponent's serve has crossed the net towards him, he must already have struck his own return ball if he is to stay in the game. Even quite high-level trains of action, like commuting by car, or talking over breakfast with your spouse, can be performed without any conscious input. So why should consciousness be selected by evolution?

Gray's answer is that consciousness works rather like a newspaper: it does not report everything that happens, only the things that are unexpected and unpredictable. In a loop that takes about half a second to complete, he says, he believes "the contents of consciousness perform a monitoring function to check whether the actual and predicted states of the perceptual world match or not." And this continuous activity is where we live.

The other function widely ascribed to consciousness is that of a global workspace. Bernard Baars, a sturdy, bearded Californian, tanned like a hazelnut, claims that consciousness takes place in an area of the brain accessible to all the separate sub-processes of our mind, any one of which may suddenly come to dominate the public arena. The technical details, as with Gray's theory, are hugely complicated. But all these theories, along with ten or twenty others, are still attempts at the easy problems.

The fact that consciousness is being approached as a field of scientific study will inevitably change the way that people think about themselves, irrespective of the hard scientific results. The science provides new metaphors. They take on a life of their own. The consequences of people thinking of themselves as soft machines may matter more, in some ways, than the question of whether or not they are soft machines.

Take free will, as Colin Blakemore discussed in Tucson. Blakemore, an English perceptual physiologist whose experiments on the eyesight of new-born kittens have made him something of a cause célèbre among animal-rights campaigners, argues that free will must be an illusion. What happens in the mind is conditioned by what happens in the brain; what happens in the brain is determined by physical laws. Therefore, if you know the state of the brain at any one moment, you can predict its next state, and its next, and so on. If only we knew enough, we could see that all our states of mind are equally conditioned by the brain's physical, law-bound realities.

This position looks like science. If it doesn't feel like the truth, then consider a fact revealed by related psychological research: the brain is great at self-deception. We are constantly aware of what is going on in our heads, and much of what we are aware of is false. The process of turning the blooming, buzzing confusion of the world into a coherent story involves a tremendous amount of suppression, distortion and illusion. For instance, there is a well-known experiment in which the subject is wired up to a machine that monitors the brain, given a button to push and told to push it whenever he feels like it - and to say whenever he feels the urge to push the button. That's all. It would seem an untrammelled exercise of free will. Yet the experimenters, watching, know when the subject is going to push the button before he himself is aware that he is about to, or wants to; a particular burst of electrical activity appears in his brain half a second before he realises that he wants to press the button.

Because of this and similar effects, Blakemore proposes that the distinction between voluntary and involuntary acts is illusory - no more than "folk psychology", to use the jargon sneer. This is a view with potentially enormous consequences. David Hodgson, an Australian judge who has taken a keen interest in the problems of consciousness research, points out that in his world, the distinction between voluntary and involuntary acts is essential.

"I think this raises a really hard question for people who want to do away with folk psychology of voluntary action, belief and intention, and replace it with the concepts of neuroscience, which have a reputable place in the scientific world of cause and effect," Hodgson told the conference. "What do they see as replacing the consent of the woman as the crucial factor in determining whether an act of sexual intercourse is lawful or non-lawful? And if they can't give a non-evasive answer to that question, then I think they should reconsider their programme."

Perhaps the answer could be supplied by the Secret Policeman's Brain Scanner. In the best traditions of philosophy and computer science, this is a device the efficiency of which is unimpaired by the fact that it hasn't actually been built; I invented it half way through the conference. But it is already a very useful tool for examining consciousness researchers. It consists of a small, portable headset full of clever little scanning widgets, and a screen that the experimenter can study. It gives a complete readout of the state of the brain at any one moment, just like a hardware debugger can in silicon. In short, on a personal level, it answers all Chalmers's easy questions. To some, like Dennett and, it turns out, Blakemore, the scanner thus answers everything, providing the policeman with better knowledge of the subject than the subject himself has access to. David Chalmers sees it as less clear cut, showing "the broad outline of what's going on and the likely behavioural dispositions."

Neuropsychologists, though, turned out to be less sure that the secret policeman's brain scanner would be useful. There are two main reasons for the scepticism. The first is that the functional unit of the brain seems not to be the neuron but, instead, networks of neurons, and that these networks are constantly shifting coalitions, with no more stability or definition than a cloud of fireflies. What is more, each neuron can be a member of several different firefly clouds at once, playing different roles in each. Quite precise regions of the brain can be pinned down as essential to certain functions - usually after a small localised stroke has stopped the functioning. But memories and ideas appear to be distributed all over the cortex, linked only by associations that are different in each individual brain.

Douglas Watt, a neuropsychologist from Quincy, Massachusetts, says that the whole project of the secret policeman's scanner is doomed because our brains are just too complex, too individual. There would be no useful way to interpret the data. Each brain is unique; each has its own genes, its own history, its own consciousness. Even the simplest networks, such as the one in our speech area that recognises a single syllable, may be located a couple of million neurons away in your brain from where it is in mine. Also, the exact location of this syllable switch - not only which neurons are involved but also which other neurons they talk to - will shift over time, particularly if I learn other languages and distinguish more carefully between phonemes. This means that, to be able to use the secret policeman's scanner at a given time, you might have to have been running it throughout the scannee's life. The meaning of a snapshot image would have to be derived from its history; consciousness can only be understood as a narrative.

One does not need a human brain to establish this; the classic experiment uses rats. They were taught to recognise a particular smell, and a wonderfully subtle arrangement of electrodes detected the exact area of the olfactory bulb in their brains which twinkled into life when they did so. So far so good for the secret policeman's scanner; we know that when these particular neurons go off, the rat is conscious of a particular smell. But the next stage lifted the experiment into the realms of genius.

The brain-mapped rats were trained to associate this smell with some other stimulus - a flashing light, or a buzzer, or another of the usual forms of entertainment laid on for laboratory rats. After that training, the rats were once more exposed to the same smell, without the additional stimulus - and this time an entirely different group of neurons fired off. Even to a rat, it seems, there is no such thing as a pure sensation. Everything is stored, and experienced, in relation to other sensations and to emotions. The story of consciousness is one of feelings.

"A recent project at Yale found there was nearly no such thing as an emotionally neutral word." says Watt. "Orienting information supplied by low-level emotion is essential for working memory, and working memory is essential for consciousness. We have two cultural icons, Data and Spock, who are clearly sentient beings, and show better purpose than most of us, and yet claim to have no emotion. I think this is just impossible and these characters are just contradictions in terms."

One-Handed Clapping

If consciousness is built on feeling, then feelings about feelings may prove particularly fruitful in its study. One way to get access to them, as to so many things, is to look at what happens when things are disrupted; in the brain, this usually means a stroke. Small strokes can explode in the brain like smart bombs, taking out one function, but no more than that. A blood vessel bursting here will rob you of the power of speech; another in a slightly different place will leave you able to speak and hear, but not to understand, language; another will produce "blindsight", a condition in which the patient can actually see things but does not know that he can, and so considers himself blind.

Vilayanur Ramachandran, of the University of California, San Diego, told the Tucson conference about a particularly odd class of stroke effects. His patients are women who have not only been paralysed down one side by a stroke, but have also been robbed by the calamity of the knowledge that this has happened to them.

If someone "normally" paralysed is asked to pick up a tray of drinks, he will use his one good hand to pick it up from the middle. If one of Dr Ramachandran's patients is asked to do so, she will grasp one side of the tray as if her left hand was grasping the other, and lift confidently. "Oh, how clumsy I am" she will exclaim when she spills the tray's contents everywhere. His patients simply cannot see that one of their hands is not taking part in the process. They are lucid in all other respects: they are able to tell him when and where they had a stroke, but simply unable to admit even to themselves that this stroke has paralysed them.

He describes one patient who was convinced that her left hand, which could not move at all, was touching his nose. "I couldn't resist the temptation ... I said, 'Mrs B, can you clap?' She said, 'Of course I can clap.' I said, 'Clap!' She went" (he moves his right hand in a lurching motion through the air to the point where it would have met the left hand). "This has profound philosophical implications," he continues, as laughter ripples round the conference hall, "because it answers the age-old Zen master's riddle - 'what is the sound of one hand clapping?' You need a damaged brain to answer this question."

Dr Ramachandran follows his strange findings into unpopular waters. For decades now, nothing could have been more unfashionable in serious academic psychology than Freud. Yet what Ramachandran sees reminds him inescapably of Freudian theories of denial, repression and other defence mechanisms. He believes that the pattern of denial his stroke patients exhibit points to the mind's continuous struggle to produce a coherent picture of the world, and to prefer coherence to accuracy - a very Freudian notion.

In Ramachandran's view, the struggle is between the brain's hemispheres. When isolated facts are reported which might upset the mind's currently held view of the world, the reaction of the left hemisphere is to ignore them. Most of the time, this will be the correct response; sensory systems are not perfect. But the right hemisphere carries out the occasional reality check, just to be sure, and if it thinks something's awry, it gets together with the left hemisphere and, quite literally, changes the mind. In stroke patients who cannot recognise their condition this mechanism stops working. The right hemisphere messages never get through and then, he says, "There is no limit to the delusions that the left hemisphere will engage in."

The condition is not permanent. Though it will reassert itself, it can be dissipated for a few moments by squirting ice-cold water into the ear on the unparalysed side. The effect is easy to miss, because if you squirt cold water into the wrong ear, as Dr Ramachandran did the first time he tried it, you are left with a patient who is confused, and angry that anyone should have squirted cold water without warning or reason into her ear, but still unaware that she is paralysed. But if the water is squirted into the ear of the damaged hemisphere the patient experiences a period of confusion and then about ten minutes when she knows perfectly well that she has been paralysed - cannot imagine not knowing this, in fact. Six hours later, she will have forgotten the whole episode, and once more be convinced that everything is working properly.

Making a Soul

I know how they feel. The Tucson conference was a series of strange perceptions and iced water in the ear, of epiphanies that slide away, never quite recorded.

On that night in the Empire Bar, after Bruce Mangan and David Chalmers had finished their sparring, Patrick Wilken, one of the spectators, turned to me. He is executive editor of Psyche, the most severe of the three journals of consciousness research to have appeared in the last five years. He also moderates the Psyche-D mailing list, one of the few places on the Internet where real work gets done in public. "You realise what all this is about," he said. "We're trying to invent a new soul."

Over the days that statement grew on me. Instead of deep sleep, I could only dream. Instead of dreaming, I would wake up. As I lay in my hotel room in the thin desert dawn, I couldn't see the sky over the mountains, blue and purple like the flank of a rainbow trout. I could only see little pack trains of cholinergic and aminergic neurons trudging up and down my brain, from its roots to its thoughts. One morning I woke up like this but with a jingle in my head, too: "There's a fire that calls me miracle; that will not cause me chemical" - the phrase repeated over and over, up and down a scale like the trudging neurons. I was eavesdropping on my brain trying to make sense of itself. Whoever I was.

This exalted, jangling state went on until I began my journey home. Outside Dallas, a razor-cut horizon bled fresh dawn into a new hotel room. And after a night of sleep I woke to find that Patrick Wilken's new soul finally made sense. It was not, as I had thought, the soul of a machine we were trying to make; it was our own souls we were inventing here. I fell back into a sleep rich with dreams.

Andrew Brown ( [email protected]) is a brain in a vat. He believes himself to be the religious affairs correspondent for The Independent.

This article was originally published in August 1996 UK edition of Wired Magazine.

Copyright 1996 Wired Ventures - [email protected]
Wired UK, Shand House, 14-20 Shand Street, London SE1 2ES, UK
vox: +44 (0) 171 775 3446; fax: +44 (0) 171 775 3401