Finding a Soul in the Digital Brain

There’s a wonderful moment toward the beginning of Aliens, as Ripley and the crew travel to planet LV-426 on a rescue mission. One of the crew members, Bishop, is serving dinner when one of the marines asks him to do his trick. Reluctantly, Bishop pulls a knife and lays his hand on the table, proceeding to stab the spaces between his fingers one by one. He speeds up until he’s going almost impossibly fast. When the trick is over the marines cheer for him, but he’s too shy to stick around. As he sits next to Ripley, he sees that he’s cut his finger; white ooze pours from the wound, and we realize that Bishop is a synthetic android – or, as he prefers, an “artificial person”. Ripley (who has a bad history with androids) gets angry and tells him to keep away from her. Bishop is visibly hurt, and he tries to explain that he’s not capable of harming her. Yet for all the emotional sincerity in this scene, it’s the sliced finger that makes Bishop so real, for one key reason – it’s an error no computer would make. Even basic computers today can perform billions of operations per second, and could calculate distances to the nanometer faster than the human brain can imagine. Bishop’s miscalculation isn’t a computer error – his reaction time is simply limited by the fact that his brain is working independently. Bishop’s CPU isn’t running code – it’s actually thinking.

Of course, Bishop is only a fictional character. But he represents something that will become very important over the next few decades – the transition from artificial intelligence to actual intelligence in an artificial brain. The four androids in the Alien films – David, Ash, Bishop, and Call – are incredibly realistic models for how I expect things will go.

Modern computers run on binary code, where everything is written as a sequence of 0s and 1s. We use binary because, at their most basic level, computers store information in “bits” on transistors, and each transistor works like a switch that can only be in one of two states: on or off (1 or 0). Computers have billions of these switches. Binary can be written more compactly as hexadecimal, or interpreted as ASCII text – for example, my name in binary code is 01000010011100100110000101101110011001000110111101101110. As you can see, it takes a lot of switches to store information. If my name takes up that many bits, imagine storing several gigabytes of files. To put this in perspective, my camera has a memory card that’s about one square inch. That little chip has 64 billion transistors on it. My laptop has 4 trillion. As impossible as it seems, these transistors are now only a few dozen atoms across – so small and so close together that the human eye couldn’t come close to seeing them. And roughly every 18 months, we’re able to double the number of transistors we can fit on a chip (a trend known as Moore’s Law). Because everything is addressed in binary, storage naturally comes in powers of 2 – you’ll notice that flash drives and iPods come in sizes of 2GB, 4GB, 8GB, 16GB, and so on.

With computer hardware more powerful and inexpensive than ever, we’ve seen equally fast advances in software engineering. My favorite field is artificial intelligence. I’ve studied it extensively, and it’s fascinating. But as realistic and friendly as things like Siri can seem, computers cannot think for themselves – they can only execute the instructions they’re given. The goal of artificial intelligence is to design a program so convincing that a human cannot differentiate between interacting with it and interacting with another person – essentially the test Alan Turing proposed decades ago.

At the most basic level, computers are programmed with statements like “if”, “else”, “while”, or “for”. Let’s say we want to make a program that will continue only if the user enters a number between 1 and 10. In C++, it would look something like this:

#include <iostream>

int main()
{
   bool done = false;
   while (!done)
   {
      int x;
      std::cout << "Enter a number: ";
      std::cin >> x;
      if (x >= 1 && x <= 10)
         done = true;
      else
         std::cout << "Please try again.\n";
   }
   return 0;
}

This is a very simple block of code that does something very basic. A computer running it is allowed to make exactly one decision – a single yes/no response to a basic command. Of course, if I wrote a second block of code, it could make two decisions. But what if I added a hundred? Or a million? And what if I attached this code to a voice simulator with thousands of programmed emotional types? At what point would this machine become complex enough to successfully mimic human interaction? As our computers get more and more powerful, we inch ever closer to the answer.

But you’ve probably already figured out the problem – we can’t go much further than our current situation. The laws of physics prevent it. Within the next decade or so, we’ll have reached the limit – if we make transistors much smaller, electrons will start tunneling straight through them, and even the slightest charge can break down their structure. In addition, we’re trying to find new ways to work with caching and I/O delays, but we’re now about as efficient as mathematically possible, given the current hardware. In other words, our technology is so nearly perfect that nature itself won’t let us take it much further. That gives us one solution – to create totally different, more powerful kinds of technology. There are several possible paths here, but my personal favorite is quantum computing, first proposed by Richard Feynman in the early 1980s, building on the theory of computation pioneered by Alan Turing – two personal heroes of mine. Here, we trade the transistor’s two rigid states for the qubit, whose state is usually pictured as a point on a sphere called the Bloch sphere.


Now, bits are replaced with qubits. A single qubit isn’t limited to 0 or 1 – it can exist in a superposition of both, and a system of n qubits is described by 2^n complex amplitudes, which is fantastically more than the n values a classical register holds. And while a transistor can only manage two states, a qubit’s state is a continuous point on the surface of the Bloch sphere. It’s much more fluid and mathematical, and in comparison to current computing, it almost feels like real thought. For certain problems, calculations that would currently take several years could be done in moments. It’s like going from pen and paper to a calculator.

In trying to come up with a better computer, we always come back to the best model we can think of – the human brain. Theoretically, the brain has no real storage limit. And unlike computers, it can learn, adapt, and think. That’s also why it’s so slow. Your computer at home may be able to run mathematical equations in the blink of an eye, but it’s actually not as powerful as your brain – your brain just has to do a lot more. A good example of this is people with extreme cases of savant syndrome. They have incredible memories – in some cases they can read an entire book and then recite it back to you word for word. Their minds can bring back memories and run equations faster than we could imagine. But they’re often only able to do this because their brains work differently – they can’t fully process that information, they can only store and retrieve it (like a computer). They can recite every word of Shakespeare, but they can’t appreciate or enjoy it. The ability to think and feel takes an enormous share of our brain’s energy, and without that burden, their recall runs incredibly fast. Our imaginations and emotions actually take far more power, because they’re so much more complex than simple memorization.

But that power comes with the great burdens of human frailty – ignorance, hate, and the ability to be dead wrong. You can’t have truly cognitive thought without them. That’s why Bishop cutting his finger separates him from his predecessors. A machine running pre-written software couldn’t make that mistake. Bishop makes it because, like a person, he’s actually making calculations based on his surroundings, not on pre-programmed commands. He learns, he feels, and he changes.

Our brains interpret everything we sense through electrical impulses via the nervous system. The neurons in our bodies produce electrochemical fluctuations based on our interactions with the world, and the brain puts them together and formulates a reaction – and by the way, this reaction happens before we’re aware of it, sometimes by several seconds. When you think you’ve made a decision, you’re actually just becoming aware of something that has already happened in the subconscious. Scientists have been working on replicating the brain’s electrical signature for years, and they’re making amazing progress. Several years ago, researchers attached electrodes to a rat’s brain and were able to mimic the signals traveling across its neurons, telling the rat what to do. Using a remote control, they could make the rat turn left or right, and they could tell what it was going to do before it happened. Once we figure out how to build a computer that can do this on its own, I don’t see a difference between that and organic life. Everything I think, feel, and do comes from electrical impulses in my brain. If a machine can produce those same impulses, does it matter what the brain is made of?

To get to the answer, we have to get philosophical and ask a bigger question – does the “soul” exist, and if so, what is it? No honest person really has an answer to that question. Personally, I do believe in the idea of the soul, but perhaps in a more vague way than most. Traditionally, people see it as something entirely separate from the body. The body is a hunk of meat, and the soul brings it to life like a hand in a glove. When a person dies, the soul leaves the body to go somewhere else. I wouldn’t claim to know either way; but to me, the idea that our minds can exist outside of our bodies seems too far-fetched to believe. In other words, I don’t believe that I existed before my birth. When babies speak nonsense and seem to see things that we can’t, I don’t think their “veil is thin”. I simply think that they’re trying to understand basic things about the world, and that instills a sense of wonder. With that in mind, I absolutely think that a synthetic person would have a soul just like mine. No two would be alike; identically built androids would develop different personalities based on their environments. That unique personality is the very definition of the soul to me; and if a robot has it, I consider it valid.

For those who believe in a soul in the literal sense, things get a bit more complicated. If you believe that a god created you, then you’ll probably have a hard time accepting any type of synthetic life as valid. It may sound like I’ve watched too many movies, but I think that one day, android rights will be a controversial political topic. The Christian Right used to tell us that women and black people were inferior. At the moment, they’re telling us that homosexuals are inferior. When the day comes that androids can actually learn and think, you can bet that people will be divided as to whether they deserve rights. And if I’m alive when it happens, you can bet that I’ll be on the side of the synthetics. Emotions and feelings aren’t literal; they’re abstract and intangible. I only feel pain if I think I feel pain. There’s no way to quantify an emotion. It all comes back to those brain waves. And even if those same waves are running in a synthetic brain, the feelings will be every bit as sincere as our own.

All of these concepts may seem far-fetched now, because our current computers can only mimic intelligence. But as our current approach reaches its final days, the race is on to find a better way to build a computer using quantum computing and neural networks. Our computers are going to start working more and more like our own minds do, and the line between “artificial intelligence” and “synthetic life” will begin to blur. Eventually, the moment could arrive (we call it “the singularity”) when a computer is able to think on its own, without programming.

But to be honest, I have serious doubts that this moment will ever arrive. Not because we don’t have the ability – as I’ve argued, we’re quickly moving in that direction. The problem is that this technology could potentially destroy reality as we know it. By implication, if an android’s mind could be built and altered using technology, then that same technology could be applied to a human brain. If we have the power to replicate the brain’s electrochemical waves, then we’ll also have the power to alter them within ourselves. When I go outside and see the blue sky, my eyes are absorbing blue light and sending a corresponding impulse to my brain. Suppose a computer were able to capture that electrical signature and replace it with a slightly different one, telling my brain that the sky was red. The sky would still be blue, but I would see a red sky. And there’s no limit to these alterations. Everything you know about life is stored and retrieved using these brainwaves. A computer that could recognize and modify them could turn you into anything. As with all technology, this could be an amazing tool or a horrible curse, depending on how it’s used. It could be used to teach skills like in The Matrix. You could learn new languages or complex math in seconds. You could gain a PhD’s worth of knowledge in moments. You could cure mental illness, loneliness, and unhappiness. Humanity could become a supremely intelligent species. We would learn so quickly that we would innovate faster than ever. We’d cure disease and master interstellar travel in no time. We could probably stop death itself.

But were that to happen, nothing about us would be the same. With that kind of intelligence, things like love, happiness, and pleasure would be trivial and needless to us. And of course, things could also go the other way. If vast knowledge can be given, it can also be taken. Your brain could be rewired to force you to love a fascist dictator. You could be made to love being enslaved. Your mind could be erased entirely, and rebuilt to serve someone else’s interests. And you could never trust anything again. Our perceptions of reality are entirely based on activity in the brain. If that could be compromised, then reality itself would be compromised. For that reason above all, I doubt we could ever get that far. The closer we got, the more dangerous it would become. It would be like nuclear energy – potentially the greatest tool ever created, but misused to the point that we’d probably be better off without it.

Of course, we may yet find a way to create a learning machine based on entirely new concepts. Brainwaves need more than just electricity – they require physical changes in our bodies to transmit messages. The human nervous system is unique, and it’s entirely possible that we’ll never find a way to replicate axons and synapses artificially. Most likely, none of us will live long enough to find out anyway – we’re making great strides, but we’re not even close to the ultimate goal. But I like thinking about it, because at this point, it’s more of a philosophical question than a scientific one. We see lots of android characters in films, and some of them are very realistic; but if the day ever comes that we actually make it happen, it’ll be so much more than “robot sidekicks”. It’ll open all kinds of questions about morality, intelligence, reality, and what it means to be human.