Another ask-an-expert piece (see also my Hobbit-based article about dragons). This is an interview with Dr Sean Holden of Cambridge University, about the current state of artificial intelligence, and whether the events of Wally Pfister’s Transcendence are at all feasible. The subject is also relevant to Spike Jonze’s Her, which is a better film. Originally published on the Empire website.
Wally Pfister’s Transcendence imagines a world where a human brain can be uploaded into a machine. We wondered just how far-fetched the film’s science fiction really is, and asked Dr Sean Holden, lecturer in Computer Science at the Cambridge University Computer Laboratory and an artificial intelligence (AI) expert, to talk us through the current state of the art…
Did you have any initial thoughts from the trailer?
Yeah, it looks like a good film. I’ll probably go and see it. Scientifically speaking, though, it’s highly problematic. The reality is less entertaining.
Let’s start with where AI stands right now: what’s the most impressive, cutting edge AI technology that currently exists or is being developed?
The most high-profile recently is probably IBM’s Watson. IBM have developed this system for playing Jeopardy, the American game show. I think Watson is now among the top three players in the world, and that is a pretty stunning achievement. It plays alongside human players, and the questions aren’t typed into it: they’re spoken. The level of complexity in terms of working out what’s been asked, then solving the problem part of the game and coming up with a solution [players have to deduce the question from the answer] is amazing. That’s been a really major step forward in AI.
Some other really nice applications have been some of the work on autonomous vehicles. That’s been going on for a while, and some of it is pretty extraordinary. There was a big challenge to get an autonomous vehicle to drive a very long course in the desert, and there’s now ongoing work to get one to cope with an urban environment. That’s getting towards being a workable system.
Those are good examples that correspond to an intuitive understanding of what AI should be. But the actual applications are everywhere: they’re just a lot more hidden. Companies targeting you with advertising as you search the web are using underlying AI techniques. There’s a lot of interest in things like analysis of Twitter feeds. Companies would like to be able to harvest tweets and work out whether they are positive or negative with regard to their products, for example.
But none of those systems are anything like what we’d call sentient, right?
No (laughs). There’s an extreme view that’s not really widely held in AI circles, called Strong AI, where people argue that any system that’s doing any kind of decision-making is in some sense self-aware. One of the main proponents of that idea considers a thermostat to be an AI, because it has exactly three thoughts: it’s too hot, it’s too cold and it’s just right. That’s not really a view that’s held by a lot of people…
Both Transcendence and Her hinge on AIs that learn exponentially. Is that feasible?
Machine learning has been around for a long time, and it’s a common and entirely workable technology. You will almost certainly be interacting with machine learning systems on a day-to-day basis. So yes, systems do learn, but the idea that they’re learning to learn better is much trickier. That meta level of learning is far less common, and there’s nothing that can bootstrap itself at that sort of exponential rate in order to outstrip us. Some researchers have claimed it might actually be possible in the short term, but I think it’s extremely unlikely to happen in our lifetime. It’s almost impossible to conceive of how complex a brain is. Arguing that throwing some more computer power at it is suddenly going to create super-AIs that take over the world seems to me to underestimate the complexity of the task.
What would the storage be like for a system that was that intelligent?
Let me give you an example of how complex a brain is. There’s a project at the moment where some guys are trying to image part of a mouse brain. They’re trying to construct an image stored in computer memory at a level of detail that’s good enough to actually work out where all the cells are, how they’re connected together, where the supporting material is, and so on. That’s for a mouse brain. They estimate that if they tried to scale that up to a human brain, they would need half the storage on the planet. And that’s just to store a 3D image, so you can look at it. It’s not telling you anything about how it works or helping you simulate it. It’s like looking at a picture of the inside of a computer and trying to work out how it works just from that. So it really is an astronomical task to try to come up with a genuinely computational brain.
So is the idea of uploading a human consciousness into cyberspace as sci-fi now as it was when William Gibson wrote Neuromancer thirty years ago?
There is technology at the moment that’s aimed at brain-computer interfacing, and there’s some really wonderful work trying to help people who’ve had strokes and things, who are paralysed. There have been examples where people have implanted a little sensor into the motor cortex of a paralysed person’s brain, and they’ve been able to learn to operate a robotic arm by thinking. So the foundation for getting something out of a living brain is certainly there.
But again, to read the entire state of a brain and a consciousness and then store it is a completely different ball game. Again, the complexity is so huge… People have estimated that the amount of ‘wiring’ in a human brain, just to connect the clever bits together, is about 100,000 miles. Just for a cubic millimetre of brain you’re looking at several kilometres of interconnects. It’s stunningly complicated.
You also have to bear in mind that it’s a biological system, so you’re not just reading voltages or something. You have different kinds of cell, and you have genes encoding for thousands of different proteins, and they’ll express proteins depending on what the cell has to do. Cells in your eye, for example, will express proteins that are light-sensitive, which then combine to make machinery inside the cell that helps it do what it’s supposed to do. How much of that you would have to read in order to store the state of a person isn’t really clear, but the chances are that it’s an awful lot of information. You’d need one hell of a computer! It’s not even clear how long Moore’s Law (the idea that computer power doubles every eighteen months) is going to hold out.
And is there any sort of anti-AI Luddite resistance movement against the sort of work that you do, as there is in Transcendence?
Not really. There is something called the Centre for the Study of Existential Risk at Cambridge, which is interested in looking at low-probability but potentially catastrophic events. Out-of-control AI is one of the things that they consider. But they don’t protest outside our offices or sabotage our equipment. And long may it stay that way. I don’t need that kind of aggro.