As Professor of Cognitive Robotics at Imperial College London, Murray Shanahan is an expert in artificial consciousness and a global leader in artificial intelligence (AI). His book Embodiment and the Inner Life (2010) helped inspire the Oscar-nominated screenplay for Ex Machina (2015), Alex Garland’s critically acclaimed science fiction film.
I took the opportunity to sit down with Murray for a discussion about his work on the film, and to delve further into just how far away we are from achieving true humanoid AI…
**SPOILER ALERT for those of you who have not yet seen Ex Machina!**
So Ex Machina centres on Ava, a humanoid AI. What methods are there to create this type of AI?
I guess there are two broad approaches when building AI: a hardcore engineering approach, and an approach where you try to copy biology. When we wanted to make flying machines, for example, Leonardo da Vinci had amazing drawings of machines with flapping wings mimicking biology, but it turns out it’s better to make something with fixed wings and propellers. So in trying to build human-level AI, maybe we should be aiming for the artificial; to engineer it from scratch. Frustratingly, we don’t really know how to do that, so the best alternative is to copy nature: by trying to understand exactly how the biological brain works, we can build something that conforms to similar principles.
The idea of building a brain is baffling. How could we ever recreate something so complex?
You could try to replicate a specific brain, say… the brain of a particular mouse. You could imagine having technology which enables you to scan that brain in exquisite detail so you know exactly which neurons are where, and how they’re connected… You can then imagine trying to recreate that brain in a computer and simulate all of those neurons, so you wouldn’t have to understand the thing as a whole… how the intelligence of this mouse emerged from the interplay of these individual components. And you’d be cheating, really. It would be a brute force way of reproducing something that behaved like that mouse.
Even then, there must be processes going on in the brain that just studying the structure would miss? Subtle chemical reactions, perhaps?
Well, I’m not sure we know the answer to that yet. But there’s a procedure called deep hypothermic circulatory arrest, where a person’s blood is cooled to the point where there’s no cerebral activity left. The person can then be revived by recirculating blood back around the brain, resulting in the return of cerebral activity. This suggests you don’t need ongoing electrical activity; you can stop it completely, like a computer, and then restart it and it’s got all the memories there. But you’ve got complex electro-chemical reactions going on, and whether you can extract all the necessary properties from those visual images…? I’m not totally sure.
So, we know that your theories of embodiment helped to inspire Ex Machina. Could you explain what you mean by ‘embodiment’?
What I mean by being embodied is having a body… so just the surroundings I have now. I’m in the centre of this collection of things and I can interact with them in all kinds of ways. Here’s my mug and there’s your microphone, and I can put your microphone in my cup (he doesn’t, fortunately). So embodiment is having a spatial location which is the centre of our sensorimotor loop, centred on the body… that’s really what the brain has evolved for: to enable us to move around in this world and interact with complex objects which themselves have complex dynamics.
So what is an embodied AI?
It’s basically a robot. We’re embodied because we have these things with four limbs and the head and the eyes and all our sensory apparatus. So if we make something that has a body in the same sense – a robot – then that’s embodied as well. By contrast, Siri, Cortana, or Samantha in the film Her (2013) are disembodied. The AI lives in the computer and doesn’t interact with the world by moving in it or manipulating the things in it. So Ava is an embodied AI, and Samantha is a disembodied AI.
Is there any benefit to creating a human-shaped AI?
Well, I don’t think it’s necessary to have a human-shaped body to make an AI with human-level intelligence. But if we want to make something that interacts with us, then of course we’re going to want to give it human form. And, practically speaking, if a robot has human form, it finds it easier to interact with the things in our world that we have made for other human-shaped bodies. But setting aside science fiction, where humanoid robots make for great storytelling, there are plenty of reasons why we might want to make robots that are not remotely shaped like humans – if you want something to go under the sea and clean up pipes, or go into a nuclear power station, for example.
How about a driverless car?
A driverless car is a great example because you don’t actually have a humanoid robot sitting on a seat holding the steering wheel. There’s no steering wheel and there’s no driver’s seat, but the car itself is the robot. It’s a robot because it’s situated in space… it moves around in space, it doesn’t interact with objects in quite the same way that we do because it can’t pick things up, but it nevertheless is like a robot and it has senses for working out what its surroundings are. So it is an embodied AI but not remotely human shaped. It’s car-shaped.
So how did you come to be involved with Ex Machina?
When Alex [Garland] first contacted me, he already had his complete script, but it was influenced to some extent by a book I wrote called Embodiment and the Inner Life, which he read whilst he was writing. The way he put it was that his ideas “crystallised” after he read it. So he asked to meet up, and he gave me the script, wanting to know whether it hung together for somebody working in the field. And it did! It was very convincing. It read like a novel, and was quite a page turner.
Are any aspects of Ava completely unfeasible?
Well, we have no idea yet how to build a robot body that has such fluid movements and is able to interact with objects in such a natural way. But I think of that more as an engineering problem. Robots are improving all the time in terms of their physical abilities, so in terms of what’s done technically, I don’t think there’s anything infeasible, though it may be decades before we get to that point.
Ava’s intelligence is based on data collected from ‘BlueBook’, a vast global search engine, described as ‘a map of how people think’. Could this type of data be used to create AIs?
Now we can move away from science fiction and talk about what’s going on today. Machine learning is a very hot topic in AI, and a major application is trying to understand human behaviour for the purposes of, say, advertising, which accumulates huge amounts of data from people’s activity on the internet: social media, browsing habits, etc. Alex is very astutely tuned into something that is actually occurring, which is that maybe the way to build human-level AI is to exploit that enormous quantity of data we’re putting out there all the time.
Last year, Stephen Hawking and Elon Musk were awarded the ‘Luddite Award’ by the Information Technology and Innovation Foundation for scaremongering about AI. What are your thoughts on this?
I can’t think of anything more stupid than labelling them as luddites… it’s a bit like criticising the people building nuclear power stations for saying you should put the rods in lead. Of course we don’t want meltdown! You have to think about the safety issues. I think unfortunately the media picked up some soundbites and turned them into an alarmist story with lots of pictures of the Terminator. The point that [Hawking and Musk] are making, however, is that we should ensure that as we make more powerful AIs, they are safe and beneficial. I absolutely love science fiction! It inspired me to do what I’m doing and it asks many really important questions. But it’s still science fiction. I love the Terminator movies, but that’s only because they’re very good films. I think I like them less now because I’m so sick of seeing Terminator images…
Did Ex Machina do a good job of exploring the dangers of AI?
I love Ex Machina so I’m terribly biased! The genius of it is the degree of ambiguity at the end. If we immerse ourselves and suspend disbelief in that fictional world, Nathan made a mistake, and the AI is evil. But from Ava’s ‘perspective’, she’s imprisoned with the prospect of being murdered – so she escapes. What else can she do? In this way I think it empowers the viewer to think rather than imposing opinions onto them. If you can go to the pub having seen the film, then spend the rest of the evening arguing with each other about whether the AI is or isn’t conscious, I think that’s a mark of a great film.
Hilary Lamb is studying for an MSc in Science Communication.