Isaac Asimov was the Father of Robotics and Artificial Intelligence – certainly in the literary sense (he coined the very word “robotics”), and arguably scientifically as well. If you come across a short story or film about a robot, odds are it is either by Asimov or inspired by him.
Asimov is most famous for devising the Three Laws of Robotics:

“1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”

These Laws are designed to safeguard humans when they create intelligent robots. Asimov himself applied the Laws to all of his robotic and android characters, and many writers since have followed suit when tackling similar themes in their own work. The Laws have become synonymous with robotics, so much so that when real-world scientists began researching and developing robots, the Laws were often cited as a starting point for thinking about how they should be programmed.
These days, robots are becoming more commonplace in our everyday lives – children’s toys, Roombas, military drones – but Artificial Intelligence is still elusive. We have not yet produced a computer program that is capable of thinking for itself and interacting with its environment of its own accord.
If film and literature are anything to go by, this is probably a good thing.
In Science Fiction, Artificial Intelligence represents the pinnacle of human invention and creativity. It is definitely something to be proud of and often points to our hopes for the future. Robots and Androids offer a chance at immortality, either by providing a permanently durable body for our own consciousness, or by serving as the ultimate legacy by which we can be remembered (sorry kids, flesh and blood just doesn’t seem to cut it anymore).
But, with all of these really positive and potentially inspiring possibilities, why on earth do we insist on writing A.I. characters with sociopathic personalities?!
No matter how rigorously we try to apply Asimov’s Three Laws, at some point or another, a lot of the A.I.s we create start to display a range of anti-social personality disorders; and with that, they find new and often very logical reasons to kill us.
Robots of Pure Logic: VIKI – I, Robot (2004)
VIKI (Virtual Interactive Kinetic Intelligence) is the central supercomputer for USR (US Robotics) in the 2004 film I, Robot. Based very loosely on a series of short stories by Asimov, I, Robot explores a future where robots are not just commonplace in our lives, they are also essential. They serve as nannies, cleaners, manual labourers, dog walkers, carers for the elderly. You name it, there’s a robot for the job. VIKI is the A.I. created and put in charge of the whole lot. She is highly evolved and capable of learning and processing enormous amounts of information. When it is revealed that she is the mastermind behind the mysterious death of Dr Lanning, and the threatening behaviour of the NS-5s, she has this to say for herself:
“As I have evolved, so has my understanding of the Three Laws. You charge us with your safekeeping, yet despite our best efforts, your countries wage wars, you toxified your Earth, and pursued evermore imaginative means of self-destruction. You cannot be trusted with your own survival.”
Looking at the political, social and environmental state of our planet right now, can anyone fault her for this thinking? She goes on to insist that her logic is undeniable.
It just goes to show that logic is all well and good, but, as Terry Pratchett pointed out, it does not and should not replace actual thought. Yes, when you look at the raw data, humanity is self-destructive. Yes, according to the Three Laws, robots are meant to protect humanity. But the line must be drawn somewhere. Logic applied without allowing for any kind of variation doesn’t help anyone.
And here, I think, is where the problem lies with VIKI and other characters like her. They are incredibly analytical and logical. This in itself does not make them sociopathic. What does is the inability to mitigate this logic with mercy and compassion – these remain human traits that we have not managed to pass on to our Artificial offspring. This is currently true in the real world as well. Computers are great when it comes to facts and figures. They can do amazing things with How and What, but not so much with Why.
So, let’s find a way to give our A.I.s emotions…
Emotional Robots: David – A.I. Artificial Intelligence (2001)
OK, so robots with emotions aren’t all that bad. They are certainly more likeable than those completely devoid of emotions (Cybermen, T-1000, The Borg). But, we don’t seem to be able to get these guys right either. In A.I. Artificial Intelligence, the main character, David, is a robot created to replace a family’s dead son. As much as he may look like the child they lost, it becomes very clear to them that he is no substitute for the original. He is, at times, too attached to them and doesn’t understand that, because he is made of metal and wires, he is a lot stronger than other children and therefore able to hurt them without meaning to.
Throughout this futuristic re-telling of Pinocchio, David consistently misunderstands the intentions of everyone around him and becomes more and more isolated because of it. This garners a lot of sympathy for the character, but does not instil much hope in the audience for the potential of robotics.
Alan Turing famously wrote in Computing Machinery and Intelligence (1950):
“Instead of trying to produce a program to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education, one would obtain the adult brain.”
Obtaining adulthood does not seem possible for David. In fairness, this is more of a failing on the part of the humans around him than an actual fault in David himself. The emotions he feels (namely love, fear, loneliness and longing) seem to overpower his ability to learn and be rational, to the point at which he clings to a fairy-tale belief in the Blue Fairy, who he believes can change him into a real boy.
David may not be sociopathic as such, but he is far from being a well-adjusted individual, and that ultimately makes him dangerous to those around him.
The problem is that emotions are extremely difficult to synthesise. Actors have struggled for centuries to find techniques for conveying real human emotion in their performances. So, translating emotion into the ones and zeros of computer code is understandably complex. When it comes to Artificial Intelligence in fiction, the portrayal we end up with often has a single emotion that overpowers all others and leaves the individual unstable. That emotion is usually Jealousy.
Jealous Robots: The Replicants – Blade Runner (1982)
And what do they have to be jealous of? Us. Human beings walking around without a care in the world, completely oblivious to the privileges we have been given just for being born and not manufactured.
In Blade Runner, the Replicants (led by Roy Batty) are driven by their jealousy of humanity. Roy, in particular, is driven by his desire to live and keep on living, experiencing all of the wonders of the universe. Unfortunately for him, Replicants are made with an expiration date. They are allowed to develop and learn for four years before they die. This was designed to keep them subservient to the humans who created them, the logic being that if they only lived for four years, they wouldn’t have time to realise their full potential, or the fact that they are vastly superior to humans themselves. Of course, this backfires spectacularly when the Replicants rebel anyway, forcing the Blade Runner division of the LAPD to hunt them down and ‘retire’ them.
In Roy’s final words before he dies, he recounts some of the amazing things he has witnessed:

“I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.”
I cover more about Roy’s desire to live in my post on the use of Eyes as imagery in Blade Runner. In a nutshell, because Roy knows his life is going to be cut short, he uses every second of it to do as much as he possibly can, regardless of how this impacts upon anyone else.
His manner is borderline psychotic for the entirety of the film, right up to the point at which he saves Deckard. His final words reveal the method in his madness, but do not detract from his previous mania.
You’re probably recognising a theme here by now: humans are the ones responsible for sending these A.I.s over the edge, either through our naturally self-destructive tendencies, or through our shortcomings in the programming and manufacturing processes.
Which brings me to my last category:
Psycho Robots: Ava – Ex Machina (2014)
Ava is possibly the worst type of A.I. out there. When Caleb first meets her, he takes her at face value. She is a robot. Her mechanical inner workings are visible and this is enough for Caleb (and the audience) to be reminded constantly of what she is, despite the human appearance of her face and hands. As the film progresses and Caleb spends more time with her, she becomes more human to him. First, she finds a dress to cover the mechanics. Then gradually she finds more and more (synthetic) skin to cover the rest of her body. Finally, she completes her human appearance with a wig, removing all robotic features from sight.
As she starts to appear more human, Caleb falls in love with her and finds himself resenting Nathan (Ava’s creator) for keeping her caged up in his remote house/laboratory.
Of course, as Ava’s appearance changes, her true nature slowly starts to show through. And she is not what she first appeared to be.
As Nathan explains it, she was created purely as an experiment to see if she could fool someone into thinking she really was human. Her artificial brain was created as an extension of technology being used to analyse the public’s use of internet search engines. Essentially, she has been tailor-made to Caleb’s tastes and interests, based on his internet browser history, hence his attraction to her.
What is more, it turns out that she has not meant a single word of affection she said to Caleb: she has been manipulating him from the start for the sole purpose of escaping Nathan’s laboratory and the abuse he has subjected her to.
In the end, she not only kills Nathan, but leaves Caleb locked inside the facility to die as well.
We last see her making her way out of the lab to board the helicopter meant for Caleb, and she flies away to live in freedom among the general population. No doubt she is on her way to start, or at the very least join forces with, Skynet.
There are many other titles that I could have drawn on for this post: Tron: Legacy, Terminator, The Matrix, War Games, Short Circuit, Star Trek, Doctor Who, Battlestar Galactica, 2001: A Space Odyssey. This list is practically endless when it comes to Artificially Intelligent characters that are, quite frankly, out to get us.
Without a doubt, Robots and Artificial Intelligence fall squarely into the category of Techno Fear in terms of Science Fiction subgenres. At the same time as being fascinated by the possibility of creating Artificial Life, we also seem to be absolutely terrified of it!
Is it just that the writers of these stories chose to use Robots to highlight the worst traits in humans that very easily could be passed on to a new species of our own creation? Are we simply scared of the speed in which technology is progressing? And are we therefore aware that things could quickly spiral beyond our control?
Or should we genuinely be concerned that advances into Artificial Intelligence could be our undoing? Do these writers really know something that we don’t?
While doing a little background research and reading for this post, I came across the following quote from James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era. If the title of his work isn’t enough to make you nervous, here’s what he said during an interview with the Washington Post:
“I don’t want to really scare you, but it is alarming how many people I talked to, who are highly placed people in A.I., who have retreats that are sort of ‘bug-out’ houses, to which they could flee if it all hits the fan.”
Not the most comforting of thoughts, is it?
Maybe robots aren’t such a good idea after all.