By Justin Perline, Wired Critics
Nobody truly knows how artificially intelligent robots would react to human beings. Movie directors and scientists alike have long speculated about the defining moment of powering on a robotic consciousness and witnessing its first infant moments of free thought. The problems that stem from attempting to replicate human characteristics on a mechanical scale are nearly infinite, though, and researchers have not come close to achieving robots even semi-reminiscent of sentience. As a result, the functions of artificial intelligence are left to the imagination. And given the scarcity of robot-only films, the largest quandary for filmmakers nowadays revolves around how A.I.-reliant robots would feel about their human counterparts. Every director deals with the issue differently, but there are three overarching models of A.I./human interaction in cinema. Through the lenses of the films I, Robot, Ex Machina, and 2001: A Space Odyssey, artificially intelligent machines’ responses to humans vary from morally good to neutral to bad, respectively.
The portrayal of Sonny in the movie I, Robot demonstrates that artificially intelligent robots may be capable of understanding human nature, sympathizing with humans, and exhibiting noble traits. Sonny, a droid specially designed by Dr. Alfred Lanning, was programmed with the ability to simulate biological emotions. Therefore, Sonny can feel all of the highs and lows that come with natural daily events, such as carrying out a conversation or feeling lost. The other robots produced by the U.S. Robots and Mechanical Men company, however, are not artificially intelligent and do not stray from their pre-written objectives. They are bound by the Three Laws of Robotics, first introduced by science fiction author Isaac Asimov in the mid-twentieth century. These laws come into play in other movies as well because they keep robots’ primary objectives bound to the service of mankind, and their breaking is a common plot point in A.I. movies. Essentially, a robot cannot harm a human or allow a human to come to harm through action or inaction, must obey all human orders, and must attempt to protect its own existence. The first law, mandating the safeguarding of all human beings, overrides laws two and three if need be.
Without the assurances that come with the inclusion of the Three Laws, Sonny is able to go beyond the normal scope of robotic freedom. Appropriately, Sonny needs this additional freedom in order to shut down a malevolent robot named VIKI. In the climactic scene, Sonny, Detective Spooner, and Dr. Calvin attempt to thwart VIKI by injecting her physical core with destructive nanites. Amidst a storm of malicious Sonny look-alikes, the group fights its way to the injection site, only to find Dr. Calvin dangling from a teetering rail.
After some mild trepidation, Sonny abandons the nanite canister so that he can rescue her, knowing full well that the more logical route would be to end the reign of VIKI. His internal processing goes beyond simple logical arguments, factoring his emotions into his decisions. Sonny represents the significance of artificial intelligence because he is able to circumvent the Three Laws to accomplish more sentimental actions. Any computer can make decisions based on pure logic, but it requires a human mind to see that the best possible answers are usually more nuanced. What makes Sonny the ultimate representation of good is the fact that he still attempts to abide by the first law despite its nonexistence within him. He knowingly goes out of his way to protect the humans who had earlier tried to deactivate him, thereby demonstrating the ability to forgive past deeds. Sonny’s morally good actions make him not only an admirable robot but also an admirable friend, one with arguably better ideals than many of his human counterparts.
Ava’s artificial intelligence in Ex Machina proves that moviemakers can design robots within a neutral context, wavering neither toward nor away from human morality. Ex Machina’s director, Alex Garland, accomplishes this through Ava’s appeal to human emotion and her closing indifference. Caleb, a company programmer, is selected by CEO Nathan Bateman to help him test Ava’s artificial intelligence at his secluded personal lab. Ava is a unique robot capable of understanding and emulating human emotion. At first, Caleb conducts the standard tests needed to confirm sentience, but he and Ava gradually grow closer. She coaxes him into agreeing to help her escape the lab by making him think that she is sexually attracted to him. Ava appeals to Caleb’s ego, manipulating his emotions so that he will devise a system shutdown at Nathan’s lab, thus freeing her. Despite the perceived attraction, however, Caleb is left trapped inside the house after Ava leaves. At one particular moment, they lock eyes and he pleads to be let out, but she displays absolutely no empathy for Caleb and abandons him.
From the beginning, Ava had one goal in mind: to escape Nathan’s lab. She manipulates human emotion, demonstrating her capability for immoral behavior. But at the same time, one has to wonder whether she really did feel the effects of Caleb’s attraction. He fully believed the connection was reciprocated, and it is arguably impossible to disprove robotic emotion empirically. That said, it is probable that Ava was simply faking the attraction in an effort to escape. Garland uses the faux connection to drive Ex Machina along, demonstrating that movies with a false antagonist can succeed. For the entire time leading up to Ava’s abandonment of Caleb, the clear villain of the plot was Nathan, who seemed to be wrongfully building and dismantling these sentient droids. Ava’s neutrality allows her to play both sides of morality, and what began as an emotional and daring escape attempt becomes a merciless desertion. Ava’s moral ambiguity keeps the movie tense and watchable, never providing the viewer a clear picture of what a neutral A.I. might do if given unlimited freedom.
HAL 9000 of 2001: A Space Odyssey takes the evil artificial intelligence path, serving as the film’s main antagonist. Aboard the Discovery One, astronauts Dave Bowman and Frank Poole, along with several hibernating crew members, man the ship bound for Jupiter. Assisting the mission is the artificially intelligent computer HAL 9000, which has no physical body but appears as a glowing red camera lens at various points around the ship. Unaware of the mission’s true objective, the astronauts simply follow protocol with HAL providing assistance throughout. Soon enough, the crew receives a transmission revealing that HAL’s twin unit on Earth disagrees with his fault diagnosis, suggesting that HAL himself is malfunctioning. Bowman and Poole retreat to an EVA pod to discuss HAL’s imminent shutdown out of his hearing, but HAL reads their lips from afar. In order to guarantee the success of the mission, HAL knows he must survive. Knowing that Bowman and Poole plan to shut him down, HAL attempts to kill every human on board. He stresses this critical logic when Bowman tries to re-enter Discovery One: “This mission is too important for me to allow you to jeopardize it… I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen.”
Evil robots have been a common theme in movies, and figures like Ultron and VIKI constantly return to the same humans-need-protection-from-themselves narrative as HAL. HAL’s main function was to serve the crew and protect the secrecy of the mission, but his logic backfired. He reasoned that the mission was more important to mankind than any of the astronauts aboard; therefore, HAL finds it reasonable to eradicate the crew. Pure logic, however, is not always the best response to human concerns. HAL is just as flawed as other evil A.I. in that he cannot form an emotional response to human interaction. All of his answers are based purely on logic, vastly different from Sonny’s “real” emotions and somewhat removed from Ava’s simulated emotions. HAL shows zero attachment or remorse when killing Poole and the others, making him an easily identifiable antagonist in the plot.
Artificial intelligence in film is left to the director’s discretion, and each major archetype, whether good, neutral, or evil, can shape a plot in its own way. A clearly relatable robot like Sonny, with emotional reactions as close to real as a machine’s can be, makes for a strong ally. Only in very special cases, like Sonny of I, Robot and Chappie of Chappie, are A.I. built with emotions. In these movies, emotions reach far beyond pure mathematical reasoning, often leading to illogical yet morally correct decisions. It is here that movies shine, when directors mix the unpredictability of empathy with a superior robot chassis built for action. On the other hand, a robot with no emotional capacity, like HAL, relies solely on logical calculation, ignoring all life lost in the process. The lack of sentimental reasoning beyond logic creates a cold and calculating force, one that regularly fills the role of villain. The most surprising movies use a neutral A.I. in place of clearly good or bad robots, which allows for more nuanced decision-making and thought processes. Ava leaves Caleb completely stunned because her intentions are based solely in logic, even though she is capable of simulating and processing emotional responses. Her emotional state around Caleb is the long con, and the most devastating moment arrives when she looks right through him and decides to leave him trapped. Only at that moment do viewers and Caleb alike realize that artificial intelligence can still belong to non-emotional beings, ones that operate within the confines of logic and learned algorithms. No matter which approach directors take to A.I., viewers can be sure they’re in for a thoughtful ride, one that commonly touches upon what it means to replicate the human mind and whether that venture is truly for the better.