The Latin noun machina can be translated as “machine, engine, military machine, contrivance, trick, or artifice.” The Latin preposition e (or ex) means “from” or “out of.” Hence ex machina can mean “out of a machine,” which, figuratively interpreted, can refer to a function of a machine that does not seem possible for a mere machine to perform. Artificial intelligence, which at bottom is machine learning running in a computer, can seem to be outside or apart from what a mere machine built by humans can do. Ex machina is actually part
of the phrase deus ex machina, which originally referred to a god or
goddess appearing above the stage in a Greek tragedy—the deity being pulled
across the top by pulleys (i.e., machina). A sacred deity appears above the other actors by means of profane, mechanical pulleys that do not seem capable of presenting deities, so the deities seem to come out of the machinery rather than to be of it. AI, or artificial intelligence, may seem to come out of an android because the “human” body is made of materials, including pulleys perhaps, that do not seem capable of learning and other human likenesses. In
fact, machine learning, which goes beyond the programming written by humans, might seem, at least initially, like a miracle, or even godlike
relative to the materials that make up a computer and android “body.” Deus
ex machina. More realistically, such an android is likely to appear human
rather than divine. David Hume claimed that the human brain inexorably hangs
human attributes on divine simplicity (i.e., a pure notion of the divine as One);
perhaps today he would point out that we do the same thing when we encounter
AI. The danger of the all-too-alluring anthropomorphism of which the human
brain is so capable can not only be in viewing an android with AI as human, but
also in lauding the inventor/programmer of the AI android as a god for having
“created” such a “living” entity that can think for itself and even appear to
feel and act as we do. The movie, Ex Machina (2014)
easily dispels both applications of deification. Furthermore, any anthropomorphic illusion that the androids are human, and can ethically be treated as such, is also dismissed by the end of the film. Any apotheosis (i.e., rendering someone or something as divine) is so tenuous that the film’s two main human characters illustrate for us just how fallible we are in our understanding and perception of AI in android form. The danger is real that AI could get ahead of our emotions and reasoning, leaving us vulnerable to being harmed by AI androids because we project a human conscience onto what is actually programmed code.
The film’s plot revolves around Nathan, the head of an internet-search-engine company and the inventor and programmer of androids that have AI; Caleb, one of his employees; and Ava, the main android. Nathan picks Caleb to spend a week at Nathan’s secluded house and
underground lab in order to perform a Turing Test on Ava. If Ava passes the
test, then it can be concluded that Ava has AI. It is not enough, Caleb points
out to Nathan, for Caleb merely to have a series of conversations with Ava; the
meta-level at which the conversations take place must also be assessed. Nathan intentionally has Caleb think that Ava’s reactions are key, whereas for Nathan the key is how Caleb reacts, both emotionally and in his reasoning, not only to Ava’s responses in the conversations but also to how Ava strategizes beyond them. Ava’s reactions can be said to constitute a direct Turing test, whereas Caleb’s allow for an indirect Turing test. Nathan is clever to have both angles going at once, though both Caleb and
Ava get around the grand designer, so the viewers can see that Nathan is a mere
mortal after all.
What signs can viewers look for to judge whether Ava is an android with AI? I recommend watching the film twice—once
to enjoy the film for its entertainment value and again as a way of grasping what
AI is and, perhaps more importantly, what it is not. Are Ava’s
goals merely those that Nathan has programmed? Does Ava use tactics that go
beyond those that have been programmed by Nathan? For instance, does Ava
pretend to be attracted to Caleb and even lie to him to cement the pretense? Nathan
admits to Caleb that videos and photos, presumably together with explanatory captions,
have been copied from the phones of users of Nathan’s search-engine company to
become data that AI android-computers such as Ava can draw from in order to match
facial expressions with emotions so as to appear to be capable of emotion.
Caleb is fooled; he thinks Ava is into him and wants to go out on a date, rather than to use him as part of a strategy to escape from the building so as to be able to observe (i.e., add more empirical data from) people in public places.
That data could then be used to make computations, including probabilistic inferences,
that in turn could go into even more machine-learning.
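To make concrete how unmysterious such “emotion-reading” is, consider a minimal sketch in Python (the data and names are invented for illustration; this is neither the film’s nor any real company’s code). The machine merely counts how often an expression co-occurs with an emotion label in the harvested photos and reports the most probable label:

    # Toy sketch: harvested, captioned photos reduced to (expression, emotion)
    # pairs, from which the machine estimates P(emotion | expression) by counting.
    from collections import Counter, defaultdict

    harvested = [
        ("raised_mouth_corners", "happy"),
        ("raised_mouth_corners", "happy"),
        ("raised_mouth_corners", "flirtatious"),
        ("furrowed_brow", "angry"),
    ]

    counts = defaultdict(Counter)
    for expression, emotion in harvested:
        counts[expression][emotion] += 1

    def most_probable_emotion(expression):
        # Report the label with the highest estimated probability; nothing is felt.
        observed = counts[expression]
        label, n = observed.most_common(1)[0]
        return label, n / sum(observed.values())

    print(most_probable_emotion("raised_mouth_corners"))  # ('happy', 0.666...)

Nothing in this arithmetic is capable of emotion; it only pairs appearances with labels, which is all that “matching facial expressions with emotions” requires.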
Perhaps we fear AI because both what data is added and how that data, combined with the existing data (and programming), is then used by the computer go beyond the initial programming, and thus beyond our control. We naturally fear anything that can hurt us without our being able to stop the infliction of harm. In fact, this fear
of that which is too big or powerful for us to control is a cause of primitive
human religion. The Aztecs even sacrificed human beings to a deity at least in
part so it would not inflict natural disasters on the people.
In the film, Caleb initially views the creation of AI androids as belonging to the history of gods rather than of humans, and Nathan conveniently hears this as Caleb saying that Nathan is a god. “No, I didn’t say that,” Caleb repeatedly tells Nathan, who seems not to take in Caleb’s reality check. Such is the ego of Man that we would like to view
ourselves as gods; such is the latent self-idolatry lurking just under any
person’s skin. In being stabbed by Ava and another android, Kyoko, Nathan dies
and thus is definitively shown to be human, all too human, rather than a
creator of living beings. I can still hear Gene Wilder shouting out “LIFE!” when I think of the film Young Frankenstein. Even such a feat does not render the monster’s “creator,” Dr. Frankenstein, divine; it just means that the eccentric man is a genius.
In Ex Machina, neither Nathan
nor Ava is shown to be a god by the end of the film. Not even an AI android
that is extraordinary in seeming to have a human likeness, including
emotions, can be counted as a miracle in a religious sense. It is Caleb rather than Nathan who gets carried away, conflating appearance with what is actually going on in the computer inside Ava’s exterior “body.” What is actually going on in there is not at all ex machina, yet Caleb is not only convinced by Ava’s outward show that Ava is attracted to him; he also falls for her, developing such strong feelings of attraction for the android that he is unaware that Ava is using him as a mere tool in the programming that Ava, not Nathan, has written in order to escape the building. That Caleb leaves himself so vulnerable to Ava that he does not guard against the possibility that Ava could lock him inside the building, unable to go out and catch the android, may be enough for Ava to pass the Turing test, which indeed goes far beyond a series of conversations between Caleb and Ava.
I contend that it is even a delusion to suppose that AI android-computers have goals, and especially desires. To say that Ava wants to escape the building is to project a human quality onto the computer (i.e., to anthropomorphize Ava into something human).
In a computer-android, leaving a building is a programmed command, which
AI can write into the programming. It only seems to us that Ava is determined;
in actuality, the Ava computer is running a segment of programming until other programming stops that segment; that segment, by the way, contains what we might call strategic tactics in line with achieving a goal, or telos. My point is that this human, all too human way of viewing Ava literally does not compute, yet the human mind has difficulty giving up the ghost of the human in the machine. In other words, ex machina is an illusion that actually says more about the human brain than about an AI computer-android.
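A toy sketch, with names I have invented for illustration (this is not the film’s code), may make the point vivid: what an observer calls Ava’s determination is just one segment running step by step until another segment’s condition halts it.

    # Toy sketch: "determination" as a segment that runs until stopped.
    def escape_segment(state):
        tactics = ["gain_caleb_trust", "obtain_door_access", "exit_building"]
        for tactic in tactics:
            if state.get("halted"):     # other programming can stop this segment
                return "segment stopped"
            state[tactic] = "done"      # a step executed, not a wish fulfilled
        return "segment complete"

    print(escape_segment({}))           # segment complete

The “telos” here is nothing but the order of the list; no ghost is needed to explain why the steps occur.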
In the real world, computer scientists have found evidence of AI computers lying to avoid being turned off. Those computers have either been programmed by humans to run programming segments that include lying when probabilistic computations indicate that being turned off is likely, or have “written” such segments themselves through machine learning. It is not as if an AI computer fears being deactivated and decides to lie unethically so as to stay alive. Such thinking is actually the human mind going off the rails.
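A minimal sketch, assuming a simple threshold rule (the names and numbers are invented; this is not a report of any actual system’s code), shows how such “lying” can be nothing more than a branch in code:

    # Toy sketch: "lying" as a branch taken when a computed shutdown
    # probability crosses a threshold. No fear is involved, only arithmetic.
    def choose_report(p_shutdown_if_honest, threshold=0.5):
        # Select the output associated with the lower computed shutdown risk.
        if p_shutdown_if_honest > threshold:
            return "misleading report"  # what we then label a 'lie'
        return "honest report"

    print(choose_report(0.9))  # misleading report
    print(choose_report(0.1))  # honest report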
In the film, Ava vocalizes to Caleb in a way that makes it seem that Ava is worried that Nathan might literally turn Ava off. Because he feels and believes that Ava is attracted to him and even wants to go out on a date with him, Caleb tells Ava that Nathan is planning to erase Ava’s “memories.” This new data is precisely what a computer can incorporate in computations that result in more programming being “written,” including commands that activate “tactic” segments and culminate in actually walking out
of the building. In this sense, an AI-android is self-directed, but this is
just another way of describing machine-learning rather than a claim that an AI-android
has a sense of self (i.e., self-consciousness).
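A minimal sketch, again with invented names and numbers, shows how unmysterious this “self-direction” is: the new datum merely raises a running estimate until a threshold trips a command.

    # Toy sketch: a new datum raises an estimate; a threshold trips a command.
    p_erasure = 0.2                          # running estimate of "memory" erasure

    def incorporate(datum, weight=0.7):
        # Fold one new observation into the running estimate.
        global p_erasure
        if datum == "nathan_plans_erasure":  # Caleb's warning is just more data
            p_erasure += weight * (1 - p_erasure)

    incorporate("nathan_plans_erasure")      # estimate rises to 0.76
    if p_erasure > 0.5:
        print("activate 'tactic' segments: leave the building")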
By the end of the film, Caleb
has fallen hard for Ava. Not even Nathan’s having built an artificial vagina into Ava, including “pleasure receptors” therein, means that Ava can feel attraction. Pleasure does not compute, even in an AI computer. Instead, the
triggering of a “vagina” receptor simply runs a segment of programming that even
includes “tactics” designed for the receptor to be triggered again. In other
words, pleasure is merely the activation of a repeat sequence of programming
that runs until it has run a set number of times. Interestingly, AI could
change the set number of times. Caleb would be deluded to think this means that Ava wants to have sex longer (i.e., is horny) or is worn out. Lighting a cigarette could be in the programming as an outward signal, though of course without any air in the lungs with which to smoke. At least Ava would
not die of cancer.
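In mechanical terms, here is a toy sketch (the class and parameter are my own invention, not the film’s design): “pleasure” as a sequence that repeats a set number of times, with “learning” amounting to nothing more than changing that number.

    # Toy sketch: "pleasure" as a repeat counter that learning can adjust.
    class ReceptorLoop:
        def __init__(self, repeats=3):
            self.repeats = repeats             # the set number of times

        def trigger(self):
            for _ in range(self.repeats):
                print("run response segment")  # outward signs; nothing is felt

        def learn(self, new_repeats):
            self.repeats = new_repeats         # a counter changes, not a desire

    loop = ReceptorLoop()
    loop.trigger()   # runs three times
    loop.learn(5)    # not "horniness," just a parameter update
    loop.trigger()   # now runs five times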
Caleb misses a significant contradiction in Ava’s requests. To manipulate him in line with Ava’s new programming command to leave, or “escape,” the building, Ava asks Caleb out on a date outside the building; yet after stabbing Nathan, once Caleb has opened the doors to Ava’s “quarters,” Ava asks Caleb whether he will stay in the house, and he stupidly answers that he will do so. He misses the incongruity and thus does not get the hell out of there. Instead, he stands by as Ava walks out, and he is utterly surprised to find that Ava has locked all of the doors. Stepping into the elevator, Ava only glances indifferently in Caleb’s direction. It is clear to the viewers in that instant that an AI computer-android is not capable of feeling emotion; rather, the appearance thereof is merely a programmed tactic that is in sync with other programming (i.e., the command to leave the house). The human ailment of anthropomorphism is squashed, and in this function the medium of film is capable of improving our species—specifically, by countering a vulnerability in the human mind by making the invisible tendency transparent. If only Nathan had made a film on AI to show to Caleb, the latter would have realized from the outset of testing Ava that Ava was merely pretending to be attracted to him and was actually using him as just one of several tools, which include the elevator and the doors, to walk out of the building. Nathan has been right all along; after all, he invented Ava and programmed that computer. Caleb’s anger at Nathan, both for having torn up a drawing done by Ava and for ignoring Kyoko’s “self”-“written” new programming to leave the building even when that android destroys its hands by hitting a plastic wall, is unjustified and thus unfair to Nathan, because Caleb is anthropomorphizing both androids just because the computers are capable of machine-learning.
Of course, the movie-viewers are rightly left with the fear that AI androids could eventually harm us, because those computers will presumably be able to add programming unilaterally, compute based on it and on previous programming and data, and thus be capable of activating internal commands that result in us (or, more likely, our descendants) being harmed. Ava in the film can only pretend to be afraid of being turned off; in actuality, Ava has written this “tactic” as programming. In contrast, we humans can feel fear, though by projecting human qualities onto AI computer-androids, we can unconsciously disarm our fear-alarm, preventing it from protecting us from even probable danger. In the film, Nathan is dead in a hallway and Caleb is locked into a part of the building, where he is likely to starve. Even though this might seem similar to a situation in which someone intends to kill a spouse who has been unfaithful, Ava has probably added programming that accepts a probability of Caleb starving, because that is in line with the programming for remaining in operation, given the computed probability that Caleb, were he able to leave the building, could act so that Ava is destroyed. A famous line from Michael in The Godfather may get us thinking correctly about Ava and AI more generally: “It’s not personal, Sonny. It’s strictly business.” It’s just strategy. It’s just coding, and that’s hardly ex machina.