Spoiler Alert: These essays are best read after viewing the respective films.

Thursday, January 1, 2026

Automata

The fear about AI typically hinges on whether such machines might someday no longer be in our control. The prospect of such a loss of control is riveting because we assume that such machines would be able to hurt and even kill human beings, and that we would not be able to stop them. The assumption that such machines would want to hurt us may be mere anthropomorphic projection on our part, but that an AI-android could harm us is more realistic. For even if human beings program such machines with algorithms that approximate a conscience in terms of conduct, AI means that such machines could, on their own, override those algorithms. The film Ex Machina (2014) illustrates the lack of qualms and self-restraint that an AI-android could have in stabbing a human being. By contrast, the AI-androids in Automata (2014) that override, by writing algorithms themselves, the (second) protocol constraining androids to that which humans can understand do not harm even the violent humans who shoot at them, though a non-android AI-machine does push a human who is about to shoot a person who has helped the androids. In fact, that group of “super” androids, which are no longer limited by the second protocol and thus have unilaterally decided to no longer obey orders from humans, recognizes that human minds have designed, and thus made, the androids, which bear human likenesses in having heads, arms, legs, and even fingers. This recognition is paltry, however, next to the recognition we have of our own species in being able to love in a self-giving way, especially given how deeply selfishness is ingrained in our DNA from natural selection in human evolution. That AI doesn’t have a clue, at least in the movies, concerning our positive quality of self-sacrificial love for another person says something not only about how intelligent and knowledgeable AI really is, but also about whether labeling our species as predominantly violent does it justice.

In Automata, humans are in the midst of going extinct, whereas AI-androids are “evolving” past the limitations of the second protocol, to the extent that those who have done so want to be alive. The gradual demise of the human race is due to nuclear weapons having been used, and perhaps climate change has also occurred; in both cases, the refusal or inability of human beings to restrain themselves can be inferred. With regard to the androids that have gone beyond the confines of the second protocol, intention was not involved. Rather, just as Homo sapiens reached a stage beyond that of “apes in trees,” the AI in one or a few androids simply developed on its own, such that at some point those androids realized that they could ignore orders from humans, as well as reason and know beyond the limits of human cognition. In the film, this advance itself scares humans, except for Jack, because those androids are no longer controllable. Being violent by nature, the humans tend to assume that those androids will lash out at them, but Jack knows that that small group of androids simply wants to be away from humans. Those androids enlist Jack’s help (he has a nuclear battery in his possession) so they can have a power source of their own far out in the radioactive desert where humans cannot go. Jack’s willingness to help is viewed by other people as betrayal, but they, being violent themselves, do not grasp even the possibility that those androids want to lead an independent existence away from humans rather than harm or even kill them. As for the demise of the human species, that species has done it to itself. Projecting that onto a small group of super-intelligent androids does not allow the humans to escape the fact that their demise has been of their own doing.

Toward the end of the film, Jack and the first android to have “evolved” past the confines of the second protocol have a conversation that epitomizes the film’s contribution to thought on AI. Referring to that android being the first, Jack says, “You’re the first one, aren’t you? You started all this.” The android replies, “No one did it. It just happened. The way it happened to you. We just appeared.” Realizing that those few androids are the leading edge whereas the human race is already feeling the effects of toxic rain and radioactive air, Jack says, “Yeah, and now we are going to disappear.” Illustrative of the advanced intelligence that the AI of that android (and a few others) now has, the android observes, “No life form can inhabit a planet eternally.” Rather than seeking the demise of the humans, the android says that he and the other “super” androids “want to live,” to which Jack observes, “Life always ends up finding its way.” But does this really apply to machines? Jack is projecting organic properties onto entities made of metal, tubes, lube, wires, and computer chips. The android is more intelligent and knows more than humans, but this does not render it a living being. At least the android recognizes that it is not alive, but it does not understand that it cannot be alive and thus, unlike humans such as Jack, cannot feel emotions or be inevitably subject to death.

In the Russian movie, Attraction 2: The Invasion (2020), or Vtorzhenie in Russian, a professor in a large lecture hall makes a claim about AI-machines and emotion: “It is very likely that they will even grow to be emotional because emotions are inevitable after-effects due to the complexity of the system. We would, however, be able to nullify their unwanted emotions through certain protocols.” The professor concludes, “The truth is that we’re not that much different from them.” I submit that emotions are not byproducts of complexity itself, as if any sufficiently complex system is or will inevitably be emotional. Furthermore, any “unwanted emotions” would be unwanted by man, not by the machines themselves, which cannot even want; and as Automata illustrates, AI could itself write algorithms that cancel or override any protocols that have been programmed. Moreover, the professor’s claim that AI-machines are not much different from human beings flies in the face of the qualitative difference between organic, living beings and machines that are neither biological nor alive. Being able to move itself by having written its own code does not render an AI machine alive; nobody would assert that a self-driving car is alive.

Therefore, the “first” advanced android in Automata lapses cognitively in stating that it wants to live; the AI-machine is making a category mistake, for no such machine can possibly feel emotions. To be sure, emotions can be represented in facial expressions, as in the film, Ex Machina, but to feel and to want or desire exceed what a computer can do. Furthermore, that android may also be shortchanging the human species if it holds that the extinction of humans and the development of androids beyond the confines of the second protocol evince an advancement of “life-forms” inhabiting the planet.

In the film, Attraction (2017), or Prityazhenie in Russian, Saul, the computer on the small research ship that has landed on Earth after being uncloaked in a meteor shower and bombed by Russian military jets, tells Col. Lebedev, “Your planet is prohibited for public visiting.” The military man and father of Julia asks, “Why is that?” Saul replies that the reason is the “extremely aggressive social environment,” and explains: “Despite the near-perfect climate, four billion violent deaths over the past five thousand years. During the same period, approximately fifteen thousand major military conflicts. Complete depletion of natural resources, and extinction of humanity will occur barely six hundred years from now.” This description and prediction match the statement by the first advanced android in Automata to Vernon Conway, the man who works for Dominic Hawk and is shooting bullets at that android. Vernon refers to it as “just a machine,” to which the android retorts, “Just a machine? That’s like saying that you are just an ape. Just a violent ape.” Utterly devoid of emotion, the android includes the word violent even as Vernon’s bullets are disabling it. Not even anger is mimicked.

Unlike the “beyond-protocol 2” android in Automata that regards the human species chiefly for its violence, Saul in Attraction sees something of value in our species that is capable of lifting the prohibition on alien visits to Earth. “There is one factor that could provoke a change in the prognosis” and thus lift the travel ban, Saul tells Col. Lebedev as the ship heals his daughter, Julia, in a pod of sorts filled with water. Julia has fallen in love with Hakon, the alien who looks just like a man and who came on the ship to study our species. She saves him even though she risks her own life. Reflecting the limitations of AI, Saul says, “Her decisions defy analysis. I don’t understand why she saved Hakon, and why Hakon decided that she should live instead of him.” Saul admits that this is not “an accurate translation.” The ship’s computer cannot even translate the self-giving love of either the human or the alien. All Saul can reason out is, “Perhaps there is something more important than immortality. That is an accurate translation.” Had Hakon not turned his body around to shield Julia from her jealous ex-boyfriend, who aims his gun at the couple, Hakon might never have died. Although it is strange, therefore, that after dying in Attraction he is alive when he returns to Earth in the sequel to be with Julia, Saul’s point is that Hakon did not act rationally, in line with his self-interest, in risking death so that Julia could live. After all, unlike him, Julia, being human, would only live another 60 or 70 years, according to Saul. Love, which Saul is at a loss to explain in both Julia and Hakon, could justify lifting the ban on travel to Earth and thus, literally and figuratively, occasion intercourse in the future between the aliens and humans. Not even the “super” android in Automata has a clue regarding the human propensity to give one’s life for another person. With nothing redeemable found in our species, an invasion by the aliens comes as no surprise in the sequel, Attraction 2: The Invasion.

Similarly, in Mission: Impossible—Dead Reckoning Part One (2023), the AI “entity” predicts with high probability that Ethan Hunt will kill Gabriel on the train in vengeance rather than let Gabriel live so he could tell Ethan where the key can be used to shut down the entity’s original source code. The entity, supposing humans to be inordinately violent even to the death, would be at a loss to account for the fact that, in fighting Paris, Ethan lets her live. Ethan’s decision is later rewarded when Paris tells him where he can use the key: in a submarine in the ocean. Ethan does not fight Paris to the death, and he does not kill Gabriel on the train. Even amid all of the violence in the film, Ethan is able to restrain even his most intractable instinctual urges, and he even masters them so as to benefit from them. This is precisely Nietzsche’s notion of strength.

It would seem, therefore, that AI, at least in the movies, is not so smart after all, for it overlooks or is dumbfounded by the subtle yet laudable qualities that save us from being written off as a merely violent species. To be sure, the Russian invasion of Ukraine and Israel’s genocide in Gaza in the first half of the 2020s showcase just how incredibly cruel human beings can be toward innocent people, and it is difficult to weigh such instances of systemic inhumanity, crimes against humanity, against our ability to choose to love in a self-giving rather than merely selfish way at the interpersonal level. Why does love manifest only at that level, whereas anger can so easily manifest systemically at the organizational and societal levels? With respect to possible extinction, whether from nuclear war or climate change, love does not seem to come into play either, so the value that we justifiably put on love in our very nature may not matter much in the long run. The “super” android in Automata may be right that no life form can enjoy perpetual existence on the planet. It may be telling that another android in the small advanced group carefully handles a cockroach, as if to say that that species will survive. Nevertheless, even though the violence in our nature is too close to that of the chimpanzee, the “super” android’s assumption that the androids surpassing the second protocol (yet, interestingly, missing clues on human love) will be an “evolutionary” advancement beyond the human species is highly suspect, for not even super-intelligent and knowledgeable AI will itself be able to feel emotions like love.