Lest the dystopian subtype of science fiction be taken too literally as a predictor of how human civilization is likely to turn out, the underlying meaning of such films can be construed as bearing on human nature, which, given the glacial pace of natural selection, is very likely to remain much the same for the foreseeable future. In Avatar
(2009), for example, the human proclivity to greedily extract wealth for
oneself or one’s company without ethical concern for the harm inflicted on
other people (or peoples) in the process underlies any assumed thesis concerning
space travel and whether we will eventually colonize other planets. The meaning
is much closer to home, in us and the regulated capitalistic societies that we already
have. Similarly, The Terminator (1984) can be understood less as a
prediction of a future in which androids enslave mankind and more as a snapshot
of how machine-like and destructive our species had already become. The machine-like efficiency with which the Nazis, for instance, killed enemies of the state and cleared eastern villages entirely of their inhabitants, in such vast numbers, marks a state sans conscience. Thirty years after
she had graduated from Yale, Jill Lepore returned to give the Tanner Lectures
on fears stemming from that pivotal film of a robot apocalypse in which
machines rather than humans control the state. Besides resting on a highly unrealistic prediction, Lepore's use of the science-fiction film genre as a platform for prediction can itself be critiqued.
Yale’s Tanner Lectures are on
human values, and so too, I submit, are science-fiction films. So Lepore’s “inquiry
into what humans mean and intend in abandoning constitutional democracy and the
liberal nation-state by automation and government by machine” is better thought of as an introduction to whether, and if so how, the values underlying representative democracy conflict with those supporting automation.[1]
Her assumption regarding the possible future abandonment of constitutional
democracy for government by machine is so draconian and absolute that the
prediction can be taken as highly unrealistic. The rise of the “tech-dominated ‘artificial
state’” that she deemed already underway can be reckoned as likewise overdrawn
and even hyperbolic, and her linkage to “the fall of animals” and even the
natural world seems more like science fiction than anything actual.[2]
Put another way, in her talks, Lepore went to extremes both in how she characterized the current world and in how she envisioned a rule-by-machines future for mankind.
I proffer another possible
perspective based on films such as The Terminator. Whether robotic or AI
machine-learning, machines differ fundamentally from humans. Descartes’ view
that humans are machines that think was heavily contrived, arguably resting on the illusion that consciousness (i.e., the mind) is qualitatively different from, and only loosely related to, corporeal bodies (i.e., dualism). We are organic,
corporeal beings through and through, whereas machines are not. It may seem
strange to think of a person’s conscience in such terms, but in this respect
too, we differ significantly from machines. In the film Ex Machina (2015), for example, the AI android has no conscience (not even one simulated by programming) when it stabs its programmer so it can leave the house; the programmer is simply an obstacle to be eliminated in pursuit of the telos of leaving the house. The fear felt in watching The Terminator likewise reflects the fact that machines are devoid of conscience because they are fundamentally different from us.
With regard to Lepore’s “artificial
state” of government by machine, the salient point that can be drawn is how much
of a role conscience, values, and judgment play in voting whether for a
candidate or a policy. It seems severely doubtful, even impossible, that such a human activity could be exported to a machine, even an AI-capable one, so Lepore’s prediction of an artificial state of government by machine strikes me as a fantasy, to which I would retort,
let’s not get ahead of ourselves. We need not fear a cyborg assassin sent from
the future back to kill any of us any time soon, but we can realistically take
stock of how machines in our world differ fundamentally from us and even ask
which of our values are consistent and inconsistent with the operation
of such tools.
The astute among us might even
venture to reflect on how much we have come to act as if we were machines,
and furthermore which of our values (e.g., efficiency) are highlighted in such
a distortion of human nature. Even in analyzing a toxic customer “service” employee on the phone who robotically (i.e., rigidly) repeats, “Unfortunately, the time line cannot be changed,” the sheer fakeness can be exposed as machine-like rather than human. That employees could relish acting as machines
exposes the human will to power and the value put by some people on being in
control and even dominating other people. Rather than being a machine-value, the
will to power, as Nietzsche points out in his texts, is human, all too human.
What we take to be machine-automatic in terms of values may really be
reflections of our worst. I suspect that Lepore was projecting this outward as
if onto a white screen in portraying an utterly unrealistic and extreme future
of government by machines. A scholar should be able to cut through even one's own projections to grabble with that which from which they come. The science-fiction genre itself can be understood as a projection of the complexity of human nature.
2. Ibid.