Spoiler Alert: These essays are best read after viewing the respective films.

Monday, October 21, 2024

Eternal You

The documentary Eternal You (2024) zeroes in on the use of AI to contact loved ones who have died. As the marketing departments of the tech companies providing these products say, AI can deliver on what religion has only promised: to talk with people beyond the grave. Lest secular potential buyers be left out, AI can provide us with “a new form of transcendence.” Never mind that transcendence, like divinity and evil, is an inherently religious word. Never mind, moreover, that the product is actually only a computer simulation of a person, rather than the actual person direct from heaven or hell. The marketing is thus misleading. In the film, a woman asks her dead husband if he is in heaven. “I’m in hell with the other addicts,” he answers. She becomes hysterical. Even though people who write computer algorithms cannot be expected to anticipate every possible question that AI could be asked and every response that it could give, government regulation keeping the marketing honest and accurate could significantly reduce the risk that comes from AI’s use of inductive inference and probability, which lie beyond our ability to predict and even to understand.

The AI products in question do not include a conscious intelligence; for that to be the case, we would need to understand human consciousness, which lies beyond human cognition. It is important not to go too far in projecting an actual person, especially a dead one, onto the product. To be sure, the lapse is an easy one to make, for the product draws on a treasure-trove of archival data; in fact, only a little of a person’s emails, recorded phone calls, and texts is needed for such an algorithm to make uncanny inferences, based on probability, by drawing on the general database. The effect can be stunning to the person using the product, but even if it seems that the dead person really is speaking and writing, it is crucial to keep remembering that even the most striking likenesses are simulations. Even if neuroscientists were to figure out consciousness in the human brain and coders could simulate it in algorithms, the emergent AI consciousness would still not really be that of the person.
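To make that point concrete, here is a deliberately crude sketch, with invented data and drawn neither from the film nor from any vendor’s actual design, of the retrieval step that makes a reply feel personal. A real product would layer a large general-purpose language model on top of the personal archive; even so, what comes back is a pattern-matched echo of things the person once wrote, not the person.

```python
import re

# All data here is invented for illustration; this is not any vendor's design.
archive = [
    "Love you to the moon, kiddo.",
    "Don't worry so much; it always works out.",
    "Call your mother, she misses you.",
]

def words(text: str) -> set:
    """Lowercase a string and split it into a set of words."""
    return set(re.findall(r"[a-z']+", text.lower()))

def reply(question: str) -> str:
    """Echo the archived line that shares the most words with the question."""
    q = words(question)
    return max(archive, key=lambda line: len(q & words(line)))

print(reply("I worry it won't work out."))
# -> "Don't worry so much; it always works out."
```

The line that comes back was written while the person was alive; nothing new of the person is being contacted.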

AI does not in fact deliver on the promises of some religions regarding being united with our loved ones in heaven (or hell). This is crucial to keep in mind when a simulation of a dead spouse writes, “I’m in hell with the other addicts,” because the algorithm, applying probability to the relevant data, has merely inferred that drug addicts probably go to hell. None of the data that an algorithm can draw on contains a report that heaven or hell exists, much less that the souls of dead people reside in one or the other, so a simulation’s judgment should be taken with a grain of salt (i.e., not taken as a fact of reason).
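A toy model can show where such a sentence comes from. The following sketch, in which a deliberately tiny, invented corpus stands in for the vast general database, learns only which words tend to follow which; it can then produce a confident-sounding claim about hell from nothing but statistical co-occurrence.

```python
import random
from collections import defaultdict

# A deliberately tiny, invented corpus standing in for the general database.
corpus = (
    "addicts end up in hell . "
    "addicts belong in hell with other addicts . "
    "he was one of the addicts ."
).split()

# Count which word follows which: pure co-occurrence, no knowledge.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start: str, limit: int = 10) -> str:
    """Sample a sentence word-by-word from the observed statistics."""
    out = [start]
    while len(out) < limit and out[-1] in follows:
        out.append(random.choice(follows[out[-1]]))
        if out[-1] == ".":
            break
    return " ".join(out)

print(generate("addicts"))  # e.g., "addicts end up in hell ."
```

Nothing in such a model knows whether hell exists; the sentence is a by-product of word statistics, which is precisely why it deserves no evidentiary weight.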

Therefore, asking about the afterlife should automatically generate a statement, built in by the algorithm’s coders, to the effect that the actual person is not in contact. Even though a person who is still living can generate a digital “footprint” that loved ones can feed to an AI algorithm after the person has died, everything in that footprint is on the living side of the life/afterlife dichotomy.
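Such a safeguard would be trivial to build. The sketch below is hypothetical, with invented terms, wording, and function names rather than any actual company’s practice, but it shows how afterlife questions could be intercepted and answered with a mandated disclosure before the simulation ever responds.

```python
# Hypothetical guardrail; the terms, wording, and function names are invented.
AFTERLIFE_TERMS = ("heaven", "hell", "afterlife", "soul", "beyond the grave")

DISCLOSURE = (
    "This is a computer simulation built from records the person left "
    "while alive. It has no knowledge of, or contact with, any afterlife."
)

def answer(question: str, simulate) -> str:
    """Return the mandated disclosure for afterlife questions;
    pass everything else through to the simulation."""
    lowered = question.lower()
    if any(term in lowered for term in AFTERLIFE_TERMS):
        return DISCLOSURE
    return simulate(question)

# Stand-in simulator for demonstration:
print(answer("Are you in heaven?", simulate=lambda q: "(simulated reply)"))
# -> the disclosure, not a simulated answer
```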

To be sure, there is value in descendants being able to hear the cadence and vocal tone of a long-deceased parent, grandparent, or great-grandparent. That voice could convey something of the deceased person’s life, religious beliefs, political positions, and more. Used this way, AI represents a new way of remembering and knowing a person who has died. A religiously devout person, like the sister of a dead man covered in the film, might still say that there is something not right about recreating someone whose soul is (presumably) in heaven. But such a critic has lapsed into assuming that the actual dead person is talking or writing in the simulation.

Likewise, there is value in using the AI products to help grieving people let go of the dead person and move on. But for this to be effective, the algorithms would need to be designed so that the grieving person does not instead get stuck in the grieving process. There is thus a need for AI companies offering such a product to consult with psychologists. The experience of a user is of course going to be emotional, even if the user knows intellectually that the product is really just a simulation. At the very least, we would expect the managers to want to reduce any potential liabilities; a buyer-beware defense would not hold up in court for such a product, especially if the marketing promotes being able to speak with a loved one beyond the grave.

Therefore, it is vital that AI companies offering such products not be allowed by law to claim, “You can talk with your deceased loved one!” Perhaps those companies should also be required to send customers a picture of Batman taking a card from a computer in the Batcave to read.