The Ethics of Digital Face-Swapping
April 24, 2019
Digital face-swapping algorithms, or “deepfakes,” are a nascent class of AI-powered technology that lets operators graft one person’s face onto another’s body and place the result in digital spaces where it doesn’t belong. Some deepfake manipulators have used the tech for an easy laugh, like putting the face of Nicolas Cage over Amy Adams’s in a Superman film. Others have used it to satisfy more prurient curiosities, like putting the face of Amy Adams over some other actress’s in a rote porno clip.
Such instances of “non-consensual pornography” have generated hundreds of articles and essays about the broader ethics and legalities of deepfakes. Credible AI-generated videos of prominent politicians mouthing off-script profanity and alarmist vagaries have emerged with a horrifying degree of realism, as in this recent fake-news PSA featuring Jordan Peele “playing” Barack Obama. So far, though, few thinkpieces have attempted to get at the heart of the ethical crisis ignited by digital face-swapping: that ours is a culture dominated by the ethics of the visual. Deepfakes and other visually duplicitous technologies are the noxious discharge of an image-based socio-ethical paradigm motivated by pure aesthetics.
Within the contemporary annals of academic philosophy, this line of visually focused inquiry is broadly known as “visual ethics,” which matters because, at its core, “visual stimuli affect individual behavior and organizations,” as Schwartz et al. write in Visual Ethics. An emerging and interdisciplinary field, visual ethics integrates journalism, the visual arts, cognitive science and philosophy to understand the visual dimensions of human interaction. While some strains of visual-ethical introspection attempt to delineate the complications of images, photography and video in journalism, others focus on the moral genetics of visual culture, or as Elizabeth Bucar writes in her essay “The Ethics of Visual Culture”: “Ethicists who study the visual will pose analytical questions (‘why’) and make normative judgments (‘ought’) as we explore the norms that visual culture relies on, reproduces, or critiques.”
Even as deepfakes raise blatantly obvious ethical issues, an analytical line of ethical questioning readily reveals their advent as the inevitable output of a technologically advanced and digitally dynamized visual culture. Examples of fakes, imposters and cons reach back to the earliest generations of human history, memorialized and mythologized in everything from Jewish scripture (Jacob covering his arms with animal hair so that his blind father, Isaac, would mistake him for his hirsute older brother, Esau) and Italian divination (the magician card in the Tarot tradition is often a classic conman) to idiomatic English (the wolf in sheep’s clothing). And like all archetypal notions, the con constantly vies for new modes of incarnation, which the digital ecosystem delivers with amoral and semi-autonomous efficacy.
In other words, deepfakes are simply the newest generation of moral apparition drawn from the duplicity historically innate in human communication. Yet as the pied ethical reportage on digital face-swapping suggests, deepfakes lack the charm of more classic cons, because they require no build-up and no cunning. We’re predisposed to trust the visual, because our ethicality (and our culture as such) is fundamentally visual. Like the fish that can’t comprehend water, we can’t practically comprehend an ethics based in either pure intuition or a priori abstraction. So when the visual deceives us - especially when that deceit is impelled through human agency - it’s easier for us to disarm the con through policy or law than to pause and consider the root of the issue; to “see” the plank in our own eye, as it were.
For example, much of the intellectual response to deepfakes quickly turns toward technological fetishization, as in this 2018 report from Deloitte, one of the “Big Four” global consulting firms, which posits that “many executives are concerned about the use of artificial intelligence (AI) to falsify images and videos” - while noting in the same breath that “AI can also help detect them.”
Of the 1,100 executives Deloitte surveyed, those “who understand AI best—early adopters—believe that the use of AI to create falsehoods is the top ethical risk posed by the technology.” With deepfakes in particular, the fear is that the technology could be used to put harmful or vulgar words into the mouths of otherwise straight-shooting business leaders as a form of modern corporate sabotage. Cyanide-laced Tylenol capsules are no longer necessary for bringing down a business giant - all it takes is a visually credible video of a CEO dropping a racist vulgarity at a fundraising gala, and the internet will storm the streets with pitchforks and torches, followed in short order by shareholders and investors. Footage of CEOs behaving badly already makes for contagiously viral internet fodder, and a deepfake could work the same way.
But the solution, according to the Deloitte report, is more AI. This is a bit like the missionary colonists of early North America, who introduced European diseases to indigenous peoples, then used European medicine to cure those diseases and called it the miracle of Christian civilization. A global business consultancy has an obvious incentive to frame the problem as purely technological, even while making blanket philosophical statements about human nature. But many others involved in the ethical conversation surrounding deepfakes, including those not motivated by selling IT, security and public-relations consulting services, are too ready to suggest that technology will solve the ethical problem caused by exactly that same technology. That is not actually a solution, merely a patch of tar on the hull of visual culture.
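To see how thin the “fight AI with AI” remedy is in practice, consider what a detector actually amounts to: yet another model, trained to score frames as real or fake. The following is a minimal, purely hypothetical sketch in PyTorch - the names, layer sizes and simplicity are my own assumptions for illustration, not any vendor’s actual detection system.

```python
# A minimal, hypothetical sketch of the "more AI" remedy: a binary
# classifier trained to label video frames as real or fake. All names
# and dimensions here are illustrative; no real detection product
# works this simply.
import torch
import torch.nn as nn

class FrameDetector(nn.Module):
    """Toy convolutional classifier: frame in, fake-probability out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 256 -> 128
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 128 -> 64
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse to one vector per frame
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, frames):
        x = self.features(frames).flatten(1)
        return torch.sigmoid(self.classifier(x))  # P(frame is fake)

detector = FrameDetector()
batch = torch.randn(4, 3, 256, 256)  # four RGB frames (stand-in data)
print(detector(batch))               # four probabilities in (0, 1)
```

A detector like this only learns whatever artifacts current fakes happen to leave behind; as the generators improve, it has to be retrained, which is exactly the arms-race dynamic the tar-patch metaphor describes.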
Google searches about the ethics of digital face-swapping largely point to the legal and technological implications of deepfakes, rather than the ethical basis of our varied digital anxieties. Because it’s still an emerging technology, those most versed in its deployment tend toward the technical, still seeing deepfakes primarily as an example of AI rather than a thing in itself. This is why, for example, lecturer and independent developer Alan Zucconi waxes only briefly on the ethics of deepfakes within his FakeApp tutorial and concludes, “the technique behind Deep Fake is, per se, neutral. Like a knife, Machine Learning is a tool [that] can be used for good or evil.” As he and other developers see it, deepfakes aren’t the ethically questionable thing - only the “technique behind it,” sketched in simplified form below.
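For readers curious what that technique actually is: the early, popular deepfake tools were built on an autoencoder trick - one shared encoder learns a generic representation of faces, and a separate decoder is trained for each identity, so that feeding person A’s encoding into person B’s decoder performs the swap. The sketch below illustrates the idea only; the layer sizes, names and use of plain fully connected layers are illustrative assumptions, not FakeApp’s actual architecture.

```python
# A simplified sketch of the autoencoder trick behind early deepfake
# tools: one shared encoder learns a generic representation of faces,
# while a separate decoder per identity learns to reconstruct that
# person. Feeding face A's encoding into face B's decoder performs
# the swap. Shapes and layer choices are illustrative, not FakeApp's.
import torch
import torch.nn as nn

LATENT = 128

shared_encoder = nn.Sequential(        # any face -> latent code
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
    nn.Linear(512, LATENT),
)

def make_decoder():                    # latent code -> one identity's face
    return nn.Sequential(
        nn.Linear(LATENT, 512), nn.ReLU(),
        nn.Linear(512, 3 * 64 * 64), nn.Sigmoid(),
    )

decoder_a, decoder_b = make_decoder(), make_decoder()

# Training (not shown) reconstructs A's faces with decoder_a and B's
# with decoder_b through the same encoder. The swap happens at inference:
face_a = torch.rand(1, 3, 64, 64)            # stand-in frame of person A
swapped = decoder_b(shared_encoder(face_a))  # A's expression, B's face
print(swapped.view(1, 3, 64, 64).shape)      # torch.Size([1, 3, 64, 64])
```

The neutrality Zucconi claims is visible in the arithmetic: nothing in the model distinguishes a Nicolas Cage gag from non-consensual pornography. Only the training data and the operator’s intent do.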
While Zucconi’s framing is a bit like blaming the problem of guns on the chemistry of combustion, it does enable the technology community to engage in constructive conversations about possible non-malicious deployments of deepfakes (aside from detecting other deepfakes). Writer Gaurav Oberoi of the Allen Institute for AI suggests that the technology could be used for video production (e.g., placing deceased Hollywood stars in active roles, such as Carrie Fisher as Leia in the new Star Wars universe), social apps and even personalized advertising. It could also open the door to licensing an individual’s personal image, expanding revenue streams for celebrities and advertising capabilities for marketers. Each of these recommendations contains a universe of nascent ethical and legal questions, which underscores the conceptual uncertainties of our modern digital reality (or irreality, more accurately).
These conceptual uncertainties are precisely what the ethical anxiety around deepfakes reveals. We’re now at a point in our digital era that was once the realm of pure science fiction, with its naive amalgam of utopian hyperbole and soul-crushing paranoia. The modern news cycle is peppered with state-funded propaganda, fake news and AI-generated reportage, while consumer markets are dominated by wearable devices, personalized advertising algorithms and vapid networks of influencers and digital marketers. It’s an ecosystem we already know to be easily exploitable, but the pace at which it produces new technologies not only makes it impossible to be entirely risk-averse - no single person can be exhaustively vigilant and constantly aware of every point of digital exposure - but also occludes clear ethical comprehension of pernicious technology.
It’s the responsibility of the ethicist - both academic and civilian - to continually analyze and judge, and to be scrupulously skeptical of seemingly simple ethical solutions to complex moral problems, especially within our visual-ethical paradigm. As Bucar phrases it in the conclusion to her essay, “any agent who is producing visual or material culture can successfully ‘rebel’ only if she first has mastery of the visual field and its history. And since this rebellion occurs within the shared visual field, as opposed to outside it, it may prove to be subtle and thus successful.”
Benjamin van Loon is a writer, researcher, and communications professional living in Chicago, IL. He holds a master’s degree in communications and media from Northeastern Illinois University and bachelor’s degrees in English and philosophy from North Park University. Follow him on Twitter @benvanloon and view more of his work at benvanloon.com.