
Introduction
In recent years, artificial intelligence (AI) has developed rapidly and produced a series of cutting-edge achievements, while ethical and moral norms change slowly and cannot easily keep pace. When ethical norms lag behind technology, a gap opens between the two and controversy follows. As one form of new audio-visual media, online video is often treated as a reliable, authentic media source or even as legal evidence, because it records the complete dynamic changes of people and things and carries higher credibility than still pictures. AI face swap (or "reface") video, as the name suggests, is an artificial intelligence synthesis technology that replaces faces in dynamic video. However, reface videos often "mix the fake with the real", which makes it harder for people to obtain genuine information, select trustworthy sources, and judge evidence. At the same time, it blurs the boundary between the virtual and the real, breaking the old cognitive rule that "seeing is believing, even if hearing may deceive". In the digital age, both static images and moving videos can be easily tampered with. Under the user-generated content (UGC) production mode, human faces are treated as symbols to be replaced, synthesized, abused, and spoofed, leading to a proliferation of fake videos whose content is extremely deceptive. Surrounded by such material, the public struggles to distinguish the true from the false and ultimately gets lost between the virtual and the real.
This blog post discusses the ethical disputes triggered by AI face swap technology, focusing on the boundary between AI technology and ethics. In 2017, a Reddit user uploaded a number of pornographic videos in which the performers' faces had been replaced with those of Hollywood actresses. In 2019, an AI face swap app called "ZAO" went viral overnight, but was ordered offline by government regulators over privacy and security issues. Using these two cases, one of an individual social media user and one of a software development company, this blog analyzes how individuals and enterprises use AI technology to pursue different needs and purposes, and the unethical behavior involved in the process. After a brief entertainment carnival, it is time to return to reason and think calmly about the ethical issues lurking behind artificial intelligence.
DeepFake

This case examines how a social media user exploited AI technology to create revenge porn, and the legal and ethical issues involved. In 2017, a Reddit user named "deepfake" used AI technology to create "fake" pornographic videos, grafting the face of Gal Gadot onto a porn performer, which made him famous overnight. Even after Reddit banned his account under moral and public pressure, netizens continued to use his username "DeepFake" as the name for the face swap technology itself (Banks, 2018). Beyond "Wonder Woman" Gal Gadot, his "works" targeted many others, including Emma Watson, Maisie Williams, and Scarlett Johansson. The AI traces in these "works" are obvious: Google Images and YouTube videos served as source material, and open-source deep learning software such as TensorFlow and Keras was used to synthesize the results (Holliday, 2021). Face swapping itself is not new to film production, but earlier AI synthesis for film was very complicated, and professional video editors and CGI experts needed to spend enormous time and effort to complete a face swap video. The emergence of DeepFake marked a breakthrough in AI technology: with DeepFake, a fake reface video can be created with only one GPU and some training data (Holliday, 2021; Sanchez, 2018).
The real problem hidden here is that using open-source AI frameworks for face swapping is not complicated or avant-garde at all, but simple and easy. It is precisely the availability and low cost of the technology that has drawn everyone into an ethical crisis. In the past, such reface technology, however superb, was not worrying: it served the artistic needs of film, demanded heavy labor and post-production costs, and required professional editing skills. But return to the "deepfake" example above. He not only released his "works" for free, but also shared a tutorial for making face swap videos, together with the deep learning code and related data he had written. According to his own account, making a face swap video is very easy. Taking the Gal Gadot video as an example, he only needed to collect Gal Gadot's videos and pictures from online galleries to form a material library; then, using a machine vision model built on TensorFlow, let the model find the faces in the material library that best fit the target; and finally composite those faces onto the original video (Bode, Lees & Golding, 2021). Although the videos he made are still flawed in many details, at a glance they are hard to distinguish from the real thing, and the production quality keeps improving.
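To make the point about ordinary building blocks concrete, below is a minimal conceptual sketch, not the Reddit user's actual code, of the shared-encoder, two-decoder autoencoder idea usually described as the core of early deepfakes, written with the open-source Keras API mentioned above. All names, image sizes, and layer choices are illustrative assumptions, and the steps that make a convincing video (face detection, alignment, large training sets, blending back into frames) are deliberately omitted.

```python
# Conceptual sketch only: standard Keras layers, not a working face-swap pipeline.
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG = (64, 64, 3)  # tiny illustrative face crops (assumed size)

def build_encoder():
    # One encoder shared by both identities; it learns generic face structure.
    inp = layers.Input(shape=IMG)
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    z = layers.Dense(256, activation="relu")(x)  # shared latent "face code"
    return Model(inp, z, name="shared_encoder")

def build_decoder(name):
    # One decoder per identity; each learns to redraw only its own person.
    z = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(z)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(z, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_person_a")  # trained only on person A's faces
decoder_b = build_decoder("decoder_person_b")  # trained only on person B's faces

# Two autoencoders sharing one encoder, each trained to reconstruct its own person.
auto_a = Model(encoder.input, decoder_a(encoder.output))
auto_b = Model(encoder.input, decoder_b(encoder.output))
auto_a.compile(optimizer="adam", loss="mae")
auto_b.compile(optimizer="adam", loss="mae")

# The "swap" idea: encode a frame of person A, then decode it with B's decoder.
# fake_b = decoder_b(encoder(frame_of_a))
```

The specific layers do not matter; what matters is that every line calls a standard open-source component, which is exactly why the ethical problem lies with the user rather than with any exotic technology.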
Moreover, although portrait rights, copyright, and reputation rights were defined and regulated by earlier laws, the intervention of networked intelligence has complicated AI ethics and further blurred the boundary between reality and the virtual. The anonymity of the Internet hides the infringing party and emboldens infringement, while the mixed release of true and false videos leaves audiences unable to tell what is real. Applying such technology to pornographic videos is itself an ethical failure. With weak legal and moral constraints, many innocent people, such as the entertainers who appear in porn films through no action of their own, can be framed and harassed at will; and as the line between reality and the virtual blurs, "fake videos" fuel the spread of fake news and greatly undermine the credibility of video as evidence. The Asilomar AI Principles state that super-intelligence and advanced technology should be developed in the service of widely shared ethical ideals and for the benefit of all humanity rather than one state, organization, or individual (Future of Life Institute, as cited in Sterling, 2018). Abusing face swap technology to produce and spread fake revenge pornography for private gain and attention makes AI play a seriously negative role in an unforeseen way.
ZAO

This example shifts the focus to how organizations and businesses use "unethical" AI technologies to profit while depriving the public of their legitimate rights and interests. At the end of August 2019, a mobile app called "ZAO" drew an enthusiastic response on Chinese social media platforms and briefly topped the app download charts. With ZAO, users only needed to take a selfie with their phone; after verification, AI technology would transform them into a film character or a famous star. However, many people were first alerted by an apology statement from the company before they had even downloaded the app, because the platform's user agreement hid "overlord clauses" concerning portrait rights, licensing, and re-publication. Three days after launch, ZAO was summoned by the Ministry of Industry and Information Technology (MIIT) and ordered to suspend the service for rectification (Doffman, 2019). From hit app to network-wide ban, ZAO survived only about 120 hours. After a short entertainment carnival, the public began to regain its rationality and gradually recognized the security risks hidden behind ZAO.
In fact, applications that use AI technology to provide services are not uncommon: from FaceU, which adds sticker effects to photos, to the online shopping mall provided by Amazon, to AI face swapping, AI has greatly enriched users' digital experience. However, the controversy around ZAO goes far beyond the field of AI applications; the doubts about this product reflect deeper thinking about digital ethics and privacy protection in the Internet age. For example, ZAO's user agreement stated that "ZAO and its affiliates can choose whether to use and how to use upload content, including but not limited to using and disseminating the content on the company's service platform, and re-editing the content for use" (Shao & Cheng, 2019). This means that once users accept the agreement, they effectively grant ZAO the right to use their portraits (both pictures and videos) anywhere, and these materials may be used by ZAO for other commercial purposes. Another clause reads, "Agree to grant ZAO and its affiliates the right to be completely free, irrevocable, perpetual, sublicensable worldwide" (Shao & Cheng, 2019), which means that any data or information provided to the app may remain on the Internet permanently.
In the digital era, technology and data suppliers enjoy strong technical advantages and the benefits of information asymmetry, while users are largely uninformed and at a disadvantage. Users' information and online behavior are constantly and automatically acquired, collected, and stored. Big-data privacy and the right to be forgotten (RTBF) have therefore become central issues in digital ethics. Audiences have grown accustomed to enjoying rich, interesting, and creative products and to sharing their daily lives on social media, yet this information is also stored permanently on servers, making it nearly impossible for users to protect their own data privacy. Behind this, the huge commercial benefits of exploiting users' personal information drive service providers to store, use, and sell it. The right to be forgotten is an important right that lets people choose how their personal information is handled in the context of big data. In the Internet age it is almost impossible to be completely forgotten, but users holding this right can choose to seal off outdated information they do not want others to discover (Walker, 2017). In "Delete", the author advocated an "Internet forgetting movement" for the digital age, aiming to help global villagers build a safe, positive, and better future (Schönberger, as cited in Walker, 2017, p. 273). The European Union took the lead in adding a "right to erasure", giving people the right to delete digital information that is inadequate, irrelevant, or outdated, and helping them escape an embarrassing past (Kelion, 2019). ZAO clearly challenged the right to be forgotten and brought that challenge into the open. Its "overlord clauses" undoubtedly leave users with huge security risks: one day, walking down the street or browsing the Internet, you may suddenly find your photos or videos in commercial advertisements or marketing materials, yet not know how to protect your rights, because without realizing it you had already granted ZAO the right to use your information.
Calm thinking behind the AI craze: ethics depends on humans, not technology
Whether it is deepfake using celebrity faces to make pornographic videos, or ZAO trying to strip users of their digital rights through word games, it is not hard to see that the ethical controversy around face swap videos stems not from the technology itself but from the people who use it. As the CEO of Naughty America put it, "Deepfakes don't hurt people, people using deepfakes hurt people" (Hronopoulos, as cited in Snow, 2018). Technology is a double-edged sword: its development brings positive change to society along with hidden dangers and threats. New technologies can neither be crudely suppressed nor allowed to be abused freely. AI face swap technology should therefore not be banned outright; what is needed is to develop relevant laws and ethical norms as soon as possible to regulate those who use and spread the technology.
Conclusion
In conclusion, through the cases of deepfake and ZAO, this blog has examined how individuals and businesses use AI technology to pursue different needs and purposes, and the many ethical issues involved along the way. When AI technology becomes a revenge tool for individual users seeking psychological stimulation, when the Internet becomes an accomplice that lets more people escape moral judgment, and when false information becomes dynamic and video-based, the public will gradually get lost between the virtual and the real. On the other hand, when organizations and businesses exploit legal loopholes to "legalize" the storage and sale of users' digital data, users lose the right to be forgotten forever. Digitization and networking have become an inseparable part of contemporary life. But where does the boundary between technology and ethics lie? There is no standard answer. Finding one is a process in which all parties building the Internet as a living space keep arguing, make concessions, and finally reach consensus, while Internet residents complete their self-education through one ethical debate after another. Once a basic consensus is reached, appropriate legislation and continuous supervision are needed to build a more dignified digital living space. Overall, for AI technology to be used reasonably and lawfully by more people, institutional arrangements must be built on top of its widespread application. In this process, the government, the scientific community, social organizations, market enterprises, and the public should each perform their duties, participate in governance in appropriate and reasonable roles, and work together to build an all-round, multi-stakeholder model of ethical governance.
Reference List:
- Banks, A. (2018). What Are Deepfakes & Why the Future of Porn is Terrifying. Retrieved from https://www.highsnobiety.com/p/what-are-deepfakes-ai-porn/
- Holliday, C. (2021). Rewriting the stars: Surface tensions and gender troubles in the online media production of digital deepfakes. Convergence: The International Journal of Research into New Media Technologies, 27 (4), 899-918. doi:10.1177/13548565211029412
- Sanchez, J. (2018, February 8). Thanks to AI, the future of ‘fake news’ is being pioneered in homemade porn. NBC News. Retrieved from https://www.nbcnews.com/think/opinion/thanks-ai-future-fake-news-may-be-easily-faked-video-ncna845726
- Bode, L., Lees, D., & Golding, D. (2021). The Digital Face and Deepfakes on Screen. Convergence: The International Journal of Research into New Media Technologies, 27 (4), 849–854. doi:10.1177/13548565211034044
- Sterling, B. (2018). The Asilomar AI Principles. Retrieved from https://www.wired.com/beyond-the-beyond/2018/06/asilomar-ai-principles/
- Doffman, Z. (2019). Chinese Deepfake App ZAO Goes Viral, Privacy Of Millions ‘At Risk’. Retrieved from https://www.forbes.com/sites/zakdoffman/2019/09/02/chinese-best-ever-deepfake-app-zao-sparks-huge-faceapp-like-privacy-storm/?sh=cecde8384700
- Shao, G., & Cheng, E. (2019, September 4). Chinese face-swapping app Zao takes dangers of 'deepfake' to the masses. CNBC News. Retrieved from https://www.cnbc.com/2019/09/04/chinese-face-swapping-app-zao-takes-dangers-of-deepfake-to-the-masses.html
- Walker, R. K. (2017). Note: The Right to Be Forgotten. Hastings Law Journal, 64(1), 264-275. doi:https://doi.org/10.1017/glj.2020.14
- Kelion, L. (2019, September 24). Google wins landmark right to be forgotten case. BBC News. Retrieved from https://www.bbc.com/news/technology-49808208
- Snow, J. (2018). An adult film company wants to put users into deepfake porn. Retrieved from https://www.fastcompany.com/90221476/an-adult-film-company-is-putting-users-into-porn-with-a-deepfake-tool
Picture References:
- Shaw, J. (2019). Artificial Intelligence and Ethics. Harvard Magazine, Cambridge, MA, United States. https://www.harvardmagazine.com/2019/01/artificial-intelligence-limitations
- Strickland, E. (2019). Facebook AI Launches Its Deepfake Detection Challenge. IEEE Spectrum, New York City, NY, United States. https://spectrum.ieee.org/facebook-ai-launches-its-deepfake-detection-challenge#toggle-gdpr
- Daniels, A. (2019). Giving Your Selfie to This Chinese App Is a Really Bad Idea. Popular Mechanics Magazine, New York City, NY, United States. https://www.popularmechanics.com/technology/security/a28898372/zao-deepfake-app-privacy-risks/