With the advent of the digital age, more and more technologies and applications appear on the market, constantly enriching people’s lives and bringing new entertainment experiences. One of them, a face-swapping app called ZAO, uses deepfake technology and has become popular on the Internet. With the app, users can turn on the camera, hand over their facial images, and replace other people’s faces in videos with their own. While users enjoy the dream of being an actor, the potential risks and problems are easily ignored. This article therefore takes ZAO’s use of deepfake technology as a case study to discuss these issues and possible future solutions.
ZAO and deepfake
Deepfakes have a history closely tied to media practice. According to Fikse (2018), the word “deepfake” originally came from a Reddit username; this user replaced the faces in pornographic videos with those of famous actresses, causing a sensation on the Internet. Although academia has no unified definition of the term, most scholars agree that deepfakes involve believable media generated by deep neural networks (Mirsky & Lee, 2022). A typical example of deepfake technology is video falsification, commonly known as AI face swapping. Its essential goal is to substitute a target face for the face of the original subject using generative algorithms such as generative adversarial networks (GANs) or convolutional neural networks (Yang et al., 2019). Specifically, deepfake technology first disassembles the original video into a large number of still frames, then performs uniform face-swapping on each frame, and finally recombines the altered frames into a dynamic fake video (Albahar & Almalki, 2019). Moreover, developments in deep learning have made this process far more efficient in recent years (Géron, 2017): more and more programmers apply techniques such as autoencoders and generative adversarial networks in new machine learning model designs (de Seta, 2021). To increase interactivity and remain attractive to users, famous technology companies including Apple and Bytedance have widely applied deepfake technology in their digital products (Gershgorn, 2020).
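To make this frame-by-frame pipeline concrete, here is a minimal Python sketch built on OpenCV. The `swap_face` function is a hypothetical placeholder for a trained face-swapping model (for example an autoencoder or GAN); ZAO’s actual implementation is proprietary and not shown here.

```python
# Minimal sketch of the frame-by-frame face-swapping pipeline described above.
# `swap_face` is a hypothetical stand-in for a trained model; everything else
# uses standard OpenCV calls.
import cv2
import numpy as np


def swap_face(frame: np.ndarray) -> np.ndarray:
    """Placeholder for a trained face-swapping model.

    A real implementation would detect the face region, encode it with a
    shared encoder, and decode it with the target identity's decoder.
    """
    return frame  # identity transform, for illustration only


def deepfake_video(src_path: str, dst_path: str) -> None:
    cap = cv2.VideoCapture(src_path)            # 1. open the source video
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()                  # 2. disassemble into still frames
        if not ok:
            break
        out.write(swap_face(frame))             # 3. swap each frame, 4. reassemble
    cap.release()
    out.release()
```

In practice, almost all of the engineering effort lies in the per-frame model rather than in this video plumbing, which is why advances in deep learning translate so directly into better deepfakes.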
As de Seta (2021) argues, the popularity of a deepfake application depends on whether it is friendly and easy for users to operate, and ZAO is undoubtedly a leader in this respect. Compared with traditional face-swapping software, it requires no advanced skills or study time. ZAO provides users with a selection of preset film and television clip templates. After rapid face recognition and authentication, users only need to upload their photos to become face-swapping experts. They can also share these face-swapped videos on other platforms, spreading them virally. The advent of ZAO shows that advanced digital technology is digitalising everything and rewriting our features and identities. Face-swapping software obtains large amounts of users’ personal information, separates voice, face and consciousness from one another, and finally merges them with other people’s characteristics to reconstruct identity. At the same time, potential risks arrive quietly. For example, photos and videos used to be regarded as reliable evidence of the truth; deepfake technology creates false images and videos that erode this boundary of trust. The face-swapping market is still in a grey area of supervision, one that commercialises people’s facial recognition information. In addition, using ZAO poses threats to privacy, such as those caused by privacy disclosure and privacy abuse.

Image. Screenshot of the ZAO download page in the App Store
Threats to digital privacy
First of all, the ZAO app may over-collect private information. ZAO’s face-swapping function requires users to upload at least one selfie and pass a photo-quality check based on clarity, pose and camera angle. To ensure that a user is a real person, ZAO also requires the user to turn on the camera and perform simple expression and motion checks, such as blinking and turning the head. Through this process, ZAO effectively obtains the user’s biometric information. Before registering, ZAO asks users to sign a user agreement and a privacy agreement. According to the ZAO privacy agreement (2022), the private information collected includes the user’s ID number, identity card number, contact details, communication records, and permission to access the mobile photo album. Together, these pieces of information depict the user’s real identity, yet they are not strictly necessary for the software’s core functions. Users rarely stop to ask whether this information is essential; in other words, they voluntarily or involuntarily hand over their personal information. By integrating it, ZAO can easily build an accurate portrait of each user: who they are in reality, how to contact them, and what they look like.
Disclosure of private information may be the most direct threat to digital privacy. According to the ZAO privacy agreement (2022), the company has the right to share the user information it obtains with cooperating third-party software. ZAO may therefore not be the only party that knows your private information: it commercialises personal information and profits by exchanging it with third parties. As a result, more spam may disturb users’ personal lives. At the same time, the wide circulation of private information may threaten property security, especially as mobile payment booms. According to Liu et al. (2021), as facial recognition payment services expand into ever more daily scenarios, the security of biometric information becomes increasingly critical. On this premise, we worry, on the one hand, whether the platform can sufficiently protect our personal information; on the other hand, we worry that deepfake technology may one day become mature enough to deceive the face recognition systems of payment software, which could lead to the theft of our digital wallets.
Although the platform tries to verify that users are who they claim to be, the verification method is simple enough for determined criminals to defeat, so the threat of privacy abuse remains. Any one of us may involuntarily play the leading role in a fabricated video. The essence of the deepfake is to induce and deceive others (Levine, 2019). Hancock and Bailenson (2021) point out that video has high information-carrying potential because of the dominance of the visual system. People have long tended to take what they see as the truth; deepfakes prove that videos can be fabricated, so people can no longer fully believe what they see and become more confused about the truth. The substantial harm of deepfakes is that, by forcibly disclosing private information, they show involuntary victims doing or saying things they never did. In addition, such content can be continuously copied and shared across platforms, making the negative impact on the target more extensive and lasting. Deepfake face manipulation can be used for bullying, revenge pornography, political sabotage, fabricated video evidence, blackmail and other purposes (Albahar & Almalki, 2019). Given the powerful persuasion of the visual system, if people cannot shake the traditional belief in sensory perception, the damage to a victim’s life may be irreparable. In these cases, the right to privacy is linked to other, broader rights. For example, some videos may create negative images of victims and harm their reputation and dignity, and celebrities’ faces may be swapped for political ends, undermining the democracy of digital politics (Albahar & Almalki, 2019; Reid, 2021). Therefore, when we discuss privacy risks in the digital age, we should consider both direct and indirect threats.
In addition, accountability after privacy violations is also an important issue. According to the ZAO user agreement (2022), the user is responsible for keeping the account and password confidential and bears all legal liability for the registered account. The agreement also states that the password-appeal mechanism only checks that the information submitted in an appeal matches the system’s records; ZAO cannot confirm whether the complainant is the authorised user of the account. This means that ZAO does not assume platform responsibility for these privacy risks, instead shifting the problem onto the impostor. Moreover, ZAO’s video templates may violate the privacy of their original authors, because most templates are uploaded spontaneously by users. The content of many videos concerns the personal lives of copyright owners, and it is difficult to judge whether public dissemination should be allowed. In this regard, ZAO’s agreement sets out no corresponding liability or procedure for handling infringement.
Future countermeasures
As technology advances, existing laws may not provide an adequate defence for privacy victims. After studying many privacy-related deepfake cases, Reid (2021) found that most must draw on other areas of law, such as defamation and publicity law, for supplementary protection. Personal biometric information, in particular, is unique and permanent; its sensitivity and use-value may make breaches more damaging than those of ordinary private information. Therefore, privacy law should be continually extended to cover advanced technologies, and a standardised supervision mechanism should be established for the industries built on them.
The ZAO case also suggests that, to better protect users’ privacy, platforms must implement their supervisory responsibilities more effectively. If digital platforms strengthen the supervision and protection of private information, the security of user privacy will improve. As Lewin (1947) observed, there are gatekeepers in the channels through which information flows: they review and check information, and only information that meets group norms or value standards enters the communication channels. On the other hand, in the face of deceptive technologies such as deepfakes, developing recognition and detection technology is also very important for preventing the losses caused by facial information theft.
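As a rough illustration of what such detection technology can look like, the sketch below flags physically implausible head-pose jumps between consecutive frames, a simplified cousin of the landmark-based pose inconsistencies studied by Yang et al. (2019). The landmark ordering, the threshold, and the pinhole-camera approximation are all illustrative assumptions; landmark extraction itself is assumed to come from an external detector (e.g. dlib or MediaPipe) and is not shown.

```python
# Toy deepfake screening heuristic: estimate head pose per frame from six
# facial landmarks, then flag implausibly large pose jumps between frames.
import cv2
import numpy as np

# Generic 3D positions (in mm) of six facial landmarks, a common reference
# model for head-pose estimation. The caller must supply 2D landmarks in the
# same order: nose tip, chin, left eye corner, right eye corner,
# left mouth corner, right mouth corner.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0), (0.0, -330.0, -65.0),
    (-225.0, 170.0, -135.0), (225.0, 170.0, -135.0),
    (-150.0, -150.0, -125.0), (150.0, -150.0, -125.0),
])


def head_pose(landmarks_2d: np.ndarray, frame_w: int, frame_h: int) -> np.ndarray:
    """Estimate a rotation vector from six 2D landmarks (shape (6, 2), float)."""
    focal = frame_w  # crude pinhole approximation: focal length ~ image width
    camera = np.array([[focal, 0, frame_w / 2],
                       [0, focal, frame_h / 2],
                       [0, 0, 1]], dtype=float)
    _, rvec, _ = cv2.solvePnP(MODEL_POINTS, landmarks_2d, camera, None)
    return rvec


def flag_unstable_pose(landmark_seq, frame_w, frame_h, max_jump_deg=15.0) -> bool:
    """Return True if head pose jumps implausibly between consecutive frames."""
    poses = [head_pose(lm, frame_w, frame_h) for lm in landmark_seq]
    for a, b in zip(poses, poses[1:]):
        # Magnitude of the rotation-vector difference: a crude proxy for
        # the pose change between two frames, converted to degrees.
        if np.degrees(np.linalg.norm(a - b)) > max_jump_deg:
            return True
    return False
```

Production detectors are far more sophisticated, but the underlying idea is the same: look for physical or statistical traces that the generation process leaves behind.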
Privacy calculus is closely related to personal decision-making: it weighs the benefits of disclosing personal information against the risks of that disclosure (Culnan & Armstrong, 1999). The method also applies to analysing the economic transaction value and competitive factors of personal information in the digital age. Liu et al. (2021) found that perceived privacy risk and perceived return, as two opposing variables, play vital roles in the privacy calculus. As personal biometric data is used in ever more daily scenarios, its sensitivity becomes more prominent, and people may grow increasingly anxious about their privacy, conducting the privacy calculus more rigorously before making decisions. Facing the privacy threats of the digital age, media users should therefore improve their awareness of privacy protection, deciding whether to use high-risk applications such as ZAO and disclosing personal information only under appropriate circumstances.
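As a toy formalisation of this calculus, the snippet below treats disclosure as a simple weighted trade-off. The variables and weights are illustrative assumptions, not the actual models of Culnan and Armstrong (1999) or Liu et al. (2021).

```python
# Toy privacy calculus: disclose personal information only if perceived
# benefits outweigh perceived risks. All values here are illustrative.
def should_disclose(perceived_benefit: float,
                    perceived_risk: float,
                    risk_aversion: float = 1.0) -> bool:
    """Return True if the weighted benefit-risk trade-off favours disclosure.

    A more privacy-anxious user (higher risk_aversion) demands a larger
    benefit before disclosing, mirroring the more rigorous calculus the
    text anticipates for sensitive biometric data.
    """
    return perceived_benefit - risk_aversion * perceived_risk > 0


# Example: the fun of a face-swapped video (benefit 0.4) rarely outweighs
# handing over biometric data (risk 0.7) for a cautious user.
print(should_disclose(0.4, 0.7, risk_aversion=1.5))  # False
```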
Taking ZAO and deepfake technology as a case study, this article has discussed the issues of concern in digital privacy. Personal information and biometric information both contribute to constructing a real identity, and a threat to any element may affect users’ security. Leakage of digital privacy can disturb users’ real lives and enable property theft; infringements of confidentiality may also implicate other personality rights, and the risks associated with privacy may be direct or indirect. In addition, online accountability for privacy remains problematic. Facing these conditions, we need to keep paying attention to, and thinking about, future countermeasures. Relevant laws and industry norms must be continuously improved; platforms should more actively assume responsibility for supervising and protecting users’ privacy; and users need to improve, at the root, their awareness of evaluating and protecting their digital privacy.
Reference list:
Albahar, M., & Almalki, J. (2019). Deepfakes: Threats and countermeasures systematic review. Journal of Theoretical and Applied Information Technology, 97(22), 3242–3250.
Culnan, M. J., & Armstrong, P. K. (1999). Information privacy concerns, procedural fairness, and impersonal trust: An empirical investigation. Organization Science, 10(1), 104–115. https://doi.org/10.1287/orsc.10.1.104
de Seta, G. (2021). Huanlian, or changing faces: Deepfakes on Chinese digital media platforms. Convergence, 27(4), 935–953. https://doi.org/10.1177/13548565211030185
Fikse, T. D. (2018). Imagining deceptive deepfakes: An ethnographic exploration of fake videos (Master’s thesis).
Géron, A. (2017). Hands-on machine learning with Scikit-Learn and TensorFlow: Concepts, tools, and techniques to build intelligent systems (1st ed.). O’Reilly Media.
Gershgorn, D. (2020). We feared deepfakes. Then tech monetized them. Medium. Retrieved 5 April 2022, from https://onezero.medium.com/we-feared-deepfakes-then-tech-monetized-them-736746626d9c
Hancock, J. T., & Bailenson, J. N. (2021). The social impact of deepfakes. Cyberpsychology, Behavior, and Social Networking, 24(3), 149–152. https://doi.org/10.1089/cyber.2021.29208.jth
Levine, T. R. (2019). Duped: Truth-default theory and the social science of lying and deception. The University of Alabama Press.
Lewin, K. (1947). Frontiers in group dynamics: II. Channels of group life; social planning and action research. Human Relations, 1(2), 143–153. https://doi.org/10.1177/001872674700100201
Liu, Y., Yan, W., & Hu, B. (2021). Resistance to facial recognition payment in China: The influence of privacy-related factors. Telecommunications Policy, 45(5), 102155. https://doi.org/10.1016/j.telpol.2021.102155
Mirsky, Y., & Lee, W. (2022). The creation and detection of deepfakes: A survey. ACM Computing Surveys, 54(1), 1–41. https://doi.org/10.1145/3425780
Reid, S. (2021). The deepfake dilemma: Reconciling privacy and First Amendment protections. University of Pennsylvania Journal of Constitutional Law, 23(1), 209–.
Yang, X., Li, Y., & Lyu, S. (2019). Exposing deep fakes using inconsistent head poses. ICASSP 2019 – 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 8261–8265. https://doi.org/10.1109/ICASSP.2019.8683164
ZAO user agreement. (2022). Retrieved 5 April 2022, from https://h5.ai-indestry.com/fep/momozao/static-pages/protocol-new.html?name=eula
ZAO privacy agreement. (2022). Retrieved 5 April 2022, from https://h5.ai-indestry.com/fep/momozao/static-pages/protocol-new.html?name=privacy