The Risk of Facial Recognition

– through the case of Hangzhou Wildlife World Zoo

On April 27, 2019, Guo Bing purchased an annual pass from Hangzhou Wildlife World zoo in China and provided the personal identity information it required, including his fingerprints and a portrait photo. The zoo later changed its admission procedure from fingerprint verification to facial recognition and sent Guo Bing a text message asking him to activate his facial information. Guo Bing argued that biometric data such as facial features constitutes sensitive personal information: once leaked, it is easily abused and can endanger personal and property safety. He asked the zoo to delete his photo and refund the pass; the zoo refused, and he filed a lawsuit with the People's Court of Fuyang District, Hangzhou. The case is regarded as the first facial recognition case in the Chinese judicial field. Personal facial information cannot be changed and can be used to identify a person directly, so its use bears strongly on personal rights and property safety; compared with general personal information, it should be treated with greater caution. Some commentators see the case as marking an awakening of Chinese users' awareness of data privacy, an inevitable reaction to years in which businesses and institutions collected large amounts of user information without standardized management, and they link it directly to earlier incidents in which companies leaked or sold large quantities of users' personal data (Wang, 2020). After a trial lasting four months, the judgment was publicly pronounced on April 9, 2021.
After the hearing, the court held that Guo Bing had independently decided to apply for the annual pass and had provided the relevant personal information, including photos, knowing that the contract required the recording of fingerprints. However, the zoo did not inform Guo Bing that the purpose of collecting his photo was facial recognition; although he agreed to be photographed, that agreement should not be regarded as consent to the recording of his facial information. By seeking to process the collected photos as facial recognition data, the zoo exceeded the stated purpose for which the information was collected and violated the legitimacy principle of personal information processing. The court therefore ordered that the facial feature information, including the photos Guo Bing had provided, be deleted.

The digitization, informatization, and intellectualization of society have made the problem of personal privacy increasingly significant. New technologies, typified by facial recognition, have spread rapidly amid widespread anxiety over weak privacy protection and forced trade-offs of privacy for services. Privacy in the digital society presents a new feature: privacy itself has become informatized, and the typical form of privacy and other personal information in this era is digital. Open an online shopping website and the goods on the front page happen to be what you just searched for; register as an e-member and you receive greetings on your birthday. This convenience, however, comes at the cost of personal information, and excessive collection of personal information by e-commerce and social media platforms is becoming a new social problem (Karppinen, 2017). In this information era, people's daily behavior is increasingly digital; everyone generates large amounts of data every day, which institutions may record without the individual's knowledge. In China, for example, some public toilets require facial recognition in the name of limiting the use of paper towels. Most people let down their guard because the whole process requires only taking a picture and no other operation; usually in a hurry, they inadvertently give away their facial recognition information.

With the deep integration of culture and technology, the widespread use of big data, facial recognition, QR codes, and other technologies has made people's work and lives more convenient. These new services and intelligent devices faithfully record people's activities both in real life and in the virtual space of the network. Users' private information is no longer limited to the traditional categories (name, gender, age, educational background, contact information, etc.) (Friedewald et al., 2017), and its boundary keeps expanding. For example, on December 12, 2019, the US magazine Fortune reported that an artificial intelligence company in San Diego had used high-quality 3D masks and photos to deceive facial recognition systems around the world, including China's WeChat and Alipay, and completed the shopping payment process (Roberts, 2019). The risk of personal privacy disclosure is increasing. Companies and platforms hold large volumes of user data that are exposed to mismanagement, hacker attacks, system vulnerabilities, personal operating errors, and covert third-party collection (Goggin, 2017), all of which raise the risk of disclosure. Big data and artificial intelligence technology can capture the mapping between massive datasets and public behavior (Levy & Barocas, 2018). With the help of big data and intelligent analysis, data collectors can profile users' characteristics, behavioral preferences, health status, social relations, geographical location, private behavior, and other sensitive personal information, so that human behavior and users' privacy preferences can be predicted with considerable accuracy, significantly increasing the risk of privacy disclosure. Facebook, for example, faced allegations in 2018 of profiting from user information.
Facebook had sold user information collected over five years to advertising companies so that they could target advertisements precisely at specific groups (Flew, 2018). As Tim Berners-Lee, the inventor of the World Wide Web, warned, we have lost control of our personal data: some Internet services do not charge us, but when we read their policy rules we realize that we are in fact paying, not with money but with our personal information (The Guardian, 2017).

Facial recognition data differs from other private data. It is sensitive biometric data that can reveal race and other sensitive attributes, can identify a person, and can easily be linked with other information (Garvie & Frankle, 2016). At the same time, it is non-renewable, permanent, and unique. Article 9 of the EU General Data Protection Regulation (GDPR) expressly prohibits the processing of "genetic data [and] biometric data for the purpose of uniquely identifying a natural person," except in special circumstances such as the explicit consent of the data subject. The way facial recognition is used in public places, however, conflicts with the principle of explicit consent, and both the validity of consent and the necessity of use are difficult to establish, which poses a serious challenge to the protection of personal privacy that the EU has always emphasized. Although facial recognition is a biometric technology, it differs from other biometric technologies such as fingerprint and iris recognition. Fingerprint and iris recognition cannot be completed without the subject's active cooperation in every situation, so authentication is always overt. Facial recognition, by contrast, falls into two broad categories: first, authentication that the subject knows about and that is mandatory for security, such as facial recognition at airports or before online payments; second, facial recognition in public places that requires no overt authentication and can collect facial information without the subject's knowledge, such as video surveillance (Gies et al., 2020). The EU's regulatory moves target the second category, in which there is no overt authentication process.
Moreover, even when the subject is aware of facial recognition in a public place, it is difficult to obtain express consent, and harder still to assess the validity of any consent given; whether the public's consent in such situations is truly voluntary is controversial (Selinger & Hartzog, 2020).

According to EurActiv's report of January 17, 2020, in the draft artificial intelligence white paper to be officially released by the European Commission in February, the Commission was considering banning the use of facial recognition technology in public places for the next three to five years (Stolton, 2020). According to the BBC, the ban was intended to give regulators time to study how to prevent the technology from being abused. It came as politicians and campaigners in Britain called on the police to stop using real-time facial recognition for public surveillance; the technology allows faces captured by closed-circuit television to be checked in real time against watch lists compiled by the police. Campaigners claimed that some places use facial recognition technology without informing the public, and that it is inaccurate and invasive, violates individuals' privacy, and may exacerbate identity fraud (BBC, 2020). The proposed ban aroused widespread concern and great controversy, and Microsoft and Google held opposing views. Sundar Pichai, CEO of Alphabet and Google, said the temporary ban was feasible, whereas Brad Smith, chief legal officer of Microsoft, believed such intervention should not be carried out. According to Reuters, Pichai said at a conference, "maybe there can be a waiting period before we really consider how to use it," and argued that the government and the law should address the problem as soon as possible and formulate a framework for it. Smith, however, said in an interview that a scalpel, rather than a simple ban, can solve the problem in a way that lets good things get done while stopping bad things from happening: "This is a young technology and it will get better. But the only way to make it better is to continue to develop it, and the only way to continue to develop it is to let more people use it" (Vincent, 2020). In the White Paper on Artificial Intelligence as officially released, the EU ultimately deleted the ban and instead encouraged member states to develop their own facial recognition rules, while suggesting that independent groups evaluate each proposed public use of the technology.

The protection of facial recognition data mainly involves balancing the interests of two parties: information processors and information subjects. Information subjects increasingly demand protection of their personal facial information, while processors expect a relaxed information processing environment. For the sake of national security and the public interest, the state also processes the personal information of its citizens, as in the "Skynet" system of China's public security and counter-terrorism departments and identity verification in civil aviation. Companies that process personal facial information, for their part, focus on accurately grasping market demand, increasing market share, and improving economic returns. Although facial recognition carries a potential risk of eroding people's privacy, it offers strong benefits for security, especially national security (Gies et al., 2020). As a category of sensitive personal information, personal facial information demands a comprehensive and accurate weighing of risks and benefits: it should be used only on the premise of reasonable protection, and the protection measures should be continuously improved in the course of its use.

 

References

BBC. (2020, January 17). Facial recognition: EU considers ban of up to five years. https://www.bbc.com/news/technology-51148501

Friedewald, M., Burgess, J. P., Čas, J., Bellanova, R. & Peissl, W. (eds). (2017). Surveillance, privacy and security: Citizens’ Perspectives. London: Routledge.

Garvie, C., & Frankle, J. (2016). Facial-recognition software might have a racial bias problem. The Atlantic.

Gies, W., Overby, J., Saraceno, N., Frome, J., York, E., & Salman, A. (2020, April). Restricting Data Sharing and Collection of Facial Recognition Data by the Consent of the User: A Systems Analysis. In 2020 Systems and Information Engineering Design Symposium (SIEDS) (pp. 1-6). IEEE.

Goggin, G. (2017). Digital rights in Australia. The University of Sydney. https://ses.library.usyd.edu.au/handle/2123/17587

Roberts, J. J. (2019, December 12). Airport and payment facial recognition systems fooled by masks and photos, raising security concerns. Fortune. https://fortune.com/2019/12/12/airport-bank-facial-recognition-systems-fooled/

Karppinen, K. (2017). Human rights and the digital. In H. Tumber & S. Waisbord (Eds.), The Routledge Companion to Media and Human Rights (pp. 95–103). https://doi.org/10.4324/9781315619835

Levy, K., & Barocas, S. (2018). Refractive surveillance: Monitoring customers to manage workers. International Journal of Communication, 12.

Selinger, E., & Hartzog, W. (2020). The inconsentability of facial surveillance. Loy. L. Rev., 66, 33.

Stolton, S. (2020, January 17). LEAK: Commission considers facial recognition ban in AI white paper. EURACTIV. https://www.euractiv.com/section/digital/news/leak-commission-considers-facialrecognition-ban-in-ai-white-paper/

Flew, T. (2018). Platforms on trial. Intermedia, 46(2), 18–23. https://eprints.qut.edu.au/120461/

The Guardian. (2017, March 11). Tim Berners-Lee: I invented the web. Here are three things we need to change to save it. https://www.theguardian.com/technology/2017/mar/11/tim-berners-lee-web-inventor-save-internet

Vincent, J. (2020, January 21). Google favors temporary facial recognition ban as Microsoft pushes back. The Verge. https://www.theverge.com/2020/1/21/21075001/facial-recognition-ban-googlemicrosoft-eu-sundar-pichai-brad-smith

Wang, J. (2020). Privacy reconstruction in digital society: Taking "face recognition" as an example. Exploration and Contention, (2), 6.