The Dilemma of Privacy Threats in the Era of Artificial Intelligence

Introduction

In the era of Artificial Intelligence (AI), the widespread use of intelligent applications means that we live amid data and algorithms every day. Meanwhile, the Internet, big data, and related technologies have accelerated the adoption of AI applications, improving our quality of life (Bernes, 2021). However, while technological development brings opportunities to society, we cannot ignore its disadvantages and the series of negative impacts that accompany them. Especially in today’s era of transparency without privacy, as we cede some of our rights and interests in exchange for the convenience of intelligent applications, the potential misuse of technology poses an unprecedented threat to the privacy of sensitive personal information.

Against this background, this paper first examines the root causes of privacy threats in AI applications along two dimensions. It then describes three main types of privacy infringement in AI applications to illustrate the enormous risks that the abuse of AI technology poses to personal information. Finally, drawing on the cases of personal information leakage through face recognition technology exposed in China’s ‘315’ consumer rights program, it analyzes the trend whereby the illegal use of AI technology gradually turns personal privacy into a commercial commodity.

 

Causes of privacy risks in AI applications

  • Technical factors

The development of AI technology is inseparable from big data technology, and big data technology in turn accelerates the evolution of AI. The big data pipeline comprises four stages: data collection, storage, utilization, and destruction. When big data technology is used to pursue personal gain, sensitive personal data is inevitably exchanged for those benefits, and the negative effects that follow must be endured at the same time.

According to Saeed et al. (2022, p. 388), the digitization and sheer volume of information mean that personal privacy in the big data environment is expressed in the form of data, exposing it to infringements such as location tracking, data attacks, and information theft. During personal data collection, the size and complex structure of the data prevent protection technologies from effectively separating sensitive data from the rest. As a result, data flows are no longer restricted by time and space, which makes data acquisition significantly more covert (Kitchin, 2014).
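
To make this concrete, the following minimal Python sketch (with invented column names and toy data) illustrates a classic linkage attack: two datasets that each look harmless on their own can be joined on shared quasi-identifiers, such as postcode and birth year, to re-identify individuals even after names have been removed.

```python
import pandas as pd

# "Anonymized" health records: names removed, but quasi-identifiers remain.
health = pd.DataFrame({
    "postcode":   ["2000", "2000", "3051"],
    "birth_year": [1980, 1992, 1980],
    "diagnosis":  ["diabetes", "asthma", "hypertension"],
})

# A public dataset (e.g., a voter roll) sharing the same quasi-identifiers.
public = pd.DataFrame({
    "name":       ["A. Smith", "B. Jones", "C. Wu"],
    "postcode":   ["2000", "2000", "3051"],
    "birth_year": [1980, 1992, 1985],
})

# Joining on the shared quasi-identifiers re-attaches names to diagnoses.
reidentified = health.merge(public, on=["postcode", "birth_year"])
print(reidentified[["name", "diagnosis"]])
```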

  • Subjective factors

A lack of privacy awareness also creates privacy risks. On the one hand, awareness of the need to respect others’ privacy is often absent, which means personal data may be used, stolen, or even abused by others at any time, making privacy security issues increasingly prominent. On the other hand, most people use intelligent devices for online activities: social applications, for example, routinely require new users to register with accurate personal information, yet people guard their private data far too loosely.

 

Main types of privacy infringement in AI applications

  • Excessive collection of personal information

Regarding the over-collection of user information, many mobile apps are able to access private data from the background and collect users’ personal information without their permission or consent. During installation, many apps pop up messages asking users to authorize access to various types of information, including contacts, messages, camera, microphone, and geolocation, and require users to accept the app’s privacy provisions. Under these circumstances, users have no choice but to accept these take-it-or-leave-it clauses and surrender part of their rights (Liu et al., 2022).
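
As an illustration of how such permission requests can be audited, here is a hedged Python sketch that parses a hypothetical AndroidManifest.xml and flags requested permissions falling into the sensitive categories listed above; the sensitive-permission list is an illustrative assumption, not an official taxonomy.

```python
import xml.etree.ElementTree as ET

# Permissions commonly regarded as privacy-sensitive (illustrative list).
SENSITIVE = {
    "android.permission.READ_CONTACTS",
    "android.permission.READ_SMS",
    "android.permission.CAMERA",
    "android.permission.RECORD_AUDIO",
    "android.permission.ACCESS_FINE_LOCATION",
}

# ElementTree exposes namespaced attributes as "{uri}name".
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def flag_sensitive_permissions(manifest_path: str) -> list:
    """Return the sensitive permissions an AndroidManifest.xml requests."""
    root = ET.parse(manifest_path).getroot()
    requested = {
        elem.get(f"{ANDROID_NS}name")
        for elem in root.iter("uses-permission")
    }
    return sorted(requested & SENSITIVE)

# Hypothetical usage:
# print(flag_sensitive_permissions("AndroidManifest.xml"))
```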

Picture 1: 9 out of 10 health apps collect personal data improperly

https://voonze.com/9-out-of-10-health-apps-collect-personal-data-improperly/

The excessive collection of personal information by these apps poses considerable risks to the protection of citizens’ privacy. In virtual space, people rely on all kinds of information and analyzed data to achieve ‘digital survival’ and acquire a ‘digital personality’ (Karyda, 2021, p. 258). Kirtley and Shally-Jensen (2019) point out that a digital personality includes both a voluntary and a forced component. The voluntarily established personality consists mainly of information provided by the data subject, including identity information, health status, social relationships, property status, and daily and online activities. The forcibly established personality consists mainly of private information that device and application providers obtain by using AI technology to analyze users’ usage records and personal information, such as consumption preferences (Francis & Francis, 2020, p. 43). Social media platforms and shopping websites then use these profiles for precision marketing.
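
For intuition about the ‘forced’ personality, the following hypothetical Python sketch infers a consumption-preference profile from raw usage records the user never deliberately disclosed; all field names, events, and weights are invented for illustration.

```python
from collections import Counter

# Hypothetical clickstream records logged by an app in the background.
events = [
    {"user": "u1", "item_category": "running shoes", "action": "view"},
    {"user": "u1", "item_category": "running shoes", "action": "purchase"},
    {"user": "u1", "item_category": "protein powder", "action": "view"},
    {"user": "u1", "item_category": "running shoes", "action": "view"},
]

def infer_preferences(events, user):
    """Build a simple preference profile: categories weighted by engagement."""
    weights = {"view": 1, "purchase": 5}  # a purchase signals stronger interest
    profile = Counter()
    for e in events:
        if e["user"] == user:
            profile[e["item_category"]] += weights[e["action"]]
    return profile.most_common()

print(infer_preferences(events, "u1"))
# e.g. [('running shoes', 7), ('protein powder', 1)] -> 'fitness enthusiast'
```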

 

  • Illegal disclosure of personal privacy

Against the background of big data, AI technology plays an increasingly important role across industries, penetrating various fields and driving their development. At the same time, we must admit that it brings unavoidable hidden threats. Take the Facebook data leakage incident as an example. In March 2018, the New York Times revealed that Facebook had allowed the private information of more than 50 million users to be leaked to a company named Cambridge Analytica (Velempini & Nyoni, 2018, p. 29). The leaked data included users’ phone numbers and names, identity information, educational background, and credit information, and it was used for targeted advertising.

Picture 2: 533 million Facebook users’ data leaked – are you one?

https://www.blackphone.co.uk/news-533-million-facebook-users-data-leaked-are-you-one-141

In this case, on the one hand, ordinary users of intelligent applications lacked a sense of crisis about their private data and took no protective measures. On the other hand, Facebook’s application rules allowed related information to be collected once the user gave a single authorization, and users who left their privacy settings at the default public option gave third parties an opportunity to capture their data. Therefore, as Kang (2019) argues, Facebook has been criticized because it failed to protect users’ private data and lacked a critical review of the purposes for which third parties obtained data. It also lacked the necessary monitoring of how third parties actually used the data, allowing personal data to be abused by interested parties. The platform itself did not directly leak users’ data; rather, a third party misused it. By authorizing third-party access so loosely, the platform accelerated the disclosure of private information.

  • Illegal trade in personal privacy

In the era of AI, the trading of personal information has formed a complete industrial chain. In this digital virtual space, almost all of a person’s crucial private information is exposed, including ID number, home address, license plate number, mobile phone number, and accommodation records, and all of it is for sale. The smartphones, computers, and social media platforms we use every day record our life trajectories, enabling various spam advertisements and emails, including sales calls and fraud messages, to be pushed to us with precision (ACCC, 2018, p. 181).

If we ask how these companies come to understand user preferences and interests so accurately, the answer lies in the illegal trading of personal privacy within AI applications. When users entrust personal information to websites or enterprises, those enterprises often share and trade it illegally with other individuals and enterprises while ignoring the security of citizens’ privacy.

 

Case study of face recognition information leakage

As an essential branch of AI technology, face recognition is a biometric technology that identifies people by analyzing their facial features. In China, portrait collection currently appears in immigration management, road traffic management, and criminal investigation, and it is applied in ‘face verification’ at high-speed railway stations and in ‘face-scanning payment’ on platforms such as Alipay. The application scope of face recognition technology is clearly expanding.

Face recognition technology identifies facial features and binds face information to personal identity, financial information, behaviour patterns, location, and other private information. Goggin (2017) argues that such information is ‘sensitive information’ closely tied to personal privacy, and its misuse could cause permanent and irreversible damage. Moreover, compared with fingerprints, genes, voice, and other biometric identifiers, the human face is far more exposed: face images can be collected passively and remotely without being easily detected, which means facial information is easier to steal, resulting in the leakage of personal privacy (Marwick & Boyd, 2018, p. 1158).
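
As a hedged illustration of this binding step, the Python sketch below uses the open-source face_recognition library to turn face images into numeric encodings and match a camera frame against a stored identity record; the file names and the identity record are hypothetical.

```python
import face_recognition

# Enrolment: compute an encoding from a known photo (hypothetical file name);
# [0] assumes exactly one face is present in the enrolment image.
known_image = face_recognition.load_image_file("enrolled_customer.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# The encoding can be bound to arbitrary personal records (invented here).
identity_record = {"name": "Customer 42", "phone": "***", "visits": 3}

# Recognition: a frame captured later, e.g. by an in-store camera.
probe_image = face_recognition.load_image_file("camera_frame.jpg")
for probe_encoding in face_recognition.face_encodings(probe_image):
    match = face_recognition.compare_faces(
        [known_encoding], probe_encoding, tolerance=0.6
    )[0]
    if match:
        identity_record["visits"] += 1
        print("Recognized:", identity_record)
```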


Picture 3: ‘It is impossible to prevent the leakage of face data in China, where the introduction of face recognition technology is advancing,’ said an expert.

https://gigazine.net/gsc_news/en/20201013-china-facial-recognition-security

At the 2021 ‘315’ evening gala for International Consumer Rights Day broadcast by CCTV (2021), the program exposed many shops, especially the ‘leaders’ in various industries, that use specially made cameras to obtain and exploit face information. In the offline stores of many bathroom-fixture companies, cameras with face recognition functions are installed. They capture customers’ facial information clearly and accurately, record each customer’s gender, age, and mood, and even allow staff to add all kinds of remarks and labels to captured customers manually. Once a customer is tagged in the system, all stores know that customer’s identity information, and which store the customer visits in the future, and how frequently, can be seen through the system. Under these circumstances, individual behaviours, preferences, and consumption habits are entirely exposed to the merchants, who use them for ‘precision pushing’ or ‘big data discriminatory pricing’.
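
The mechanism described above is essentially a shared customer ledger keyed by face identity. The hypothetical Python sketch below shows how little machinery such cross-store tracking requires once faces have been matched to IDs; the store names and labels are invented for illustration.

```python
from collections import defaultdict
from datetime import datetime

# Shared across all stores of the chain: face_id -> profile with labels.
profiles = defaultdict(lambda: {"labels": set(), "visits": []})

def record_visit(face_id, store, label=None):
    """Log a recognized visit; a label added in one store is seen by all."""
    profile = profiles[face_id]
    profile["visits"].append((store, datetime.now()))
    if label:
        profile["labels"].add(label)

record_visit("face_0001", "Store A", label="price-sensitive")
record_visit("face_0001", "Store B")  # Store B sees Store A's label

visits = profiles["face_0001"]["visits"]
print(len(visits), "visits;", profiles["face_0001"]["labels"])
```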

Picture 4: Nearly 80% of people in China worry about facial recognition data leaks

https://www.scmp.com/abacus/news-bites/article/3041300/nearly-80-people-china-worry-about-facial-recognition-data-leaks

Because of the uniqueness of the human face, many commercial service providers continuously use face recognition technology to collect customers’ facial information and build corresponding profiles without consumers being aware of it. These providers do not inform consumers that the information is being collected, and consumers have no way of knowing whether a store is performing face recognition. Meanwhile, according to the Face Recognition Technology Abuse Report (NISSTC, 2021), more than 80 percent of apps have facial recognition installed; half of those apps neither ask users for permission to enable facial recognition nor provide a separate protocol for collecting faces. This kind of face-information abuse is evidently prevalent. Merchants combine the collected face information with consumers’ home addresses, jobs, social contacts, ages, preferences, and other personal information, extracting even more detailed profiles through integrated analysis. This is not only unfair to consumers but also an infringement of citizens’ privacy.

 

Recommendations for Privacy Protection in AI Applications

As these cases show, privacy invasions caused by AI technology are countless; they are an inevitable by-product of technological development and social progress, and they constitute a challenge that must be met actively. Therefore, while expanding the application fields of AI, all parties should also strengthen privacy protection and respond actively and effectively to the privacy infringement risks in AI applications.

  • Improve privacy protection legislation

Take China as an example: it has long lacked a dedicated data protection or privacy protection law, with privacy provisions scattered across different statutes, which makes it difficult to address the privacy risks in AI applications. In privacy protection legislation, on the one hand, notification rules need to be improved: data controllers should disclose information to users promptly in a clear, detailed, and comprehensible way and guarantee users adequate access (Karppinen, 2017, p. 96). On the other hand, data subjects should be granted ownership rights that enhance their control over personal data in risky AI environments, such as the right to data portability and the right to be forgotten (Friedewald et al., 2020, p. 21).
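
To make these two rights concrete, here is a minimal, hypothetical Python (Flask) sketch of the endpoints a service honouring data portability and erasure might expose; the endpoint paths and in-memory store are assumptions, not any real platform’s API.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory store; a real service would use a database.
user_data = {"u1": {"name": "A. User", "preferences": ["running shoes"]}}

@app.get("/users/<uid>/export")
def export_data(uid):
    """Right to data portability: hand the user a copy of their data."""
    return jsonify(user_data.get(uid, {}))

@app.delete("/users/<uid>")
def erase_data(uid):
    """Right to be forgotten: delete the user's data on request."""
    user_data.pop(uid, None)
    return jsonify({"deleted": uid})

if __name__ == "__main__":
    app.run()
```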

  • Strengthen industry self-discipline for privacy protection

As practitioners of AI technology, intelligent enterprises should regulate and manage themselves well, compensating to some extent for the lag of the law and promoting the industry’s sound development. Torra (2017) suggests starting from three aspects: establishing an industry self-regulatory organization, drawing up an industry-wide self-regulatory convention, and implementing a privacy security certification mechanism.

  • Enhance people’s awareness of privacy protection

In the context of big data, most people use intelligent mobile terminals to interact on the Internet, and they usually do not pay enough attention to their personal information. According to Mori and Camp (2020, p. 23), when registering, logging in, or browsing on website platforms, people expose their personal information at will and disclose private information to the public without hesitation, creating opportunities for infringement. Murdoch (2021, p. 68) argues that the relevant government departments should publicize personal privacy protection effectively, so that people understand how personal privacy is disclosed, how harmful disclosure can be, and why protecting personal privacy matters.

 

Conclusion

To sum up, the application of AI makes information exchange and sharing more rapid and agile. As data are constantly collected and integrated, problems such as the continuous collection of personal information, illegal disclosure, and even illegal trading are becoming increasingly prominent. With face recognition technology, cases in which businesses illegally collect sensitive facial information, archive it, and tag consumers are no longer rare. Therefore, only by perfecting privacy protection legislation, strengthening industry self-discipline, and enhancing people’s awareness of privacy protection can we cope with the threats to privacy in the era of big data.

 

References

Australian Competition and Consumer Commission. (2018). ACCC and AER issue joint 2018–19 annual report. M2 Presswire.

Bernes, A. (2021). Privacy and data protection in software services. Springer.

China Central Television. (2021). How is your personal information leaked? [315 evening gala broadcast].

Francis, L., & Francis, J. G. (2020). Privacy: What everyone needs to know. Oxford University Press.

Friedewald, M., Önen, M., Lievens, E., Krenn, S., & Fricker, S. (Eds.). (2020). Privacy and identity management. Data for better living: AI and privacy. 14th IFIP WG 9.2, 9.6/11.7, 11.6/SIG 9.2.2 International Summer School, Windisch, Switzerland, August 19–23, 2019, revised selected papers. Springer. https://doi.org/10.1007/978-3-030-42504-3

Goggin, G. (2017). Digital rights in Australia. The University of Sydney. https://ses.library.usyd.edu.au/handle/2123/17587

Kang, M. (2019). The roadmap to 6G security and privacy. IEEE Open Journal of the Communications Society, 2, 1094–1122. https://doi.org/10.1109/OJCOMS.2021.3078081

Karppinen, K. (2017). Human rights and the digital. In H. Tumber & S. Waisbord (Eds.), The Routledge companion to media and human rights (pp. 95–103). Routledge. https://doi.org/10.4324/9781315619835

Karyda, M. (2021). Forming digital identities in social networks: The role of privacy concerns and self-esteem. Information and Computer Security, 29(2), 240–262. https://doi.org/10.1108/ICS-01-2020-0003

Kirtley, J., & Shally-Jensen, M. (2019). Privacy rights in the digital age. Grey House Publishing.

Kitchin, R. (2014). The data revolution: Big data, open data, data infrastructures and their consequences. SAGE.

Liu, Huang, L., Yan, W., Wang, X., & Zhang, R. (2022). Privacy in AI and the IoT: The privacy concerns of smart speaker users and the Personal Information Protection Law in China. Telecommunications Policy, 46(7), 102334. https://doi.org/10.1016/j.telpol.2022.102334

Marwick, A., & Boyd, D. (2018). Understanding privacy at the margins: Introduction. International Journal of Communication, 12, 1157–1165.

Mori, P., Furnell, S., & Camp, O. (Eds.). (2020). Information systems security and privacy: 5th International Conference, ICISSP 2019, Prague, Czech Republic, February 23–25, 2019, revised selected papers. Springer. https://doi.org/10.1007/978-3-030-49443-8

Murdoch, B. (2021). Privacy and artificial intelligence: Challenges for protecting health information in a new era. BMC Medical Ethics, 22(1), Article 122. https://doi.org/10.1186/s12910-021-00687-3

National Information Security Standardization Technical Committee (NISSTC). (2021). Face recognition technology abuse 2021–22 annual report.

Saeed, Hasan, M. K., Obaid, A. J., Saeed, R. A., Mokhtar, R. A., Ali, E. S., Akhtaruzzaman, M., Amanlou, S., & Hossain, A. K. M. Z. (2022). A comprehensive review on the users’ identity privacy for 5G networks. IET Communications, 16(5), 384–399. https://doi.org/10.1049/cmu2.12327

Torra, V. (2017). Data privacy: Foundations, new developments and the big data challenge. Springer. https://doi.org/10.1007/978-3-319-57358-8

Velempini, M., & Nyoni, P. (2018). Privacy and user awareness on Facebook. South African Journal of Science, 114(5–6), 27–31. https://doi.org/10.17159/sajs.2018/20170103