The Leakage of Personal Information and How to Govern It

 

Introduction

 

In this era of digital prosperity, the Internet has become part of people's lives. People rely on the Internet for work, entertainment and socializing, and enjoy the convenience it brings. However, the Internet is not a pure land: a great deal of industry misconduct hidden beneath people's everyday online behavior also threatens their network security. This blog takes the Cambridge Analytica data leakage scandal and a criminal case involving face recognition as examples to analyze the causes of personal information leakage and the factors that put it at risk. It also offers suggestions on how government regulators, Internet companies and the public should carry out supervision and protection.

 

https://www.cnbc.com/2018/04/26/facebook-concealed-truth-of-cambridge-analytica-scandal-uk-mp-says.html

 

Leakage of Personal Information

 

Facebook, the world's most popular social networking platform, has been hit by a scandal involving users' personal information. In 2016, Cambridge Analytica, a British political consultancy, obtained the personal information of some 87 million Facebook users through a question-and-answer app and used it to target them with ads to help the Trump team win the 2016 presidential election. The scandal broke in March 2018. While some claim that Cambridge Analytica hacked into Facebook's user database without Facebook's knowledge, the scale of the breach is beyond dispute. Cambridge Analytica also obtained data on Facebook users' friends through the same app, even though those friends had never used it (Wu, 2019). When personal information is stolen or traded without users' consent, or even used for political purposes, the consequences are dire.

 

Why would Facebook users be at risk of having their personal information leaked? At its core, the technology Facebook uses is a potential contributor. As one of the pioneers of the "big data era", Facebook drives the development of the big data industry chain. Every second, it turns the massive unstructured data generated by its social network (text, applications, location, pictures, videos, etc.) into a "big data" system, and after screening, comparison and processing it forms valuable "data portraits" (Zhang, Wen & Wang, 2018). Personal data can be collected in a variety of ways. The collection of basic personal information can be called explicit collection: people enter details such as their name, contact information or age when they sign up, and are asked to accept a "consent to collect information" agreement; without agreeing, they cannot complete registration. In most cases, users do not actually open the agreement to see what it says; they hurriedly tick the box and finish registering. This basic information can therefore be regarded as data that users allow to be collected by default. However, a far larger amount of data than people imagine is also being collected and analyzed, which can be called implicit information (Sina Finance, 2021).

In Facebook's case, cookies are one of the techniques the company uses to access user information. Cookies are small text files that a web server stores in the user's browser on the basis of the HTTP protocol, and they record the user's browsing information (Zhang, Wen & Wang, 2018). Facebook's Cookie Policy (2022) states that cookies are used to validate accounts and determine when a user is logged in, so that access to Meta products is easier and the appropriate experiences and features can be shown; for example, cookies keep users logged in as they browse Facebook pages. In other words, Facebook can use cookies to track users' online behavior. The results of this tracking may include a user's web browsing history, Internet preferences, consumption behavior and so on. By combining cookies with what users post, share and like on Facebook, the company can also infer a user's personality, hobbies, living habits, sexual orientation, religious beliefs and even political leanings, and then easily recommend content or ads that interest them. Once such personal information is leaked, stolen or used for illegal activities, however, it leads to the abuse of users' data and the infringement of their rights, and causes irreversible damage to their finances and mental health.
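
To make this concrete, here is a minimal, hypothetical sketch (in Python, using only the standard library) of how a server can plant an identifying cookie on a browser's first visit and then recognize the same browser later, linking successive page views into one behavioral profile. The `uid` cookie name, the in-memory `profiles` store and the simulated requests are assumptions for illustration; this is not Facebook's actual implementation.

```python
from http.cookies import SimpleCookie
from uuid import uuid4

# Hypothetical in-memory "profile" store: cookie id -> pages that browser has visited.
profiles = {}

def handle_request(path, cookie_header=""):
    """Handle one page request, tagging the browser with a tracking cookie."""
    cookie = SimpleCookie(cookie_header)
    if "uid" in cookie:
        uid = cookie["uid"].value          # returning browser: reuse its identifier
    else:
        uid = uuid4().hex                  # first visit: mint a new identifier
    profiles.setdefault(uid, []).append(path)   # every page view joins the profile

    headers = {"Set-Cookie": f"uid={uid}; Path=/; Max-Age=31536000"}
    return headers, f"hello, visitor {uid[:8]}"

# Simulate one browser visiting two pages; the second request sends the cookie
# back, so both page views land in the same behavioral profile.
headers, _ = handle_request("/news/politics")
handle_request("/shop/shoes", cookie_header=headers["Set-Cookie"])
print(profiles)   # {'<uid>': ['/news/politics', '/shop/shoes']}
```

Even this toy version shows why cookies matter for profiling: the identifier persists across visits, so unrelated page views can be stitched together into a single portrait of one browser.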

 

https://new.qq.com/rain/a/20210421A0B6ZR00

 

 

Risk of AI-driven Face Recognition

 

Another technology that Internet companies are rushing to adopt is face recognition. Face recognition is based on biometric identification and is an important application of Artificial Intelligence (AI). It was originally used in security systems, where it could quickly identify suspects and run a series of algorithms to help keep people safe. With the continuing development of AI, the technology has spread to many other settings. For example, instead of entering a password manually to access a website, face recognition cameras can recognize faces in photos and accurately match them to users' accounts. In 2010, Facebook debuted a new feature, "Tag Suggestions", which automatically finds and labels photos of a person in their friends' albums. In 2018, Facebook added a photo notification feature that alerts users when its algorithm finds photos of them, and this applies even to photos with privacy settings (Zhang, Wen & Wang, 2018). Facebook's face recognition research project, DeepFace, published its results in 2014 (Taigman, Yang, Ranzato & Wolf, 2014), showing that DeepFace could verify faces with 97.25% accuracy, almost the same standard as the human brain. In other words, face recognition has reached an extremely high level of accuracy, and such high-precision results require massive amounts of data for development and research. Users' facial information and other personal data are therefore the key resources driving face recognition technology, which makes the security of that personal information particularly important.
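
As a rough illustration of the verification step, the sketch below compares two face embedding vectors with cosine similarity and accepts them as the same person when the score clears a threshold. This mirrors the general embedding-and-compare approach described in the DeepFace paper rather than reproducing its actual model; the toy vectors and the 0.8 threshold are made up for demonstration.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a, emb_b, threshold=0.8):
    """Decide whether two face embeddings belong to the same person.

    In systems of this kind, a deep network maps each face image to a
    fixed-length vector; verification then reduces to checking whether
    two vectors are close enough. The 0.8 threshold is illustrative only.
    """
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy vectors standing in for the output of a face-recognition network.
enrolled = np.array([0.12, 0.85, -0.33, 0.41])                # stored at sign-up
probe_same = enrolled + np.random.normal(0, 0.02, size=4)     # same face, new photo
probe_other = np.array([-0.70, 0.10, 0.64, -0.29])            # a different person

print(same_person(enrolled, probe_same))    # expected: True
print(same_person(enrolled, probe_other))   # expected: False
```

The sketch also hints at why stolen face data is so valuable: whoever holds the stored embeddings (or the photos they were computed from) can attempt to pass the comparison step on the genuine user's behalf.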

 

Face recognition has comprehensive advantages in cost, technology, barrier to entry and user experience. It not only outperforms traditional techniques such as digital passwords and electronic certificates, but even other biometric technologies such as fingerprint and voiceprint recognition cannot shake its position. People enjoy the convenience AI-driven face recognition brings to social life and entertainment, but they are also troubled by the security risks it creates, and cases of face recognition data being abused and leaked are common. In China, the 2021 "315 Gala" (a public-interest gala co-hosted and broadcast live by China Media Group and national government departments on the evening of 15 March each year to safeguard consumer rights and interests) exposed many cases of facial information leakage and misuse. For example, many businesses had captured customers' facial recognition data for commercial use without informing them or obtaining consent; primary school students could open parcel lockers simply by holding up printed photos of the recipients and "scanning their faces"; a 3D-printed facial image cracked Alipay's face payment in ten seconds; and more than 5,000 face photos were sold online for just a few cents each. Face recognition was originally developed for safe and benign purposes, but profit is now driving its data to be used in other ways. A criminal case in China illustrates the point. In 2020, a company operating illegally in the certificate-affiliation business used AI face-swapping technology to "forge faces" and help professionals seeking affiliation pass face recognition authentication on the official website. The company collected facial information from homeowners in multiple residential communities, illegally stealing face data that had been entered into community security systems; each record contained the owner's complete details, including name, mobile phone number and address. The police found that the company had illegally obtained 9,203 pieces of other people's information and passed them on, providing a total of 30,400 pieces of information illegally. In a proportional random sampling check, all 100 sampled records proved to be valid personal information (Jiangsu Media Group – Convergence Media Center, 2021).

 

This kind of vicious incident seems to play out frequently in the high-speed digital age. Whether it is the leakage of personal information on social platforms or the use of facial recognition technology to commit crimes, the victims are ordinary people. Many Internet companies involved in user information leaks have been punished, and criminals involved in AI-enabled crimes have been sentenced by the courts, but how can people's losses be made good? This is a question for government regulators, Internet giants and the public themselves to consider.

 

 

How to Govern

 

First, people need to build awareness of protecting their information. According to the Investigation Report on the Protection of Rights and Interests of Chinese Netizens 2021, netizens lost a total of about 80.5 billion yuan over the past year to personal information leakage, spam, fraud and other causes (Internet Society of China, 2021). One reason is a lack of awareness of online self-protection. For example, many apps and websites automatically pop up dialog boxes asking whether they may obtain location information or share data. Some users blindly click "Agree" in order to get to the content quickly, which creates a hidden risk of personal information leakage. People therefore need to strengthen their awareness of self-protection and pay attention to the agreements, terms and dialog boxes that require a choice when they go online. At the same time, network risk education for students should be strengthened. Internet penetration is now very high and the age at which people start going online keeps falling, so schools at every stage of education should guide students to build and strengthen their awareness of personal information security.

 

Secondly, the government, industry associations and other regulatory authorities need to formulate clear laws and regulations to monitor the behavior of Internet companies. In 2018, in the wake of the Cambridge Analytica scandal, consumer groups in several European countries complained to regulators, accusing Google of violating the European Union's data protection regulations (Wu, 2019). What regulators really need to do, however, is anticipate and supervise potential leaks of personal information before they happen. That means using laws and regulations to limit the amount of user information that Internet companies may collect, store and analyze, and at the same time investigating and evaluating the technologies these companies use so that any leak or violation can be stopped promptly.

 

Finally, Internet companies need to exercise self-discipline. Beyond making a profit, protecting users' data security to the greatest possible extent should be a crucial goal for every Internet company. The Cambridge Analytica scandal and the facial recognition crimes described above are a reminder that the Internet industry must not use personal information for illegal commercial or political ends. Flew (2019, p. 24) refers to questions raised by the Consumer Council about whether consumers understand the value of the data they provide, the extent to which platforms collect and use their personal data commercially, and how to assess the value or quality of the services they receive from digital platforms. The Internet industry should therefore hold to a clear moral bottom line, maintain a sense of self-discipline, and work to resolve the crisis of trust with government and the public. This is conducive to the sustainable development of the Internet industry.

 

 

Conclusion

 

The leakage of users' personal information exposed in 2018 was undoubtedly the biggest scandal of that year. Internet giants use technologies such as cookies to track, capture, store and analyze user data, and AI-driven face recognition is now in widespread use. Because these technologies permeate every aspect of people's lives, ensuring the security of personal information is of paramount importance. Individuals should therefore strengthen their own awareness of network security, and schools and teachers at every stage of education should enhance security education. In addition, the government, Internet associations and other regulatory authorities should provide clear laws and systems to restrain the behavior of Internet companies. Finally, the key to the sustainable development of the Internet industry lies in maintaining self-discipline.

 

 

 

 

References

 

  1. Cookie Policy. (n.d.). Retrieved April 8, 2022, from https://www.facebook.com/policy/cookies/

  2. Flew, T. (2019). Platforms on Trial. Intermedia, 46(2), 24–29.

  3. Internet Society of China. (2021). Investigation Report on the Protection of Rights and Interests of Chinese Netizens. Retrieved from Baidu Wenku website: https://wenku.baidu.com/view/80dbd1d6df36a32d7375a417866fb84ae55cc30a.html

  4. Jiangsu Media Group – Convergence Media Center. (2021). Fake facial recognition with AI face-swapping technology. Retrieved from Tencent News website: https://new.qq.com/omn/20211220/20211220A03U4S00.html

  5. Sina Finance. (2021, April 8). Facebook responds to data breach of 533 million users: It won't tell users. Retrieved April 8, 2022, from Sina Finance website: https://finance.sina.com.cn/stock/usstock/c/2021-04-08/doc-ikmxzfmk5715511.shtml

  6. Taigman, Y., Yang, M., Ranzato, M., & Wolf, L. (2014). DeepFace: Closing the gap to human-level performance in face verification. 2014 IEEE Conference on Computer Vision and Pattern Recognition. IEEE. https://doi.org/10.1109/cvpr.2014.220

  7. Wu, J. (2019). Outlook 2019: Experts warn of the great harm of AI technology; global regulation is an inevitable trend. Retrieved from Yicai website: https://www.yicai.com/news/100102926.html

  8. Zhang, H., Wen, M., & Wang, Y. (2018). Legal Protection of Personal Information from Facebook Data Abuse Event. Journal of ChongQing University of Posts and Telecommunications, 30(6), 56–63. https://doi.org/10.3969/j.issn.1673-8268.2018.06.008