
Introduction
In recent years, social media platforms such as Instagram, Facebook, and TikTok have grown rapidly, and they are increasingly the lens through which ordinary users see the world. Some figures show that over 53% of the world's population uses social media, which means these platforms hold personal information on the vast majority of people alive today. In this era of big data, the information of every social media user is processed by algorithms: platforms use big data and algorithmic analysis to recommend tailored content to each individual user. This has raised suspicions among users that the platforms misuse their personal data.
According to Andrejevic (2019), automated systems that categorize users and distribute information on an opaque basis raise a number of ethical issues. AI-driven algorithmic recommendation has long been controversial. Even as regulations and laws protecting personal information multiply, they do not guarantee that personal data is fully protected. Users not only find themselves constantly recommended content of likely interest on individual platforms; they also find that content they viewed and liked on one platform reappears on another. The platforms are interconnected, linking personal preferences through the transmission of personal data, and big-data surveillance seems to have become pervasive in users' lives. Even when users opt out of browsing-data tracking, big data still captures them accurately and serves them recommended content.

Therefore, this blog takes several social media platforms as examples to analyze the ethical issues raised by their AI-driven algorithmic recommendations. Algorithms that track personal information in this way have in fact raised many ethical concerns, and many platforms have responded with governance measures and public commitments. However, the nature and operation of these algorithms mean that the ethical issues are difficult to resolve completely.

The case of the Chinese version of TikTok: the success of the algorithm
The Chinese version of TikTok (known domestically as Douyin) is a short-video social media platform launched by ByteDance in September 2016; within just a few years it leapt into the top 10 most popular social media platforms in China. In 2017, ByteDance launched the international version of TikTok, officially taking the short-video platform worldwide. One of the main reasons the Chinese version of TikTok has been so successful is its powerful algorithmic system. An algorithm is a process by which a machine automatically processes data and makes decisions according to established operating rules. Scattered data is statistically evaluated through algorithmic selection, and the resulting relevance judgments determine what is placed in front of each user (Just and Latzer, 2017).
According to Zhao (2021), the algorithm collects data about each user and generates personalized recommendations for them, greatly reducing the burden on users and capturing their attention without any individual searching. The Chinese version of TikTok, the most typical representative of short-video platforms, hosts an enormous variety of short-video content; yet even in such a complex media environment it manages to push not only broad genres but even fine-grained subcategories accurately, relying on its "global interest discovery recommendation method and device". This patented algorithmic system is how TikTok achieves the algorithm's main goal: the personalisation of processes and results (Zhao, 2021).
TikTok recently released a video revealing several factors that influence the algorithm: 1. data on video likes, comments, shares, completions, and re-watches; 2. topics categorized by the type of user interest; 3. device settings, including geographic location and language; 4. audio signals, including background music; 5. keywords in tags and titles; 6. feedback from users. Because of the great variety of these subtle factors, the personalized recommendations differ for every user. The Chinese version of TikTok also does not simply recommend the same type of content over and over; novel content pops up intermittently, which reduces content fatigue and makes the platform more engaging.
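To make the idea of factor-weighted recommendation concrete, here is a minimal sketch of how engagement signals, topic interests, and device language might be combined into a relevance score. All factor names, weights, and data below are illustrative assumptions for exposition, not TikTok's actual system or values:

```python
# Toy content-based recommender: score candidate videos for one user by
# combining weighted engagement signals, echoing the factors listed above.
# Weights, factors, and data are illustrative assumptions, not real values.

FACTOR_WEIGHTS = {
    "rewatch": 4.0,   # completions and re-watches weighted most heavily
    "share": 3.0,
    "comment": 2.0,
    "like": 1.0,
}

def score_video(video, user_interests, user_language):
    """Return a relevance score for one candidate video."""
    score = 0.0
    # 1. Engagement signals (likes, comments, shares, re-watches)
    for signal, weight in FACTOR_WEIGHTS.items():
        score += weight * video["signals"].get(signal, 0)
    # 2. Boost videos matching the user's inferred interest categories
    if video["topic"] in user_interests:
        score *= 1.5
    # 3. Device settings: down-rank content in a language the user doesn't use
    if video["language"] != user_language:
        score *= 0.3
    return score

def recommend(videos, user_interests, user_language, k=2):
    """Return the ids of the top-k videos by score."""
    ranked = sorted(videos,
                    key=lambda v: score_video(v, user_interests, user_language),
                    reverse=True)
    return [v["id"] for v in ranked[:k]]

videos = [
    {"id": "dance1", "topic": "dance", "language": "zh",
     "signals": {"like": 120, "share": 10, "rewatch": 30}},
    {"id": "news1", "topic": "news", "language": "zh",
     "signals": {"like": 500, "comment": 80}},
    {"id": "cook1", "topic": "cooking", "language": "en",
     "signals": {"like": 300, "rewatch": 90}},
]

print(recommend(videos, user_interests={"dance", "cooking"}, user_language="zh"))
# → ['news1', 'dance1']
```

Even this toy version shows the tension discussed in this blog: the score depends entirely on data collected about the user, and a strongly engaged-with video can outrank content that better matches the user's stated interests.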
Ethical issues: privacy leaks
User privacy is one of the most significant issues with algorithmic recommender systems; it accompanies their entire operation and is difficult to avoid (Friedman et al., 2015; Koene et al., 2015; Paraschakis, 2018). This is caused by the operating process of the algorithmic system itself. The algorithms operate in a completely closed black box: the input and output of data are sealed inside the rules of operation, and even a user who learns about its patterns has no real way of knowing whether it protects the privacy of personal data (Pasquale, 2015). Milano, Taddeo and Floridi (2020) note that an algorithmic system may extract user data without the user's knowledge, and that once data is collected it is even harder to secure in storage. As stated above, the Chinese version of TikTok relies on an algorithmic automated recommendation system to retain users; even if it offers ways to protect data privacy as far as possible, it cannot escape this process of collecting data and distributing it through an automated system. As a result, data privacy concerns will always exist, and the risk of leakage can only be minimized during storage (Milano, Taddeo & Floridi, 2020).
It is not just the Chinese version of TikTok: all of the world's algorithm-dependent social media platforms face security risks from privacy breaches. One of the most recent major incidents was Facebook's data leak in 2021. This was not Facebook's first privacy breach, but even with the experience of many past breaches and multiple rounds of data-storage governance, the problem has not been solved and has arguably worsened. The personal information of over 530 million people was exposed to all and sundry, directly triggering a crisis of user trust in Facebook. The leaked information included not only names and contact details but reportedly even Face ID data and verification messages, meaning that users not only became transparent on social media but also faced threats to their personal and property security. Yet from collection to leakage, users were unaware; only when hackers disclosed their information online did they realize it had been leaked.

In addition to outright theft of user information, leakage caused by platforms themselves trading users' personal information is another major driver of the privacy-breach problem. Welinder (2012) refers to a 2011 investigation which revealed that Facebook extracted users' facial recognition information from their posted photos without permission and sold it to third-party platforms. This means that not only are users' communications and personally identifiable information insecure, but even their biometric information may be at risk of being compromised.
Effects of filter bubbles
In the context of the Internet era, filter bubbles can be roughly defined as the personalized information worlds created by the flow of information on social media (Bruns, 2019). Filter bubbles serve users customized content based on each user's personal information and comprehensive record; even the results returned for the same query on the same search engine vary with the user's preference bias. This private customization of information raises a number of social and ethical issues.

First, users are steered toward false and controversial content that can cause social unrest and affect socioeconomic stability. Some bloggers on the Chinese version of TikTok position themselves as celebrity insiders and freely spread fabricated negative news about celebrities without revealing their own identities, fostering users' disillusionment with celebrities and frustration with the government. Filter bubbles spread this content to every user who has viewed the celebrity's news, or to a related group, so that these users spread rumors among themselves and amplify problems of public opinion.

Second, filter bubbles change how users receive information, instilling filtered ideas in an opaque way. This filtering narrows the breadth of content people are exposed to; users are constantly reinforced in a fixed viewpoint and have little real choice when faced with contradictory or opposing ideas, which can increase polarization and discourage the collision and dissemination of ideas (Andrejevic, 2019).

Third, automation can inflame users' emotions and provoke irrational behavior by reinforcing a single, fixed message. For example, during the COVID-19 epidemic, the Chinese version of TikTok constantly recommended negative content to some users who distrusted the government, fueling irrational emotions and overreaction.
When users keep receiving content from the same position, that position is continually reinforced, which solidifies their thinking rather than fostering diversity of thought. This mechanism has also been exploited in political campaigns: targeted online news messages are used to trigger civic tendencies that ultimately produce political division. A filter bubble exploited in this way can become a major ethical issue affecting a country's politics and economy (Andrejevic, 2019). Finally, such algorithm-driven automated cultural recommendation can create unfairness in the reception of socio-cultural information. The decisions an algorithm generates are themselves a manifestation of inequitable decision-making, and this inequitable distribution of information is only amplified when it is delivered to users through social media platforms, further exacerbating social inequity.
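The reinforcement dynamic described above can be illustrated with a deliberately simplified feedback-loop model (a sketch for exposition, not any platform's actual logic): the recommender always serves the topic with the highest accumulated engagement, and each view adds to that engagement, so a small initial bias locks the user into a single topic:

```python
# Toy filter-bubble feedback loop: engagement drives selection, and
# selection drives further engagement, so the feed narrows to one topic.
# Topics and starting numbers are illustrative assumptions.

engagement = {"politics": 6, "sports": 5, "cooking": 5}  # slight initial bias

shown = {topic: 0 for topic in engagement}
for _ in range(100):
    # Purely preference-driven pick: the currently most-engaged topic wins
    topic = max(engagement, key=engagement.get)
    shown[topic] += 1
    engagement[topic] += 1  # viewing strengthens the recorded preference

print(shown)
# → {'politics': 100, 'sports': 0, 'cooking': 0}
```

Real recommenders mix in some exploration of novel content, as noted earlier, but the core loop, engagement feeding back into selection, is exactly what makes the filtered viewpoint self-reinforcing.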
Conclusion
As discussed above, the algorithm has become an indispensable key to each social media platform's success. With algorithmic systems, platforms can recommend the most relevant and preferred content to users and retain customers by holding their interest. Most social media platforms, like the Chinese version of TikTok, are pushing hard along this established development path. However, while algorithms may seem to give users personalized content for easy retrieval and give platforms data analysis that generates revenue, the ethical issues cannot be ignored. The governance of algorithms is therefore particularly important to the future development of Internet algorithms. Many social media companies already implement algorithmic data protection and governance of automated culture, but recent cases show the results are not yet obvious. How to balance data filtering against ethical compliance is an important question for the future of big-data algorithms.
References
Andrejevic, M. (2019). Automated Media (1st ed.). Routledge. https://doi-org.ezproxy.library.sydney.edu.au/10.4324/9780429242595
Bruns, A. (2019). Filter bubble. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1426
Friedman, A., Knijnenburg, B., Vanhecke, K., Martens, L., & Berkovsky, S. (2015). Privacy aspects of recommender systems. In F. Ricci, L. Rokach, & B. Shapira (Eds.), Recommender systems handbook (2nd ed., pp. 649–688). Springer Science + Business Media.
Just, N., & Latzer, M. (2017). Governance by algorithms: reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238–258.
Milano, S., Taddeo, M., & Floridi, L. (2020). Recommender systems and their ethical challenges. AI & Society, 35, 957–967. https://doi.org/10.1007/s00146-020-00950-y
Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press. http://www.jstor.org/stable/j.ctt13x0hch
Welinder, Y. (2012). A face tells more than a thousand posts: Developing face recognition privacy in social networks. Harvard Journal of Law & Technology, 26(1), 192–.
Zhao. (2021). Analysis on the "Douyin (TikTok) mania" phenomenon based on recommendation algorithms. E3S Web of Conferences, 235, 03029. https://doi.org/10.1051/e3sconf/202123503029