Introduction
As society becomes increasingly networked, the internet has profoundly shaped economic development and people's daily lives. Some large online platforms have even become digital infrastructure supporting the operation of society. The rise of online platforms is reflected both in their ability to manipulate digital markets and in the disruptive impact of their business models on traditional industries. Because internet content is immediate, user-generated, interactive, and decentralized, ordinary netizens can, to a certain extent, become content creators. In recent years, content forms have grown more and more complex, from the initial text, to pictures, and now to video. The amount of data on the internet has exploded, with more users producing more content every day. This explosion of information, content, and users has ushered internet companies into a new era, but it has also created problems. For example, alongside the boom in platform content, a large amount of harmful information has been generated. Content security has therefore become an important part of internet ecosystem governance, which in turn makes platform content moderation more and more important.
This blog analyzes why content moderation matters to platforms and discusses the challenges platforms face amid exploding content volumes and the internet-security risks confronting users. In addition, it examines the challenges and adjustments that Tik Tok, the world's largest short-video platform, faces as it moderates content under different cultures and government policies around the world.
Why do platforms need content moderation
By January 2021, the world population was 7.83 billion, and the number of internet users had reached 4.66 billion (Lambert, 2020). The development of the internet has given rise to more and more online platforms, which have penetrated all aspects of society and formed an important business model (Van Dijck, Poell & de Waal, 2018, p. 16). To a large extent, this business model has taken over traditional social-management functions of government, replacing public services and policies for the networked society that would otherwise be undertaken by the state.
According to Golemanova (2021), content moderation is the practice by which online platforms detect and identify text, pictures, audio, and video containing pornography, graphic violence, terrorism, prohibited items, advertisements, and other spam, and then systematically screen and process that material to keep content safe.

An illustration of platform content moderation
First, content moderation has become a safeguard that lets users use most platforms safely. When moderation cannot keep pace with content production, a great deal of vulgar and even illegal content gets published. Personalized recommendation algorithms can then distribute that content precisely to large numbers of users, and may even skew the values by which many users judge content. On Douyin, there have been many cases of users posting harmful videos to chase popularity. Because the threshold for uploading is low and videos are short, some netizens perform difficult or bizarre stunts to attract attention. In March 2018, a father in Wuhan, Hubei imitated a difficult somersault from a Douyin short video while holding his two-year-old daughter, accidentally causing serious damage to her cervical spine (Hubei Jingshi, 2018).

News report on the Wuhan father whose imitation of a Douyin video injured his daughter
Secondly, content moderation has become a necessary condition for the survival and development of platform media, and to a certain extent it helps platforms develop healthily. On the one hand, content moderation works like information cleaning, helping platforms build a healthy content ecosystem. A healthy ecosystem encourages users to produce high-quality content and fosters a better atmosphere for content production and consumption. On the other hand, a platform's user base attracts brands to advertise. For example, the comments under short promotional videos on Douyin can be systematically managed by the publisher, so viewers quickly see the information the promoter wants to convey. Moderation practices like Douyin's can thus control brand-related comments to a certain extent, helping brands build and maintain their image, improving product exposure and return on advertising investment, and closing the commercial loop. In this way, more brands take the initiative to advertise on such platforms.
In addition, most internet platform companies need content moderation to surface high-quality content, fulfill their social responsibilities, and build a good public image that attracts users and generates profit. Kuaishou, another well-known short-video platform in China, repeatedly saw vulgar and pornographic content surface in past years, triggering public criticism. As a result, Kuaishou's Chinese user base remained concentrated in small rural cities and proved difficult to expand. The root cause was that early content moderation failed to block a great deal of illegal content, leaving many netizens with a vulgar impression of the platform.
Difficulties and challenges of platform content moderation
First of all, as platforms have developed, violating content has come to span diverse scenarios, mutate into many data variants, and actively resist detection. Illegal content now reaches nearly every corner of every platform: news content, user comments, user avatars, nicknames... any surface where content can be published struggles to escape it. For example, in recent years a genre of ASMR content has appeared on live-streaming platforms that seems innocuous but is actually mixed with a great deal of pornographic material (CGTN, 2018). Moreover, as scenarios change and evasion methods evolve, large numbers of illegal data types and variants have emerged. On Weibo, for instance, pornographic text often appears in posts, and to stay ahead of Weibo's moderation mechanisms, publishers adopt adversarial tactics: first simply avoiding sensitive words, then splitting characters apart, confusing them with symbols, and embedding illegal content in images (the sketch after the figure below shows why symbol confusion defeats naive keyword filters). In short, illegal content is becoming ever more adversarial: its publication shows signs of organization, and publishers change content forms and switch accounts to defeat detection and enforcement strategies.

Text related to pornography on Weibo
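To make the symbol-confusion tactic concrete, here is a minimal Python sketch of the normalization step a keyword filter might run before matching. The blocklist, the set of separator characters stripped, and the function names are illustrative assumptions, not any platform's actual rules; real systems maintain far larger, continuously updated lexicons and models.

```python
import re
import unicodedata

# Hypothetical blocklist for illustration only.
BLOCKED_KEYWORDS = {"badword"}

def normalize(text: str) -> str:
    """Collapse common obfuscation tricks before keyword matching."""
    # Fold full-width and compatibility characters to canonical forms
    # (e.g. "ｂａｄ" becomes "bad").
    text = unicodedata.normalize("NFKC", text)
    # Strip separators inserted between characters ("b.a.d", "b a d").
    text = re.sub(r"[\s.\-_*|/\\]+", "", text)
    return text.lower()

def contains_blocked_keyword(text: str) -> bool:
    cleaned = normalize(text)
    return any(keyword in cleaned for keyword in BLOCKED_KEYWORDS)

# The naive check misses the obfuscated form; normalization catches it.
print("badword" in "B.a.d w_o r d")               # False: naive match fails
print(contains_blocked_keyword("B.a.d w_o r d"))  # True after normalization
```

Even this cleanup only handles the simplest tricks; character splitting (writing a character as its visual components) and text embedded in images require OCR and model-based detection, which is part of why the arms race keeps escalating.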
Secondly, in addition to the continuous evolution of illegal content, the moderation mechanisms of most platforms are still far from perfect. With the development of artificial intelligence, many internet companies apply AI to content moderation because machines can process content at scale (Roberts, 2019, p. 37). However, text, pictures, audio, and video all place high demands on moderation, and AI recognition still produces a certain proportion of misjudgments. For audio, the common approach today is to transcribe speech to text, match keywords in the transcript, and remove the content most similar to known spam models; but machines still handle dialects and noisy, multi-track audio poorly. For live streams, short videos, and long videos, machines typically extract keyframes rather than inspect every frame, so violent or pornographic content can "flash" past detection between samples (the sketch below illustrates this gap). For text, keyword filtering is widely used and likewise yields false positives and missed detections. Therefore, many internet companies combine manual and AI moderation. Manual moderation, however, demands large amounts of labor, and companies must spend heavily to train content moderators (Roberts, 2019, pp. 38-42). Moderators face enormous, intensive, and complex workloads and must make subjective judgments, all of which creates heavy pressure (Roberts, 2019, p. 39). In February 2022, a content moderator at the Chinese video platform Bilibili died suddenly after working overtime during official holidays (Shen, 2022). Heavy workloads also cause errors of judgment, and moderators' strong subjectivity in day-to-day decisions introduces bias, making it difficult to guarantee the accuracy and objectivity of moderation.

Content Moderators Work Environment
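The "flash" problem with keyframe sampling can be shown with a toy example. The sketch below is a deliberate simplification: frames are represented as strings and the classifier is a stand-in lambda, whereas real systems decode actual video and run trained vision models. It demonstrates how a brief violating frame that falls between sampling points goes undetected.

```python
from typing import Callable, Sequence

def moderate_frames(
    frames: Sequence[str],
    fps: int,
    is_violating: Callable[[str], bool],
    sample_every_s: float = 1.0,
) -> bool:
    """Classify one frame per `sample_every_s` seconds of video."""
    step = max(1, int(fps * sample_every_s))
    return any(is_violating(frames[i]) for i in range(0, len(frames), step))

# Three seconds of 30 fps video with one violating "flash" at frame 40.
frames = ["ok"] * 90
frames[40] = "violating"

flagged = moderate_frames(frames, fps=30, is_violating=lambda f: f == "violating")
print(flagged)  # False: only frames 0, 30, and 60 are sampled, so the flash escapes
```

Sampling more densely closes the gap but multiplies compute cost, which is exactly the trade-off that pushes platforms toward hybrid human-plus-AI pipelines.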
Finally, beyond the rules a platform sets itself, platform content is also subject to government policies and laws, and these are updated as times change, requiring every platform's moderation mechanism to be constantly updated and adjusted. In the past, Chinese internet platforms carried a great deal of educational content aimed at the compulsory-education stage. In July 2021, however, the Chinese government issued the "double reduction" policy (intended to reduce the heavy homework and after-school tutoring burdens on students in compulsory education), which prohibits platforms from publishing subject-based course content, live streams, and products aimed at preschool children and primary and secondary school students (including high school) (Mikesell, 2021). When such a policy is released, every platform must promptly update its moderation system to avoid being punished for violating it.
Therefore, as content digitization spreads and content types grow more complex worldwide, platforms must keep upgrading and exploring their content moderation mechanisms.
The success of Tik Tok (the international edition of Douyin)
As a short-video product of the Chinese company ByteDance, Tik Tok's overseas expansion has not been smooth sailing. Content moderation has always been the biggest challenge Tik Tok has encountered abroad, and the platform has even been forcibly removed from app stores in several countries over it.
At first, Tik Tok was repeatedly questioned overseas because of its Chinese background, for instance over concerns that "Tik Tok's content moderation system may be dominated by Chinese censorship" (BBC News, 2021). In response, ByteDance built large moderation teams around the world and made them fully independent in data, product architecture, personnel, and commercialization. To respect the content standards of different cultural and legal contexts, Tik Tok has also been establishing local operations teams country by country, handing content management over to teams familiar with local culture and law (WSJ News Exclusive, 2020). Its moderation principles apply different standards according to the laws, regulations, culture, and customs of the place being moderated (WSJ News Exclusive, 2020). It is precisely because Tik Tok grounds its content moderation in cultural and social context that it can now operate locally around the world.
Secondly, the technical bottlenecks of AI moderation have also surfaced in Tik Tok's international expansion. Southeast Asia, India, and Africa are primary growth markets for many internet platforms, and these regions are linguistically diverse. AI models trained mainly on domestic data are inevitably limited when reviewing images and videos from such markets. For this reason Tik Tok hit obstacles in India, with its complex language landscape, diverse religious and cultural backgrounds, and sensitive local social issues. Yet India, with a population of 1.3 billion, offers enormous market space that no media platform, Tik Tok included, can easily abandon (WSJ News Exclusive, 2020). In response, Tik Tok has kept developing its AI moderation technology and has paired it with manual moderation teams in countries and regions such as India, supporting 14 languages and making culturally informed decisions in local languages (WSJ News Exclusive, 2020).
Conclusion
In conclusion, there is no doubt that as science and technology progress, people will rely more and more on internet platforms, so platforms' social responsibilities will keep growing. All platforms must therefore take content moderation seriously, because it affects every aspect of the platform, including its operating order and business image. For this reason, major platforms keep improving their moderation mechanisms and techniques. Platforms also need to encourage and guide creators to produce high-quality content with sound values, which reduces the pressure on moderation. As for the content moderators bearing ever heavier workloads, beyond the pay and benefits platforms owe them, users should actively cooperate with their work by avoiding illegal content and reporting it when they see it. In short, the future of content moderation on platforms is full of both potential and challenges.
References:
BBC News. (2021). Donald Trump-era ban on TikTok dropped by Joe Biden. BBC News. https://www.bbc.com/news/technology-57413227
CGTN. (2018). Is this porn? Why China bans ASMR videos. CGTN. https://news.cgtn.com/news/3d3d674e79556a4d78457a6333566d54/index.html
Golemanova, R. (2021). What is content moderation? Imagga. https://imagga.com/blog/what-is-content-moderation/
Hubei Jingshi. (2018). Wuhan father imitates a difficult Douyin challenge, accidentally causing severe spinal injury to his two-year-old daughter. The Paper. https://www.thepaper.cn/newsDetail_forward_2033382
Lambert, S. (2020). What is the number of internet users in 2021? FinancesOnline. https://financesonline.com/number-of-internet-users/
Mikesell, D. (2021). The “double reduction” crackdown and the future of private education in China. The China Guys. https://thechinaguys.com/china-double-reduction-policy-private-education-tutoring-crackdown/
Roberts, S. (2019). Understanding commercial content moderation. In Behind the Screen: Content Moderation in the Shadows of Social Media (pp. 33-72). Yale University Press.
Shen, L. (2022). Death of a Bilibili censor reignites overwork debates in China. Protocol. https://www.protocol.com/china/bilibili-content-moderator-death-china
Van Dijck, J., Poell, T., & de Waal, M. (2018). The Platform Society (pp. 5-32, "The Platform Society as a Contested Concept"). Oxford: Oxford University Press.
WSJ News Exclusive. (2020). TikTok to stop using China-based moderators to monitor overseas content. The Wall Street Journal. https://www.wsj.com/articles/tiktok-to-stop-using-china-based-moderators-to-monitor-overseas-content-11584300597