Introduction
Social media platforms are open, public spaces built on user-generated content. In this open online environment, some of that content is inappropriate or even unethical for other users. To deal with harmful content that violates the law, moral norms, or the rights of other users, platforms formulate rules for content moderation. Content moderation is a strategy for regulating user-generated content posted on internet sites, sometimes before the content is delivered (Roberts, 2019). However, the content moderation systems of social media platforms are iterative and can be affected by many factors, among which politics, economics, finance, natural disasters, and news have the most significant impact (Bromell, 2022).
Under the influence of these social factors, some user-generated content is deleted by platforms, and some users' accounts are even banned. Content that is truthful or expresses personal opinion can be limited and regulated because it harms someone's interests. In such cases, content moderation no longer benefits users or the online environment; it becomes a strategy for eliminating negative content on behalf of specific people. Nevertheless, content moderation remains an indispensable platform regulation strategy, and violations of users' digital rights occur only under certain conditions (Bromell, 2022). In general, social media platforms face real challenges in using content moderation to protect their users and online environments.
This blog will explain the importance of content moderation on social media platforms and its central conflicts, outline the current challenges in content regulation, and analyze those challenges through cases of Facebook's content moderation in different countries.
Importance of content moderation
Over the last decade, the use of social media has increased significantly, and social media has become part of people's lives (Business World, 2021). This growth has also led to physical disconnection between users and a lack of humanity, intimacy, and meaningful content (Business World, 2021). Given the enormous amount of content on online sites, content moderation is the most effective way to protect users from harmful material. As social media and digital technologies develop, many teenagers and young children have become more active on the internet, and content moderation also protects minors from the negative impacts of violent and harmful content. Social media platforms are filled with violence, hate, and harassment, so content moderation is essential to improve the online environment, bring back positivity, and build more collaborative, impactful, positive platforms (Business World, 2021). Platforms like Facebook, however, do not moderate all countries and users equally, and the central conflict lies between users' digital rights and the regulatory rules created by different social media companies (Gilbert, 2021).

Challenges in online content regulation
The development of the Internet over the past few decades has brought the era of Web 2.0, which emphasizes participatory culture and freely accessible online environments (Wei et al., 2012). The Internet, as a global interactive network, poses many challenges for content moderation. In the era of Web 2.0, the boundaries between private and public communication have faded, and at the same time the attribution of responsibility is unclear (Bromell, 2022). Users, as content creators, generate most of the content on social media platforms, which makes it hard to determine whether responsibility for content regulation should be borne by platforms, governments, or even users themselves.
This unclear division of responsibilities has led to confusion in content moderation. Government power often overrides platform moderation rules, and the fact that some governments have intervened in online content regulation to manipulate elections has made some conspiracy theories appear more credible (Barlow, 2019). The aim of content moderation is to create civility in cyberspace, and government intervention undermines its humanity and fairness (Barlow, 2019). To maintain this balance in online regulation, content moderation faces four general challenges (Bromell, 2022).
- Maintain a free and secure internet
Today social media is defined as open and free, yet freedom of speech remains a concern: content moderation systems can violate users' freedom of speech even without government intervention. The first challenge, then, is to balance free speech against content moderation.
- Protect individuals and communities from harmful content and prevent damage to democracy
The practical aim of content moderation is to protect users' rights and security, shielding them from content that might cause harm. For example, social media platforms like Facebook and Twitter both have policies on suicide-related content, and content that might damage democracy is likewise restricted under such policies.
- Limit the use and abuse of the Internet for commercial purposes
Just as government intervention can disrupt the balance of content moderation, false content and information spread for commercial purposes can damage the common interest. This challenge resembles the second in that both protect individuals and communities, but it emphasizes the common interest rather than online security. As Barlow declared, we should worry about business intervention in exactly the way we worry about government intervention (Barlow, in Doherty, 2004).
- Avoid excessive government regulation through surveillance, censorship, and suppression of dissent
As noted above, government intervention in content moderation breaks the balance: moderation becomes a strategy for protecting the interests of specific people. Excessive regulation of online content violates users' digital rights. Such governments intervene not to improve the online environment or protect individuals and communities, but to moderate content for political purposes by pressuring social media platforms.
Generally, the challenges of content moderation can be summarized from four perspectives: digital rights, online security, commercial impact, and government intervention. These perspectives affect platform content moderation to varying degrees. Although each challenge is independent, together they stand as obstacles to improving content moderation, and they represent four competing objectives in internet governance (Bromell, 2022).
Facebook content moderation in different countries
Facebook's content moderation is not applied equally across countries: it ranks countries into tiers to decide its moderation rules (Gilbert, 2021). The USA, India, and Brazil are classified as tier zero, receiving the highest priority and the most moderation resources. I will not criticize Facebook simply for ranking countries in tiers; with nearly 3 billion users, such allocation decisions make up much of the moderation work Facebook has to manage, and it is impossible to apply the same resources to countries with only a small number of users. However, that does not mean Facebook can ignore content moderation in other countries.
Facebook has admitted that its content moderation failed to stop the spread of hate speech and violence in Myanmar (Gilbert, 2021). Moderation in Myanmar was a total failure against the first and second challenges mentioned above: online security was threatened by hate speech. In April 2018, Mark Zuckerberg, the CEO of Facebook, said: "I think it is clear that people were trying to use our tools in order to incite real harm" (Gilbert, 2021). In early 2019, Facebook designated four ethnic armed organizations in Myanmar as "Dangerous Organizations" and formally banned them from Facebook and its related services (Sablosky, 2021).

In countries like Myanmar, Facebook is the main social media platform shaping information on the internet, and it was, no doubt, complicit in the spread of hate and violence by these ethnic armed organizations. From 2018 to 2019, hateful and violent content circulated widely on Facebook in Myanmar (Sablosky, 2021). Such content was deeply disturbing and caused users fear; it was unacceptable, especially given that Facebook dominates the dissemination of information in Myanmar. Facebook has a responsibility to protect its users from harmful content, yet it devoted the bulk of its resources to the countries in the top tiers.

Even in the tier-zero countries where Facebook invests heavily in moderation, it has still disappointed its users. Internal Facebook documents reviewed by The Verge showed that in late 2019 Facebook discussed how to allocate moderation resources around the world ahead of several major elections (Gilbert, 2021). According to The Verge, the documents showed that Facebook provided extra resources to 30 countries based on its tier system. Facebook's protection of these countries becomes most visible in times of political turmoil, helping governments eliminate negative content and ultimately leaving users with little confidence that their rights are protected (Bromell, 2022).

During the Covid-19 pandemic, in April 2021, the Indian government ordered Facebook and other social media platforms to delete posts criticizing its handling of the virus (Bromell, 2022). The Indian government has also shut down internet and mobile telecommunications in Kashmir and blocked information sources since 2016 (Kaye, 2019). Facebook's approach to content moderation had drawn criticism from users as early as 2019: although Facebook has facilitated dialogue between users and politicians, its moderation remains opaque, and users cannot know whether their speech has been politically interfered with (Kalsnes & Ihlebæk, 2021).
Users' basic digital right is free speech: they are entitled to post opinions that do not violate content regulation rules. However, Facebook deleted users' content under pressure from the Indian government. Content moderation should not be abused by governments for political purposes; on the contrary, it should help build open and secure online sites for users. Facebook clearly struggles with the fourth challenge, as governments use content moderation excessively to achieve political goals, violating users' digital rights in the process. I understand it is impossible for governments to be fully separated from social media platforms; nevertheless, social media companies should find a balance between platform content moderation and government demands in order to protect users.

Conclusion
It is now very common for platform content moderation to be influenced by political factors. Besides Facebook, Twitter and Instagram have faced the same situation: Twitter was banned by Nigeria after it deleted the president's tweets (Bromell, 2022), and Uganda restricted online content on several social media platforms to control dissent before its 2021 presidential election (Ovide, 2021). Yet platform content moderation exists to protect users and to establish order and civility in cyberspace, not for any other reason.
Social media brings citizens and politicians closer, but excessive government interference in content moderation leads to confused moderation rules and violations of user rights (Kalsnes & Ihlebæk, 2021). Social media companies must therefore consider their responsibilities for platform content moderation, determine their regulatory strategy, and divide responsibility with governments. They should try to understand and differentiate between public and private communications, then define a clear and understandable regulatory framework for content moderation. Policy makers responsible for constructing such frameworks should confront the four challenges described above, which require moderating content with respect to four perspectives: digital rights, online security, commercial impact, and government intervention. These perspectives are the key to regulating online content for users, improving online sites, and helping social media companies achieve balance.
References
Barlow, J. (2019). A declaration of the independence of cyberspace. Duke Law and Technology Review, 18(1), 5.
Bromell, D. (2022). Regulating Free Speech in a Digital Age: Hate, Harm and the Limits of Censorship. Cham: Springer International Publishing.
Doherty, B. (2004). John Perry Barlow 2.0: The Thomas Jefferson of cyberspace reinvents his body—And his politics. Reason. Retrieved from https://web.archive.org/web/20090903075735/http://www.reason.com/news/show/29236.html
Gilbert, B. (2021). Facebook ranks countries into tiers of importance for content moderation, with some nations getting little to no direct oversight, report says. Business Insider.
Kalsnes, B., & Ihlebæk, K. (2021). Hiding hate speech: political moderation on Facebook. Media, Culture & Society, 43(2), 326–342. https://doi.org/10.1177/0163443720957562
Kaye, D. (2019). Speech Police: The Global Struggle to Govern the Internet. La Vergne: Columbia Global Reports.
Ovide, S. (2021). What internet censorship looks like: On Tech. New York Times.
Re-humanizing social networking platforms: The importance of content moderation. (2021). Business World.
Roberts, S. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven, CT: Yale University Press. https://doi.org/10.12987/9780300245318
Sablosky, J. (2021). Dangerous organizations: Facebook’s content moderation decisions and ethnic visibility in Myanmar. Media, Culture & Society, 43(6), 1017–1042. https://doi.org/10.1177/0163443720987751
Wei, C., Khoury, R., & Fong, S. (2012). Web 2.0 Recommendation service by multi-collaborative filtering trust network algorithm. Information Systems Frontiers, 15(4), 533–551. https://doi.org/10.1007/s10796-012-9377-6