Introduction
Social media sites have become the primary tools for ordinary people and organisations to communicate and spread their messages in this era. They not only create platforms for connection but also set the rules for participation. There is no such thing as a "neutral" platform; every site makes choices in its technical design and rules that shape the kind of content people can publish, review and share.
These decisions generally have a strong impact because we are heavily reliant on the main social media platforms such as Facebook, Twitter and Instagram (Suzor, 2019).
However, content may be sent for review by employees, contracted agents or community moderators if it is reported or flagged by users who disagree with it or believe it is inappropriate.
Significantly, moderation work can be mentally challenging and even dangerous.
Moderators are often expected to decide whether to remove a piece of content in under ten seconds, for low pay and in highly repetitive conditions. The work can also take a heavy psychological toll, because moderators deal with some of the worst content on the internet, including abusive and harassing posts, violent videos and images, and child sexual abuse material.
In addition, the challenge of moderating a massive volume of daily content means that norms and guidelines must be written very specifically if moderators are expected to make consistent and quick decisions (Suzor, 2019).
Moreover, even with the help of AI moderation, concerns arise about the "black box" of algorithmic content distribution and data management (Flew, 2019). Algorithmic governance not only benefits organisations financially by increasing economic circulation and reducing complexity, but also creates social risks that can compromise considerable welfare gains (Just & Latzer, 2017).
This post focuses on platform content moderation by both human and machine moderators.
Firstly, potential moderation bias and how it is shaped by the political environment will be examined through an example.
Secondly, the 2020 lawsuit in which 40 GirlsDoPorn sex trafficking victims sued PornHub will be presented as a case study of the vulnerabilities of human moderation and its risks to employee wellbeing.
Finally, the pros and cons of AI moderation will be discussed, with a focus on its downsides.
Platform Moderation Bias – Facebook's Latest Rules & Policy
Vengattil and Culliford (2022) recently reported that Meta, the parent company of Facebook and Instagram, now allows users on both platforms to post violent content directed at Russia's soldiers, citizens and Putin.
In other words, Zuckerberg is picking a side in this war by quietly allowing users to bully Russia.
This latest policy change was communicated by Meta executives in an internal email to the review department, which set out specific adjustments: hate speech restrictions will be lifted for Facebook and Instagram users in certain countries, including Russia, Ukraine, Poland, Latvia, Lithuania, Estonia, Slovenia, Hungary and Romania.
Even if the content is extremely violent or uses uncivil language, hate speech directed at Russian President Vladimir Putin, Belarusian President Lukashenko, Russian soldiers and Russian citizens is now permitted, as long as the users posting it are located in the regions mentioned above (Daily Mail, 2022).

Vengattil and Culliford (2022) further revealed that the Meta HQ emails not only sanctioned cyber-violence against Russia but also, for the first time, allowed users to glorify the far-right, neo-Nazi Ukrainian Azov Battalion, which had previously been explicitly prohibited on all major social media platforms.
This is the first time that a leading global social media company has publicly supported online bullying and hate speech.
Instagram's community guidelines clearly state: "We will never allow incitement to violence or personal attacks against people based on their race, ethnicity, nationality of origin, biological sex, gender, gender identity, sexual orientation, religious beliefs, disability or illness". Likewise, Facebook's rules state: "We define hate speech as direct verbal attacks against protected characteristics of others, rather than ideas or customs, including ethnicity, race, nationality of origin, disability, religious beliefs, caste, sexual orientation, gender, gender identity, and serious illness" (Meta, 2022).

However, a double standard that is specifically authorised by Meta HQ has been introduced in certain regions.
No matter how radical the posts or comments are, internet violence against Putin, Lukashenko, Russian soldiers and citizens is now allowed (Vengattil & Culliford, 2022).
It is hard for the firms that own social media platforms to remain neutral (Suzor, 2019). On the contrary, community guidelines will change as the international situation varies rather than staying absolutely neutral. It is also difficult for us as users to judge whether these changes are more positive or negative. Perhaps what matters most is that an external institution exists to review and audit moderation policy.
PornHub Case – Moderation Vulnerability and Risks to Employee Wellbeing
PornHub, the leading pornographic website, was recently caught in a public storm. 78% of the videos on the site were removed, and uploads from non-authenticated accounts were suspended.
It was reported that 40 women, victims of GirlsDoPorn, filed a lawsuit against PornHub. GirlsDoPorn tricked women into making pornography under the pretence of advertising work and uploaded the videos to PornHub for profit, while PornHub ignored the victims and continued to treat GirlsDoPorn as a business partner (Daily Mail, 2020).

Several former employees came forward and revealed the strange internal operating environment. According to the former staff, there were no clear standards for PornHub's content review, as the site aimed to publish more videos and make more money from them (BBC News, 2020).
To achieve this goal, the moderation team would use convoluted logic to justify keeping perverse videos online.
For instance, if a video title said "son and his mother", the video would not be approved, as incest content is prohibited. However, the same video could go online if titled "mother fxcks son", because it is not 100% certain that the two are related and the "mother" could be someone else's mother.
More incomprehensibly, a video showing a person being dismembered could stay up, while one showing an already dismembered body would be banned: a dismembered body means the person must be dead, whereas during the act of dismemberment this is still uncertain.
It was also possible to torture and kill animals, but it depended on the animal. Small creatures like crickets and lobsters could be trampled to death on video, but not goldfish, the reasoning being that some people keep goldfish as pets whereas no one keeps crickets. The moderators' job was basically to find weird excuses to keep videos on the site (Daily Mail, 2020).
Much filming had to be stopped due to the pandemic and criticism from many media outlets. As a result, the employees in charge of filming had nothing to do and were reassigned by the company to other departments, including the video review department.
These staff took on the job after only two training sessions and were forced to review 400 videos per day.
By the beginning of March, employees across multiple teams had to watch 400 pornographic videos a day, while professional reviewers handled 1,200 a day, and what had originally been voluntary work became mandatory.
The company also sent people to monitor the reviewers, who could have points deducted or even be fired if they did not watch enough pornographic videos. According to one ex-employee, many colleagues suffered mental breakdowns and panic attacks, or could not bear the daily exposure to so much pornographic and perverse content and resigned (Daily Mail, 2020).
Three former employees believe that this drastic crackdown happened entirely because the basic work had not been done properly beforehand. Whether it is reporting suspicious videos to government authorities or providing psychological counselling to review staff, these measures are long overdue at PornHub.
But as the leader of the porn industry, PornHub has been too busy making money and has failed to build proper mechanisms in the workplace.
More importantly, the warning signs around MindGeek (PornHub's parent company) are hard to ignore: MindGeek operates as a monopoly, much of the content it provides is pirated, it has been linked to sexual assault scandals, and its whole system is opaque.
Why did the local government not investigate? According to Kate Isaacs, founder of the NGO NotYourPorn, it is because of the stigma around pornography: "No politician wants to talk about the porn industry because then they have to admit that porn is a part of everyday life" (Global News Canada, 2021).

However, the turmoil around Pornhub can no longer be ignored by politicians.
A Canadian Liberal MP introduced a motion requiring top executives from MindGeek and Pornhub to appear before a parliamentary ethics committee to respond to allegations that the site hosted rape videos and child pornography (Global News Canada, 2021).
AI Moderation – Negative Effects of Algorithmic Governance
Speech and content on the internet can be extremely complicated because of the complexity of today's society. "The content of postings is likely to be emotive, which complicates its differentiation from regular abuse or deviant behaviour" (Bruckman, Curtis, Figallo & Laurel, 1994).
Hence, moderation operates as a governance mechanism that shapes the participation and engagement of users.
Both human and non-human moderators play a significant role in preventing unwanted behaviours such as discrimination, defamation and hate speech, and in improving cooperation (Grimmelmann, 2015).
Currently, a hybrid of post-moderation (moderators examine content already published on the site and identify whether it contravenes rules or policies) and reactive moderation (moderators review content that users have reported to see if it breaches the guidelines and then take further action, such as removal, warnings or bans) is the most common approach on social media platforms (Paech, 2022).
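To make the distinction between the two modes concrete, the following Python sketch is a hypothetical, heavily simplified model of how they might be combined: a periodic sweep over already-published content (post-moderation) alongside review triggered by user reports (reactive moderation). The class names, the keyword rule standing in for a human decision, and the report threshold are all illustrative assumptions, not any platform's actual system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    post_id: int
    text: str
    reports: int = 0       # number of user reports received
    removed: bool = False

class ModerationQueue:
    def __init__(self, banned_terms: List[str], report_threshold: int = 3):
        self.banned_terms = [t.lower() for t in banned_terms]
        self.report_threshold = report_threshold
        self.published: List[Post] = []

    def publish(self, post: Post) -> None:
        # Post-moderation: content goes live first, review happens later.
        self.published.append(post)

    def report(self, post: Post) -> None:
        # Reactive moderation: enough user reports push a post into review.
        post.reports += 1
        if post.reports >= self.report_threshold:
            self.review(post)

    def sweep(self) -> None:
        # Periodic post-moderation pass over everything already live.
        for post in self.published:
            if not post.removed:
                self.review(post)

    def review(self, post: Post) -> None:
        # A human (or model) would decide here; a crude keyword rule stands in.
        if any(term in post.text.lower() for term in self.banned_terms):
            post.removed = True

# Example usage (hypothetical data)
queue = ModerationQueue(banned_terms=["hate speech example"])
p = Post(post_id=1, text="This contains a hate speech example.")
queue.publish(p)                                   # goes live immediately
queue.report(p); queue.report(p); queue.report(p)  # reactive path triggers review
print(p.removed)                                   # True
```

In practice, of course, the review step is carried out by trained staff or machine-learning classifiers rather than a keyword list; the sketch only shows how the two moderation modes feed the same review process.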
Manipulation, heteronomy, bias, and threats to privacy, freedom of expression and intellectual property rights are among the risks that have been identified in the governance of algorithms. Significantly, three categories of risk stand out: the influence on the mediation of reality, the threats to basic rights and liberties, and the challenges to the future development of human beings (Just & Latzer, 2017).
Conclusion
In conclusion, with the rapid development of social media platforms and technology, the public not only enjoys the convenience social media brings but also suffers the unpredictability of the governance and moderation the platforms impose.
Significantly, moderation rules will inevitably change as internal and external environments shift and develop. Furthermore, the moderation process is becoming a burden on moderation teams, particularly in poorly organised businesses.
In such cases, the lack of mental health services and support is a working condition that staff cannot ignore and an urgent need. More importantly, moderation is becoming, to a certain degree, a key element that determines the user experience on social media.
Content removal and account bans without proper or convincing reasons, whether by human moderators or AI, are the outcomes internet users want least. The situation could be improved if moderators received better treatment, including proper training and higher pay, and if the use of AI moderation were regulated.
References
Bruckman, A., Curtis, P., Figallo, C., & Laurel, B. (1994). Approaches to managing deviant behavior in virtual communities. In Conference Companion on Human Factors in Computing Systems (CHI '94). ACM, New York, NY, USA.
Grimmelmann, J. (2015). The virtues of moderation. Yale Journal of Law & Technology, 17, 42.
Facebook and Instagram ‘will ALLOW posts calling for Putin’s death and violence against Russian soldiers and civilians’ in a temporary change to its hate speech policy. (2022, March 11). Daily Mail UK. Retrieved from https://www.dailymail.co.uk/news/article-10600831/Facebook-Instagram-ALLOW-posts-calling-Putins-death.html
Flew, T. (2019). Platforms on Trial. Intermedia, 46(2), 18-23.
Hans News Service. (2021). Instagram Down: Errors in Messages as Global Outage Affects Million Users [Image]. Retrieved from https://www.thehansindia.com/technology/tech-news/instagram-down-errors-in-messages-as-global-outage-affects-million-users-704654
Just, N., & Latzer, M. (2017). Governance by algorithms: reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238–258.
Latzer, M. (2013). Media convergence. In R. Towse & C. Handke (Eds.), Handbook of the Digital Creative Economy (pp. 123–133). Cheltenham: Edward Elgar.
Legal gaps, lack of enforcement revealed in Pornhub policies around exploitive videos. (2021, February 28). Global News Canada. Retrieved from https://globalnews.ca/news/7668018/pornhub-policies-lack-enforcement/
Meta. (2022). Community Guidelines. Retrieved March 14, 2022, from https://help.instagram.com/477434105621119
Meta. (2022). Facebook Community Standards. Retrieved March 14, 2022, from https://transparency.fb.com/policies/community-standards/
‘Our job was to find weird excuses not to remove them’: PornHub moderators, who watched 1,200 videos A DAY, reveal lenient guidelines at the site being sued for $80m for ‘profiting from sex trafficking’. (2020, December 18). Daily Mail Australia. Retrieved from https://www.dailymail.co.uk/news/article-9065059/Ex-PornHub-moderators-reveal-life-inside-explicit-video-site-sued-80m.html
Paech, V. (2022). MECO6942 Intensive Workbook, January 2022.
Pornhub sued by 40 Girls Do Porn sex trafficking victims. (2020, December 16). BBC News. Retrieved from https://www.bbc.com/news/technology-55333403
Ruvic, D. (2020). A Facebook logo is displayed on a smartphone in this illustration taken January 6, 2020 [Image]. Reuters. Retrieved from https://www.reuters.com/world/europe/exclusive-facebook-instagram-temporarily-allow-calls-violence-against-russians-2022-03-10/
Suzor, N. P. (2019). Lawless: The Secret Rules That Govern Our Digital Lives. Cambridge University Press.
Tripplaar, K. (2019). A logo sign outside of the headquarters of MindGeek in Montreal, Quebec, Canada, on April 21, 2019 [Image]. Retrieved from https://www.alamy.com/a-logo-sign-outside-of-the-headquarters-of-mindgeek-in-montreal-quebec-canada-on-april-21-2019-image247814955.html
Vengattil, M., & Culliford, E. (2022, March 11). Facebook allows war posts urging violence against Russian invaders. Reuters. Retrieved from https://www.reuters.com/world/europe/exclusive-facebook-instagram-temporarily-allow-calls-violence-against-russians-2022-03-10/