Facebook’s Regulation on Online Hate Speech

Featured image retrieved from https://www.axios.com/hate-speech-online-soars-after-george-floyds-death-463871bb-dfe4-4b0c-becc-e59cdcfb6336.html

Introduction

With the rise of the Internet and the development of Information and Communication Technology (ICT), a virtual space has been created for action, interaction and the exchange of information (Weber, 2010). Social networks have brought together people from around the world and enabled an efficient way of communicating. Social Network Sites (SNSs) like Facebook, Twitter and Instagram allow people to connect with others as well as share ideas, experiences and information. There are around 2.9 billion Facebook users and 2.2 billion YouTube users (Statista, 2022).

Yet, SNSs can also be an ideal place for the proliferation of harmful content (Del Vigna et al., 2017). Hate speech was a major concern long before the creation of the Internet, and it has been amplified through the growth of social media platforms (Flew, 2021). Research suggests that online hate speech grew by around 20% during the pandemic. We have also seen several high-profile cases of online hate speech, such as racist hate-speech videos targeting the Mayor of London being shared on Facebook. Moreover, online extremist narratives can lead to real-world events that put people’s lives in danger (Castaño-Pulgarín et al., 2021), such as the Christchurch massacre, which was live-streamed on Facebook.

 

The Internet and social media platforms

The rise of online hate speech is closely related to the rise and development of the Internet. The Internet provides an ideal environment for individuals and extremists to express and promote hate because of its anonymity, mobility and immediacy (Banks, 2010). That is to say, users cannot easily be traced through what they say or do on the Internet. It is like randomly hitting someone and running off into a crowd: the victim will never find out who did it. Moreover, social media platforms create a space that allows for social interaction and self-expression (Alkiviadou, 2019). At the same time, they enable like-minded people to form online hate groups and sites in order to carry out hate-related activities (Banks, 2010). On these largely unregulated, depersonalised networks, people have trouble controlling their aggression and do not hesitate to use sharp, negative language or even hate speech.

 

Freedom of speech and hate speech: where is the red line? 

So, you might ask, where is the red line between freedom of speech and hate speech? Unfortunately, no one can answer this question, as the boundary between hate speech and freedom of speech is blurry. 

Freedom of speech is protected by international law: Article 19 of the International Covenant on Civil and Political Rights (1966) states that “everyone shall have the right to freedom of expression”. Hate speech, on the other hand, has no agreed definition in any law. Yet it often tests the limits of free speech and human dignity (Chetty & Alathur, 2018).

 

Definition of hate speech: what is hate speech?

There is no international legal definition of “hate speech”. This is because interpretations of free speech differ across regions, and it is controversial and hard to define what counts as harmful or evil (United Nations, 2019; Alkiviadou, 2019). People have their own points of view shaped by their beliefs and values. Take Thanos in the Avengers: he has his own reason for wiping out half of all life, which is to stop the rapid consumption of resources and ensure a brighter future for those who remain. He views himself as a good guy, a visionary saviour, even though the entire universe is fighting against him.

Figure 1. Thanos snapping his fingers, retrieved from https://www.denofgeek.com/movies/thanos-snap-avengers-endgame/

To continue, we need to start on the same page and agree on a working definition. In general, hate speech is directed at individuals or specific (often marginalised) groups in order to express hostility towards them (Chetty & Alathur, 2018; Flew, 2021). It can be even more destructive when used against a traditional symbol, event or activity, and in relation to an individual’s nation, race, religion, gender or other personal identities (Chetty & Alathur, 2018). Hence, we can define hate speech as any kind of speech that targets an individual or a group with the intent to hurt or disrespect them on the basis of identity (Chetty & Alathur, 2018; Castaño-Pulgarín et al., 2021).

 

Facebook as a social media platform

As the global Internet has grown, calls for better monitoring of social media platforms have also increased (Kalsnes & Ihlebæk, 2021). Companies have begun to establish content regulation rules to address the issue of hate speech. Facebook (now known as Meta), as the most popular social media platform, has received negative attention over its decision-making process due to a lack of restraint and transparency (Kalsnes & Ihlebæk, 2021).

Figure 2. Logo of Meta, retrieved from https://www.theapplepost.com/2021/10/28/facebook-changes-name-to-meta/

Regulation of hate speech by social networks: The Code of Conduct

The European Commission signed the Code of Conduct with four IT companies (Facebook, Microsoft, YouTube and Twitter) in 2016. It aims to stop the proliferation of racist and xenophobic hate speech online. The companies are asked to respond to users’ requests to remove illegal hate speech content in less than 24 hours.
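
To make the 24-hour commitment concrete, here is a minimal sketch of how such a deadline could be checked. This is purely illustrative: the timestamps are invented, and it is not how the signatory companies actually audit compliance.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the Code of Conduct's 24-hour rule:
# given when a post was reported and when it was reviewed,
# check whether the review met the deadline. Timestamps are invented.
DEADLINE = timedelta(hours=24)

reports = [
    ("post_1", datetime(2022, 3, 1, 9, 0), datetime(2022, 3, 1, 20, 0)),
    ("post_2", datetime(2022, 3, 1, 9, 0), datetime(2022, 3, 3, 11, 0)),
]

for post_id, reported_at, reviewed_at in reports:
    on_time = reviewed_at - reported_at <= DEADLINE
    print(post_id, "reviewed within 24 hours" if on_time else "deadline missed")
```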

Currently, Facebook has over 15,000 human content moderators globally reviewing users’ posts on a daily basis (Meta, 2022). According to the Meta Community Standards (2022), hate speech is defined as “a direct attack on people” based on their “race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease”. However, given that the regulatory process relies heavily on personal decision-making by content reviewers, users and reviewers have often raised controversy over how the regulations are enforced (Guo & Johnson, 2020).
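
To see why so much of the process still falls to human reviewers, consider how crude an automated rule can be. The toy flagger below is a minimal sketch under my own assumptions (the word lists are invented, and it is in no way Meta’s actual system): it cannot tell an attack on a protected group from a discussion about such attacks.

```python
# A deliberately naive, rule-based hate-speech flagger.
# Purely illustrative: the word lists are invented and this
# is not Meta's actual moderation system.
PROTECTED_TERMS = {"race", "religion", "nationality", "gender"}
ATTACK_WORDS = {"vermin", "inferior", "subhuman"}

def flag_post(text: str) -> bool:
    """Flag a post if it mentions a protected attribute
    alongside attacking language."""
    lowered = text.lower()
    mentions_protected = any(term in lowered for term in PROTECTED_TERMS)
    sounds_hostile = any(word in lowered for word in ATTACK_WORDS)
    return mentions_protected and sounds_hostile

print(flag_post("People of that religion are vermin"))    # True: a direct attack
print(flag_post("Calling a religion 'vermin' is hate speech and must stop"))  # True: a false positive
print(flag_post("You are all absolutely worthless"))      # False: abuse missed, no protected term named
```

Note how the second post, which condemns hate speech, is flagged anyway, while the third slips through because it names no protected attribute. This kind of context problem is exactly why human judgement remains central, and it foreshadows the wrongful removals discussed later in this post.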

 

Facebook users’ perspectives on the regulation

How do we, as users, respond to hateful content on Facebook? To report a post or comment that makes us feel uncomfortable or angry, we need to go through the steps shown below. First, we click on the three dots in the top right corner of the post. A list of options then comes up (as seen in the screenshots below). Interestingly, “Report post” is not the first option; “Hide post”, “Snooze the page for 30 days” and “Unfollow the page” are positioned above it. We then also need to “select a problem” from a list.

Figure 3. Screenshot from Facebook

The simple sequence of reporting a post indicates two things: 

  1. Facebook has assigned the task to the user. We are now responsible for identifying and judging a piece of content, and for determining whether a report is warranted.
  2. Facebook is relatively unwilling to give priority to content deletion (Siapera & Viejo-Otero, 2021).

That is to say, the platform is not intent on managing and removing harmful content itself. Rather, it prioritises individual solutions and asks us to self-regulate our feeds by hiding or unfollowing pages.

 

Third-person effect

I believe that we are all familiar with the reporting process, but how many of us will actually do it? More importantly, are we able to correctly identify and categorise this harmful content?

Surprisingly, although Facebook gives users the freedom to report, they are less likely to flag hate speech (Guo & Johnson, 2020). This is related to the third-person effect (TPE): the illusion that we are better than others. In the context of social media platforms, people tend to think that biased influence from the media will not affect “me” or “you”, but rather “them”, the third person (Guo & Johnson, 2020).

For example, when I see a piece of negative news, I assume that my ability to judge whether it is good or bad is better than other people’s. In this way, we overestimate our own judgement on the Internet. Hence, when platforms try to tackle hate speech through user reporting, they run into this arrogance of human nature, the TPE. As a result, regulations that rely on users to mitigate online hate speech are not very effective.
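
In the TPE literature, this perceptual gap is commonly operationalised as the difference between the media effect respondents perceive on others and the effect they perceive on themselves. Below is a minimal sketch of that computation; the survey scores are invented for illustration.

```python
# Third-person perception is commonly operationalised as:
# perceived effect on others minus perceived effect on self.
# The 1-7 survey ratings below are invented for illustration.

def third_person_perception(effect_on_others: float, effect_on_self: float) -> float:
    """Positive values suggest a third-person effect:
    'hate speech influences them more than it influences me'."""
    return effect_on_others - effect_on_self

respondents = [(6.0, 2.0), (5.5, 3.0), (6.5, 2.5)]  # (others, self) ratings
gaps = [third_person_perception(o, s) for o, s in respondents]
print(sum(gaps) / len(gaps))  # mean perceptual gap; 3.5 in this toy sample
```

A positive mean gap like this is what TPE studies read as evidence that people see themselves as less susceptible than others.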

 

Facebook’s double standards: allowing hate speech toward Russians

Figure 4. Vladimir Putin and Mark Zuckerberg, retrieved from https://www.euronews.com/next/2022/03/25/as-russia-bans-facebook-and-instagram-what-alternatives-will-russian-social-media-users-tu

On March 10, 2022, Reuters reported, citing internal emails, that Meta Platforms had made a temporary change to its hate speech policy. The company would allow Facebook and Instagram users in selected countries to call for violence against Russians and Russian soldiers in the context of the invasion of Ukraine. Some posts even called for the death of Russian President Vladimir Putin and Belarusian President Alexander Lukashenko. Meta said that political expression that would normally violate its rules would be allowed, whereas calls for violence against civilians would not be permitted. Moreover, Meta also allowed praise of a Ukrainian neo-Nazi paramilitary group, which had previously been prohibited.

In response, Russia’s embassy demanded that the United States “stop the extremist activities of Meta and take measures to bring the perpetrator to justice” (Reuters, 2022). At the same time, Russia blocked access to Facebook and Twitter. The Internet has turned into a battlefield without flame or smoke, and social media platforms are being used as a propaganda machine for the United States and its allies. This role has become hard to conceal as the media have inflamed anti-Russian hysteria.

Over the past few years, social media platforms have been working on regulating harmful content and incitement in order to foster a healthy online environment. This was also the rationale behind the unprecedented decision by the tech giants to ban Donald Trump from their platforms at the start of 2021.

Figure 5. Donald Trump being banned by social media platforms, retrieved from https://www.washingtonpost.com/technology/2021/01/11/trump-banned-social-media/

However, the policy against hate speech has now been diluted for one group only: the Russians. This move is likely to further inflame an already tense atmosphere between Russia and other countries. The double standard reveals that social media platforms only support speech that aligns with Western positions.

Yet this is not the first time double standards have appeared. During the 2019 Hong Kong riots, Facebook and Twitter suspended and removed accounts. On the one hand, they claimed they were blocking “a state-backed Chinese misinformation campaign” intended to “sow political discord in Hong Kong” (Inocencio, 2019). On the other hand, they ignored the serious damage done by the rioters to the public in the physical world. Moreover, the platforms spread misinformation to their audiences with the intention of stirring up hatred and violence toward the Chinese government. The truth is that these were simply the ordinary personal accounts of Chinese citizens, who spontaneously posted on the platforms to call for an end to the rioters’ violence and to introduce Chinese policy to people around the world.

Another example can be seen in the Black Lives Matter protests of 2020. Louiza Doran, an anti-racism activist, received notifications from Facebook and Instagram saying that posts relating to her had been flagged and removed for violating the Community Standards. Facebook has also wrongly flagged and taken down discussion posts about racism and white supremacy, claiming that any such censorship consisted of “mistakes, and they were certainly not intentional”.

I would like to pose some questions to the IT companies about the Code of Conduct. I believe that the policy should apply to everyone equally. If the companies have the power to change the standards however and whenever they like in order to fit their interests, what was the point of setting them up in the first place? Although social media platforms are responsible for protecting and ensuring users’ safety, who are they really protecting?

 

Conclusion

Overall, as the Internet continues to develop, the issue of online hate speech has arisen and intensified the need for regulation on social media platforms. The line between hate speech and freedom of speech is blurry, and the definition of “hate speech” is hard to unify across nations due to a lack of discriminative features as well as cultural differences (Castaño-Pulgarín et al., 2021).

Regulations and policies need to be set up and enforced to address the issue of online hate speech raised by developing digital technology. The Code of Conduct signed between the European Commission and the IT companies is a good starting point for regulating hate on SNSs, and we can see the effort they have made to address hate speech and build a better place for communication. However, taking Facebook as an example, questions still arise around the enforcement of the Code. Firstly, it relies heavily on users’ willingness and knowledge, as the platform only needs to take action after harmful content has been reported (Alkiviadou, 2019). Secondly, users tend not to identify and flag hate speech because of the TPE (Guo & Johnson, 2020).

Moreover, Facebook itself violates its own Community Standards and applies double standards, as seen in the recent change in policy on hate speech toward Russians, as well as in its inconsistent actions and attitudes towards the Hong Kong riots and Black Lives Matter.

Therefore, the public should demand greater transparency in how policies are set out and enforced by the IT companies. More importantly, we, as Internet citizens, should learn to recognise harmful content and make an effort to counteract it. If you see others spreading unacceptable content, do not pass it by. Be sure to report it on the spot.

 

 

References

Alkiviadou, N., (2019). Hate speech on social media networks: towards a regulatory framework?. Information & Communications Technology Law, 28(1), 19-35. https://doi.org/10.1080/13600834.2018.1494417

Banks, J. (2010). Regulating hate speech online. International Review of Law, Computers & Technology, 24(3), 233-239. https://doi.org/10.1080/13600869.2010.522323

Castaño-Pulgarín, S. A., Suárez-Betancur, N., Vega, L. M. T., & López, H. M. H. (2021). Internet, social media and online hate speech: Systematic review. Aggression and Violent Behavior, 58, 101608. https://doi.org/10.1016/j.avb.2021.101608

Del Vigna, F., Cimino, A., Dell’Orletta, F., Petrocchi, M., & Tesconi, M. (2017). Hate me, hate me not: Hate speech detection on Facebook. In A. Armando, R. Baldoni, & R. Focardi (Eds.), Proceedings of the First Italian Conference on Cybersecurity (ITASEC17) (pp. 86-95). Venice: CEUR.

Flew, T. (2021). Regulating platforms. Cambridge, UK: Polity.

Guo, L., & Johnson, B. G. (2020). Third-Person Effect and Hate Speech Censorship on Facebook. Social Media + Society, 1-12. https://doi.org/10.1177/2056305120923003

Kalsnes, B., & Ihlebæk, K. A. (2021). Hiding hate speech: political moderation on Facebook. Media, Culture & Society, 43(2), 326–342. https://doi.org/10.1177/0163443720957562

Siapera, E., & Viejo-Otero, P. (2021). Governing Hate: Facebook and Digital Racism. Television & New Media, 22(2), 112–130. https://doi.org/10.1177/1527476420982232

United Nations. (1966). International Covenant on Civil and Political Rights. Retrieved from https://www.ohchr.org/en/instruments-mechanisms/instruments/international-covenant-civil-and-political-rights

United Nations. (2019). United Nations Strategy and Plan of Action on Hate Speech. Retrieved from https://www.un.org/en/genocideprevention/hate-speech-strategy.shtml 

Weber, R. (2010). Shaping Internet Governance: Regulatory Challenges. Cham: Springer.