How Social Media Platforms Contribute to the Spread of Online Hate Speech: Taking 8Chan as an Example

Wall shouting (Germain, 2018). https://unsplash.com/photos/UdB_8NYVAdg

With the development of new media technology, online social media has become an increasingly popular tool for people to communicate with each other. However, as platforms have grown more active and users have flocked to them, they have also become a breeding ground for online hate speech. For example, the anti-Muslim, anti-Islamic hashtag #stopIslam trended on Twitter after the 2016 Brussels attacks (Poole, Giraud, & de Quincey, 2021). Online hate speech of this kind can target individuals or more vulnerable groups. It conveys or provokes hatred towards an individual or group by attacking one or more of their characteristics (e.g., sexual orientation or beliefs), and it is inflammatory (Parekh, 2012). In the current online world, the reach of hate speech is expanding at an unprecedented rate (Flew, 2021). This phenomenon calls for better governance by social media platforms. In fact, some platforms not only fail to respond well to online hate speech but even contribute to it. This article will discuss the governance of online hate speech on social media platforms. Using the practice of 8Chan as an example, I will analyse how 8Chan has failed to respond, while its anonymity, community culture, platform nature, and policies contribute to online hate speech.

Figure 1. Editorial, work from home (Heath, 2018).

What Happened?

Jury Reaches Verdict: Life in Prison for Poway Synagogue Gunman (Ojeda, 2021)

In April 2019, a shooting took place at a synagogue in Poway, California, in the United States. The attack killed one woman and wounded three others. The attacker was a 19-year-old student named John Earnest, who was later arrested by police. Before the shooting, Earnest had posted a racist manifesto on a forum called 8Chan (Fisogni, 2020). In the manifesto, Earnest portrayed Jews as a sinful race who enslaved other races and orchestrated genocide in Europe, and he demonstrated a clear tendency towards white supremacy (Earnest, as cited in Nilsson, 2022). Although the reasons a person decides to carry out a killing can be multifaceted, there is evidence that Earnest’s motives for the shooting were directly linked to the 8Chan forum. Unlike many other shooters, Earnest came from a harmonious and loving family. But before committing the crime, he spent a great deal of time on 8Chan, a forum dominated by the extreme far right and white supremacists. Interacting with its users, Earnest absorbed this extremist environment, which laid the groundwork for the subsequent massacre. This is also reflected in his manifesto, where Earnest framed the massacre as a form of glory, a way to gain fame among his fellow 8Chan users (Cullings, 2020). Indeed, according to Carlson and Frazer (2018), as an online social media platform, 8Chan can amplify negative and extreme statements, and such amplification can have a disproportionately strong psychological and ideological impact on younger users like Earnest.


United States Policy on Speech: The First Amendment

This article argues that 8Chan’s failure to manage online hate speech contributed to the killing. But before discussing why 8Chan did not curb the horrific rhetoric on its platform or stop Earnest in time, it is necessary to examine the regulatory and political context of the United States.

Figure 2. Freedom of speech (Winkler, 2020).

First come U.S. regulations on hate speech. According to scholars, unlike many other countries, the United States has no clear regulations on hate speech; the First Amendment even extends protection to it (Demaske, 2020a). Under the First Amendment, intervention in hate speech is warranted only when it causes direct harm, or clearly demonstrates harm, to a particular group of people or to individuals (Asafo, 2021). The First Amendment’s protection of hate speech is rooted in the traditional American notion of freedom of speech. Historically, American scholars have argued that free speech is the highest form of freedom and may be regulated only under extreme circumstances (Demaske, 2020b). These scholars emphasise that freedom of expression is a prerequisite for the proper operation of government, stress its supremacy, and insist that even people’s right to hate speech must be defended (Demaske, 2020c). In terms of legal regulation at the national level, then, the United States, where the synagogue shooting took place, lacks regulation of hate speech. As a result, in the case of 8Chan, it is difficult to assess whether Earnest’s manifesto of slaughter required intervention, because it is hard to determine whether the manifesto caused immediate harm under First Amendment doctrine; the line of judgement is unclear. Second, other academics have pointed out that it is especially difficult to distinguish hate speech from legitimate political advocacy in the Western political context. Ordinary political advocacy has a wide scope: race, immigration, and even radical right-wing ideas can all count as political topics (Barendt, 2019). Analysed in this way, Earnest’s racist statements on the 8Chan forum could be considered “political advocacy” rather than hate speech.

Hate speech is therefore not clearly defined and is, to some extent, protected by regulations at the national level in America. This is where the pressure and responsibility for hate speech governance fall on the platform itself, 8Chan. Unfortunately, 8Chan has not only failed to govern hate speech successfully; its anonymity, community culture, platform nature, and policies have even fostered extreme racists like Earnest. In this way it has contributed to online hate speech, a phenomenon that can also be called platformed racism (Matamoros-Fernández, 2017).


How Did 8Chan Fail to Deal With Online Hate Speech?

Figure 3. Person wearing a mask sitting on chair while using a computer (Miroshnichenko, 2020).

Platformed racism is the phenomenon whereby online social media networks promote racism through their mechanisms, technologies, cultures, policies, and other factors (Matamoros-Fernández, 2017). In the case of 8Chan, the first point to mention is its anonymity. As an extension of the “chan” family of online forums, 8Chan uses a comprehensive anonymity system, a defining feature of chan-style boards (Ludemann, 2018). Users can post their thoughts and comments as they like without revealing any real information about themselves. This free and loosely regulated anonymous environment provides a place for people like Earnest to exchange extreme rhetoric and facilitates the spread of online hate speech such as extreme racism (Lavi, 2020). Hate speakers can use platforms like 8Chan to share opinions, post extreme rhetoric, and even circulate information about offline rallies without fear of exposing their identities. A second contributor to online hate speech is 8Chan’s community culture. According to Baele, Brace, and Coan (2020), 8Chan derives from the online forum 4chan, which is filled with violence, pornography, and other extreme content, and 8Chan inherited this extreme environment. Consequently, at the level of community culture, the platform itself implicitly encourages the posting of extreme speech, making it possible for users like Earnest to post hate speech freely.

The third point is the platform nature of 8Chan itself. As an online social media platform, 8Chan offers a variety of formats for posting. Users can post plain text, but they can also attach videos, memes, hyperlinks, and more. This variety of formats, especially memes, allows hate speech on 8Chan to be republished quickly on other platforms such as YouTube, while reducing the likelihood of detection by moderation algorithms. Through other platforms’ mechanisms, such as reposts, likes, search engines, and recommendation algorithms, extreme posts are then pushed to potentially extremist audiences, accelerating the spread of hate speech (Rauf, 2020). Other scholars have made similar arguments: the communication of extremist groups that publish hate speech is not limited to one new media platform, and hate speech can spread across multiple platforms (Bryant, 2020). This can create an invisible channel of communication between social media networks, such as 8Chan and YouTube, which contributes to the proliferation of hate speech online. The sketch below illustrates why non-text formats are harder to catch.
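To make the detection-evasion point concrete, here is a minimal, purely illustrative sketch in Python. The Post structure, the placeholder blocklist, and the filter are invented for this example; they are not 8Chan’s or any platform’s actual moderation pipeline. The point is simply that a filter which only scans text will pass a post whose hateful message is baked into a meme image.

```python
# Illustrative only: a toy text-based filter; the Post type and blocklist
# are invented for this example, not any platform's real moderation system.
from dataclasses import dataclass
from typing import Optional

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms, not a real hate lexicon


@dataclass
class Post:
    text: str
    image_bytes: Optional[bytes] = None  # an attached meme image, if any


def is_flagged(post: Post) -> bool:
    """Flag a post only if its *text* contains a blocklisted term."""
    words = {w.strip(".,!?").lower() for w in post.text.split()}
    return bool(words & BLOCKLIST)


# A plain-text post containing a blocklisted term is caught...
print(is_flagged(Post(text="some slur1 rhetoric")))               # True
# ...but the same message baked into a meme image slips through,
# because the filter never inspects image_bytes.
print(is_flagged(Post(text="lol", image_bytes=b"<meme bytes>")))  # False
```

Real moderation systems add tools such as optical character recognition, image hashing, and machine-learning classifiers, but the asymmetry sketched here is one reason meme-based hate speech can travel further before being detected.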

Figure 4. A man using YouTube (Viktor, 2018).

One final point to mention is 8Chan’s speech policy. According to Marique and Marique (2020), 8Chan’s founder sought to make it a platform where users’ speech is absolutely protected. This means that when Earnest posted racist speech and his attack plan on 8Chan, he did not have to worry much about the post being blocked, and the post was unlikely to disappear from 8Chan promptly, because his racist comments about Jews could be counted as free speech and protected by 8Chan. This lenient policy, combined with the anonymity, the extreme community culture, and 8Chan’s social media properties, has facilitated the spread of hate speech within the platform.

It is also worth noting that social media platforms such as 8Chan are tools that facilitate the spread of hate speech online rather than its sole determinant. Gill et al. (2017) stress that the escalation of extreme speech and behaviour needs to be analysed case by case, depending on the communicators’ ultimate aims, ideology, and other factors.


How Could Social Media Platforms Deal With Online Hate Speech?

So how could social media platforms respond to the problem of online hate speech? Academics offer different answers. Donovan (2019) argues, first, that censorship of hate speech should be strictly regulated at the national level and, second, that social media platforms, as the main places where online hate speech occurs, should take responsibility for dealing with it, for example by deleting posts and blocking users who publish extreme speech. Abderrouaf and Oussalah (2019), meanwhile, argue that the detection of online hate speech is crucial: detection tools need better training to improve their content-recognition abilities. Klompmaker (2019), however, offers a different perspective on the regulation of online content, worrying that overly strict restrictions on extreme speech could push hate speakers’ communication onto more obscure platforms, while also infringing on their opportunity to speak out.
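To give a concrete sense of what “training a hate speech detection tool” can mean, here is a minimal sketch of a supervised text classifier in Python. The tiny inline dataset, labels, and model choice are my own illustrative assumptions, not the specific method of Abderrouaf and Oussalah (2019); real systems are trained on large labelled corpora with far richer models.

```python
# A minimal sketch of a supervised hate speech classifier.
# The four example texts and their labels are invented placeholders;
# no real dataset or deployed system is represented here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples: 1 = hateful, 0 = benign.
texts = [
    "group X are vermin and should be driven out",   # hateful
    "I disagree with this policy on immigration",    # benign political speech
    "people of group X deserve to suffer",           # hateful
    "great community event at the synagogue today",  # benign
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a standard baseline pipeline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# The trained model scores unseen posts; a platform could then route
# high-probability cases to human moderators. With only four training
# examples the score itself is meaningless -- the point is the workflow.
print(model.predict_proba(["group X are vermin"])[0][1])
```

“Better training” in this context means larger and better-labelled datasets and stronger models, so that the classifier recognises more varieties of hateful content, including obfuscated or negated phrasing.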

However, I personally disagree with Klompmaker’s view. This blog argues that online hate speech is inherently offensive to a person or group of people and, as the Earnest incident shows, can cause real harm; it should therefore be restricted at its root. While overly restrictive policies can have adverse effects, online hate speech should not be left unregulated. In terms of concrete measures, clear policies at the national and platform levels are important, as is the upgrading of hate speech detection tools. More important still, I think, is adopting different mechanisms for different social media platforms depending on their nature. In the Earnest case, for example, 8Chan is a more hidden and niche platform than new media platforms such as YouTube and Twitter, and it may therefore require more stringent and specific strategies against online hate speech.


Conclusion

In general, this blog has used 8Chan as a starting point to discuss the role of its anonymity, community culture, platform attributes, and policies in promoting online hate speech. In doing so, it has explored possible ways for social media platforms to respond to online hate speech. I have also argued that online hate speech needs to be strictly limited and that specific strategies need to be adapted to different social media platforms.

However, this article has not specified concrete actions for dealing with online hate speech across different countries, cultures, and forms of social media platform. That remains a question to be answered.


References

Abderrouaf, C., & Oussalah, M. (2019). On Online Hate Speech Detection. Effects of Negated Data Construction. 2019 IEEE International Conference on Big Data (Big Data), 5595–5602. IEEE. https://doi.org/10.1109/BigData47090.2019.9006336

Asafo, D. (2021). Confronting the lies that protect racist hate speech: Towards Honest Hate Speech Laws in New Zealand and the United States. UCLA Pacific Basin Law Journal, 38(1), 1–. https://doi.org/10.5070/P838153630

Baele, S. J., Brace, L., & Coan, T. G. (2020). The “Tarrant effect”: What impact did far-right attacks have on the 8chan forum? Behavioral Sciences of Terrorism and Political Aggression, 1–23. https://doi.org/10.1080/19434472.2020.1862274

Barendt, E. (2019). What Is the Harm of Hate Speech? Ethical Theory and Moral Practice, 22(3), 539–553. https://doi.org/10.1007/s10677-019-10002-0

Bryant, L. V. (2020). The YouTube Algorithm and the Alt-Right Filter Bubble. Open Information Science, 4(1), 85–90. https://doi.org/10.1515/opis-2020-0007

Carlson, B., & Frazer, R. (2018). Social Media Mob: Being Indigenous Online. Sydney: Macquarie University. https://researchers.mq.edu.au/en/publications/social-media-mob-being-indigenous-online

Cullings, F. (2020). Alt-Right Influence on the Radicalization of White Nationalists in the United States, According to Significance Quest Theory (Master’s thesis, Naval Postgraduate School, United States). Retrieved from https://apps.dtic.mil/sti/pdfs/AD1114619.pdf

Demaske, C. (2020a). Favoring free speech: The U.S. response. In Free Speech and Hate Speech in the United States: The Limits of Toleration. https://doi.org/10.4324/9781003046851

Demaske, C. (2020b). First amendment theories: Arguments and counter arguments. In Free Speech and Hate Speech in the United States: The Limits of Toleration. https://doi.org/10.4324/9781003046851

Demaske, C. (2020c). Introduction. In Free Speech and Hate Speech in the United States: The Limits of Toleration. https://doi.org/10.4324/9781003046851

Donovan, J. (2019). Navigating the Tech Stack: When, Where and How Should We Moderate Content? Retrieved 5 April 2022, from https://www.cigionline.org/articles/navigating-tech-stack-when-where-and-how-should-we-moderate-content/

Fisogni, P. (2020). Extremism, manifestos and the contagion of evil: The new wave of terrorism in the online world. In M. E. Korstanje (Ed.), Allegories of a Never-Ending War. Retrieved from https://d1wqtxts1xzle7.cloudfront.net/63360740/book_ALLEGORIES_OF_A_NEVER_ENDING_WAR20200519-44607-y3qasp-with-cover-page-v2.pdf#page=72

Flew, T. (2021). Regulating Platforms. Cambridge, UK: Polity Press.

Gill, P., Corner, E., Conway, M., Thornton, A., Bloom, M., & Horgan, J. (2017). Terrorist Use of the Internet by the Numbers: Quantifying Behaviors, Patterns, and Processes. Criminology & Public Policy, 16(1), 99–117. https://doi.org/10.1111/1745-9133.12249

Klompmaker, N. (2019). Censor Them at Any Cost? A Social and Legal Assessment of Enhanced Action Against Terrorist Content Online. Amsterdam Law Forum, 11(3), 3–. https://doi.org/10.37974/ALF.336

Lavi, M. (2020). Do platforms kill? Harvard Journal of Law and Public Policy, 43(2), 477–573.

Ludemann, D. (2018). pol/emics: Ambiguity, scales, and digital discourse on 4chan. Discourse, Context & Media, 24, 92–98. https://doi.org/10.1016/j.dcm.2018.01.010

Marique, E., & Marique, Y. (2020). Sanctions on digital platforms: Balancing proportionality in a modern public square. Computer Law & Security Review, 36, 105372. https://doi.org/10.1016/j.clsr.2019.105372

Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946.

Nilsson, P.-E. (2022). Manifestos of White Nationalist Ethno-Soldiers. Critical Research on Religion. https://doi.org/10.1177/20503032211044426

Parekh, B. (2012). Is There a Case for Banning Hate Speech? In The Content and Context of Hate Speech (pp. 37–56). Cambridge University Press. https://doi.org/10.1017/CBO9781139042871.006

Poole, E., Giraud, E. H., & de Quincey, E. (2021). Tactical interventions in online hate speech: The case of #stopIslam. New Media & Society, 23(6), 1415–1442. https://doi.org/10.1177/1461444820903319

Rauf, A. A. (2020). New Moralities for New Media? Assessing the Role of Social Media in Acts of Terror and Providing Points of Deliberation for Business Ethics. Journal of Business Ethics, 170(2), 229–251. https://doi.org/10.1007/s10551-020-04635-w