Introduction
Freedom of expression and speech are basic constitutional rights that have, for decades, allowed people to express themselves without fear. In the 21st century, however, these guaranteed freedoms are being misused by certain groups, particularly on the Internet. Speech used to convey, evoke, and express hatred against specific groups is known as Hate Speech (Parekh, 2012). Hate Speech targets different groups in society based on their race, gender, ethnicity, sexual orientation, and so on (Parekh, 2012). That is, Hate Speech targets certain marginalised groups in society and makes them objects that continuously remain at the receiving end of hatred and intimidation. Internet and social media platforms also play a crucial role in moderating content that can be categorised as Hate Speech; yet these platforms sometimes reinforce and even amplify speech that is plagued with hostility and hatred (Matamoros-Fernández & Farkas, 2021). This blog will explain the concept of Hate Speech in the age of the internet and social media. It will argue that Hate Speech further marginalises already marginalised groups in society by making them objects of hostility, and that platforms fail to correct this even though they have a responsibility towards the groups at the receiving end of Hate Speech.
Hate Speech

(Source: Hänel, 2022)
Hate Speech has become a major subject of concern in recent times because it not only degrades the online environment but also puts certain groups at a perpetual disadvantage in the online realm. Hate Speech is immensely derogatory towards certain groups and can incite violent acts against them; at the same time, Hate Speech does not always lead to violence (Castano-Pulgarín & Suárez-Betancur, 2021). Sometimes Hate Speech simply creates an ambience of intolerance, discrimination, prejudice, and hostility against certain groups, ridiculing them on the basis of their ethnicity, sexuality, gender, race, caste, religion, and so on (Castano-Pulgarín & Suárez-Betancur, 2021). In simple terms, Hate Speech is not violent per se, but it can produce an environment that is conducive to violence, hate, and hostility against certain groups. By producing such an environment of hate and prejudice, Hate Speech establishes the idea that some groups in society deserve to be hated and discriminated against because of their social identities.
Research has also examined the way Hate Speech targets particular groups based on their social identities. One study has shown that racial minority groups are often the victims of online hate speech, wherein they are dehumanised by dominant groups who use Hate Speech to defend and validate their racist worldviews (Bliuc & Faulkner, 2018). Another study demonstrated that men of colour frequently become victims of Hate Speech in the online gaming environment and, over time, have learnt to remain silent in the face of repeated Hate Speech instead of confronting it (Ortiz, 2019). LGBT communities also receive constant hate in the online realm, where Hate Speech stigmatises these groups and normalises negative behaviour towards them (Ștefăniță & Buf, 2021). Hate speech against the LGBT community often leads LGBT people to blame themselves for their victimisation while questioning their existence (Costello & Rukus, 2018). These studies indicate that online Hate Speech does not allow certain groups to live with dignity and respect, as they are constantly humiliated. Online Hate Speech silences the agency of minority groups while reinforcing the dominant discourses of society. Minority groups who have historically been oppressed and silenced continue to be oppressed and silenced by Hate Speech in the online realm, where harmful attitudes towards them become the norm and they are compelled to tolerate it.
Cases: Adam Goodes and Anti-Asian Hate Speech

(Source: Perrie, 2019)
The case of Adam Goodes provides an effective insight into how marginalised groups are victimised by Hate Speech in the online realm. Adam Goodes is an Indigenous Australian football player who was racially vilified in 2015 after performing a war dance to celebrate a goal in the Australian Football League's Indigenous round (Matamoros-Fernández, 2017). On social media platforms like Facebook and Twitter, Goodes was humiliated, ridiculed, and vilified (Matamoros-Fernández, 2017). He was booed, and even though there were no explicit mentions of his race, comments like "Go back to Papua New Guinea" did have racial implications (Coram, 2016). It has been contended that the vilification and booing of Goodes on social media platforms reflect how he was treated like property, without any rights, as someone who did not belong to Australia, and that most of the booing came from a place of immense social privilege (Coram, 2016). The case of Adam Goodes implies that racial minorities not only become subjects of online Hate Speech but, through Hate Speech, are also established as the 'other' who do not belong within the dominant discourse of whiteness. Hate Speech in the online realm thus denies minority groups any rights and reduces them to people who deserve to be treated with hostility because they happen to be different from most people.

(Source: CGTN, 2021)
The case of Anti-Asian Hate Speech in the United States in the context of the COVID-19 pandemic also provides insight into how Hate Speech operates in the online realm. President Donald Trump, in one of his tweets, called the COVID virus the "Chinese virus", which resulted in a series of hateful and prejudiced comments against Asians on various social media platforms (Kim & Kesari, 2021). The speakers and spreaders of Hate Speech believed it was the responsibility of the Chinese people to pay for a virus they had spread (Kim & Kesari, 2021). Comments like "These oriental devils don't care about human life" and "Chinks will bring about the downfall of Western civilisation" regularly appeared on social media (Vidgen et al., 2020). Some of the tweets containing Hate Speech even equated Asian people with non-humans such as insects and viruses (Vidgen et al., 2020). The case of Anti-Asian Hate Speech in the online sphere indicates that Hate Speech creates an environment of prejudice against specific groups, who are not only 'otherised' by such speech but also portrayed as people unworthy of basic human respect and dignity. Asians have historically been discriminated against in the US, and the Hate Speech targeting them during the COVID-19 pandemic further legitimises that discrimination and establishes the idea that Asians should be viewed with contempt and treated with hostility.
The Role of Internet Platforms
Online Hate Speech takes place through the medium of platforms, and so Internet platforms have a crucial role to play in the phenomenon. Social media platforms like Twitter have become the topmost choice of politicians and journalists; however, a platform believed to advocate free speech in society has become a place for the proliferation of abuse and hate speech (Konikoff, 2021). Twitter's policies merely warn users about harmful and offensive content and state that the platform cannot monitor or be held responsible for such content, rather than prohibiting it; on this basis, studies have suggested that Twitter reinforces hateful content and abuse (Konikoff, 2021). Moreover, Facebook's regulatory policies, whereby it removes or keeps harmful content without any explanation, or in conformity with a country's legislation on removing undesirable content online, allow for the circulation of discriminatory Hate Speech on the platform (Ben-David & Matamoros-Fernandez, 2016). Facebook's technological affordances also support Hate Speech, as links to hateful and extremist content may not be detected as Hate Speech (Ben-David & Matamoros-Fernandez, 2016). Facebook also creates filter bubbles: if a person is racist, they are likely to come across more racist content (Ben-David & Matamoros-Fernandez, 2016). All this suggests that, because of their weak content-regulation policies and technological features, online platforms often prioritise the retention of offensive content over the protection of their users from speech that is hateful, prejudiced, and violent.
In the case of Adam Goodes, the issue of platformed racism has been explored. Twitter, Facebook, and YouTube allowed the vilification of Goodes through Hate Speech by letting it be disguised as humour, with hateful content against the Indigenous player shared in the form of memes (Matamoros-Fernández, 2017). The liking and sharing features also amplified racist Hate Speech on these platforms, as some of the hateful content was liked and shared thousands of times, lending it legitimacy (Matamoros-Fernández, 2017). Algorithms also played a crucial role: liking one page or image related to the racist discourse on Goodes led to suggestions of numerous other pages on the same issue (Matamoros-Fernández, 2017). On Twitter, users also employed tweet-and-delete strategies, tweeting Hate Speech against Goodes and later deleting it (Matamoros-Fernández, 2017). Twitter did not take action against such Hate Speech speakers because screenshots of the deleted posts were not admitted as valid evidence by the platform (Matamoros-Fernández, 2017). In simple terms, the moderation and regulation of hateful content on online platforms is deeply flawed: racism and racist discourses are covertly promoted as humour, or legitimised and reinforced on these platforms, rather than being challenged and removed. Hate Speech spreaders thus find a convenient medium through which to perpetuate hate, as they hardly face the danger of being removed or prohibited from online platforms.
The Anti-Asian Hate Speech that circulated on social media platforms was also not properly regulated or monitored. Social media platforms use a supervised machine learning approach, whereby a classifier is trained to detect content that can be considered Hate Speech (Lee & Li, 2020). However, during the COVID-19 pandemic, terms like 'Chinese Virus' and 'Kung Flu' were not detected by the classifier, and there is an increasing need to include evolving racist and xenophobic language in the classifier's training data (Lee & Li, 2020). This implies that, for lack of advanced technological tools that can quickly detect and moderate hateful content, social media platforms continue to promote and retain speech that is prejudiced and discriminatory. The lack of sophisticated policies that can moderate Hate Speech, technological affordances that amplify Hate Speech against minority groups, and the absence of advanced technological features together indicate that online platforms need to proactively invest resources into building a safe environment that protects their vulnerable users from the phenomenon of Hate Speech.
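The limitation described above can be illustrated with a deliberately simplified sketch. This is not the classifier any platform actually uses; it is a toy bag-of-words model with invented placeholder tokens (SLUR_A, NEW_SLUR) standing in for real slurs. The point it demonstrates is structural: a supervised classifier can only flag language resembling its training examples, so a newly coined slur scores zero until the model is retrained on examples containing it.

```python
from collections import Counter

# Toy supervised hate-speech classifier (bag-of-words).
# Training texts and tokens are invented placeholders for illustration,
# not a real moderation system or real slurs.

def train(examples):
    """Build per-label word counts from (text, label) pairs."""
    vocab = {"hate": Counter(), "ok": Counter()}
    for text, label in examples:
        vocab[label].update(text.lower().split())
    return vocab

def classify(vocab, text):
    """Label text 'hate' if its words appear more often in hateful
    training examples than benign ones; unseen words contribute zero."""
    score = sum(vocab["hate"][w] - vocab["ok"][w]
                for w in text.lower().split())
    return "hate" if score > 0 else "ok"

examples = [
    ("SLUR_A people are awful", "hate"),
    ("go back SLUR_A", "hate"),
    ("lovely weather today", "ok"),
    ("great game last night", "ok"),
]
vocab = train(examples)

# A post reusing tokens seen in training is caught...
print(classify(vocab, "SLUR_A are awful"))         # 'hate'
# ...but a post built around a newly coined slur is not, until the
# classifier is retrained on labelled examples that contain it.
print(classify(vocab, "NEW_SLUR caused all this"))  # 'ok'
```

The second call is the blind spot Lee and Li (2020) describe: terms like 'Chinese Virus' that emerged after training simply do not exist in the model's vocabulary, so hateful posts using them pass as benign until the training data is updated.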
Conclusion
In conclusion, the presence and perpetuation of Hate Speech in the online environment not only targets the most vulnerable groups of society but also further marginalises them by reinforcing and amplifying discourses that are particularly harmful to them. Online Hate Speech strips minority groups of their rights and freedoms and prevents them from living a life of dignity and respect. Online platforms like Twitter and Facebook also play a crucial role in the amplification of Hate Speech discourses because of their inadequate content regulation and moderation policies and their technological affordances and designs. To build a safe online environment that does not promote Hate Speech, online platforms must look inwards and amend their operations.
References
Ben-David, A., & Matamoros-Fernandez, A. (2016). Hate Speech and Covert Discrimination on Social Media: Monitoring the Facebook Pages of Extreme-Right Political Parties in Spain. International Journal of Communication, 10, 1167-1193.
Bliuc, A.-M., & Faulkner, N. (2018). Online networks of racial hate: A systematic review of 10 years of research on cyber-racism. Computers in Human Behavior, 87, 75-86.
Castano-Pulgarín, S. A., & Suárez-Betancur, N. (2021). Internet, social media and online hate speech. Systematic review. Aggression and Violent Behavior, 58(6), 1-7.
CGTN. (2021). Another season of hate towards Asian Americans in the U.S. Retrieved from https://news.cgtn.com/news/2021-04-05/Another-season-of-hate-towards-Asian-Americans-in-the-U-S–ZdhKpmBYly/index.html
Coram, S. (2016). ‘Alchemy’ of Rights and Racism: A Critical Reading of the booing of Adam Goodes from Papua New Guinea. Journal of Australian Indigenous Issues, 19(4), 42-57.
Costello, M., & Rukus, J. (2018). We don’t like your type around here: Regional and residential differences in exposure to online hate material targeting sexuality. Deviant Behaviour, 49(3), 385-401.
Hänel, L. (2022). Germany’s battle against online hate speech. Retrieved from https://www.dw.com/en/germanys-battle-against-online-hate-speech/a-60613294
Joshi, S. (2019). Why regulating social media will not solve online hate speech. Retrieved from https://www.orfonline.org/research/why-regulating-social-media-will-not-solve-online-hate-speech-54490/
Kim, J. Y., & Kesari, A. (2021). Misinformation and Hate Speech: The Case of Anti-Asian Hate Speech During the COVID-19 Pandemic. Journal of Online Trust and Safety, 1(1), 1-14.
Konikoff, D. (2021). Gatekeepers of toxicity: Reconceptualizing Twitter’s abuse and hate speech policies. Policy & Internet, 13(4), 502-521.
Lee, R. K.-w., & Li, Z. (2020). Online Xenophobic Behavior Amid the COVID-19 Pandemic: A Commentary. Digital Government: Research and Practice, 2(1), 1-5.
Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Communication & Society, 20(6), 930-946.
Matamoros-Fernández, A., & Farkas, J. (2021). Racism, Hate Speech, and Social Media: A Systematic Review and Critique. Television & New Media, 22(2), 205-221.
Ortiz, S. M. (2019). “You Can Say I Got Desensitized to It”: How Men of Color Cope with Everyday Racism in Online Gaming. Sociological Perspectives, 62(4), 572-588.
Parekh, B. (2012). Is there a case for banning hate speech? In M. Herz, & P. Malner (Eds.), The Content and Context of Hate Speech: Rethinking Regulation and Responses (pp. 37-56). Cambridge: Cambridge University Press.
Perrie, S. (2019, May 30). Trailer For Documentary About The Racism Adam Goodes Copped In AFL Is Here. Retrieved from https://www.ladbible.com/entertainment/film-and-tv-trailer-for-documentary-about-racism-adam-goodes-copped-in-afl-drops-20190530
Ștefăniță, O., & Buf, D.-M. (2021). Hate Speech in Social Media and Its Effects on the LGBT Community: A Review of the Current Research. Romanian Journal of Communication and Public Relations, 23(1), 47-55.
Vidgen, B., Botelho, A., Broniatowski, D., Guest, E., Hall, M., Margetts, H., . . . Hale, S. (2020). Detecting East Asian Prejudice on Social Media. Proceedings of the Fourth Workshop on Online Abuse and Harms (pp. 1-12). Association for Computational Linguistics.