Online hate speech on social media platforms


Introduction

The development of the Internet is reshaping global communication, politics, and culture. The Internet has had a profound impact on people's lives, changing the way they access news, communicate with friends, and much more. Through social media, people share opinions, ideas, and information, and create and circulate images, videos, art, music, and more with family, friends, and even friends they know only online. The Internet has made people's lives easier. Despite this, hate speech proliferates online, spread across the diverse range of Internet media.

Hate speech refers to expression that is biased, discriminatory, and hateful toward individuals or groups on the basis of group identity characteristics such as race, nationality, gender, or religion, and that is guided by hateful intent. Hate speech not only harms the targeted group or individual but also has negative effects on society at large. Given how closely cyberspace and real space are now linked, and how complex the problem is becoming, addressing hate speech on the Internet is urgent.

Numerous websites incite hatred and negativity on the Internet, including public statements and sites that incite hatred against particular racial groups, ethnic groups, or religious beliefs. Insensitive remarks about immigrants and racial minorities have been criticized and linked to violent acts, fuelling public resentment of hate speech. According to Matamoros-Fernández (2017), social media platforms are under increasing public scrutiny for applying their hate speech policies inconsistently across cultural differences.

The purpose of this blog is to discuss the emergence of hate speech on social media platforms. It draws on two case studies, racist hate speech on YouTube and hate speech about COVID-19 on Twitter, to demonstrate the need to resist hate speech on the Internet and for media platforms to regulate such behavior. As Van (2013) points out, social media platforms, as the current mediators of the majority of online socialization and creativity, have been used for both pro-social and anti-social purposes. It is therefore essential that social media platforms regulate hate speech more effectively.

Racist hate speech on YouTube

The majority of the world's population now uses social media to communicate. As of 2022, YouTube had around 210 million viewers in the US alone, and it is the second-largest search engine after Google. According to GMI (2022), more than 2.3 billion people worldwide use YouTube at least once a month. With the growth of social media, and given the social and political climate in virtually every country, racist practices both old and new are becoming more prevalent. Recent years have demonstrated a wide range of racist hate speech on social media, whether explicit or covert, including the use of false identities to incite racial hatred (Farkas et al., 2018). The use of social media to incite violence against minorities and individuals has become more common, and many of these attacks are directed at activists, journalists, and individuals who dare to speak out against the disturbing views expressed on social media. Platforms such as YouTube, Facebook, and Twitter are frequent sources of hate speech with racial, ethnic, or religious overtones.

Over the past few years, hateful content posted on web 2.0 and social media sites has increased in both quantity and intensity, in line with the proliferation of hate sites. Data from one study indicate that hate speech is spreading rapidly on social media, particularly in certain parts of the United States: in recent years, slurs such as "nigger" and "faggot" have become prevalent in videos and tweets from the southern United States (Ring, 2013).

YouTube offers a valuable case study because of its high volume of user-generated content featuring hate speech. Race-based hate speech of this kind, which stigmatizes minorities, fosters racial discrimination, and may even lead to ethnic conflict, should be strictly prohibited. In fact, YouTube has rapidly become a host for videos expressing anti-Semitism, misogyny, and homophobia (Will, 2008). In particular, several channels on YouTube are dedicated to sharing racist ideas about minorities: some are associated with specific groups, while others serve as virtual gathering places where individuals upload and share hateful content.

[Image: Hammerskin Nation's logo (Samira, 2021)]

For instance, the Hammerskin Nation is a white supremacist group whose members are known for boasting, through music videos and songs, of their white pride, readiness for violence, and social dominance at the expense of others (ADL, n.d.). Beyond the channels and content that advertise hate, YouTube's comments section also frequently contains hate speech. Owing to YouTube's low posting thresholds and anonymity, extreme right-wing groups have used the platform to spread hate (Sureka, Kumaraguru, Goyal, & Chhabra, 2010).

In response, YouTube has gradually tightened its regulation of the content on its platform, and the number of videos and comments containing racial hate speech has decreased over time. YouTube executives have met with representatives from across the political spectrum. As a platform, YouTube prohibits hate speech and harassment, and promptly removes content that violates these policies (YouTube, n.d.). YouTube has also begun asking users to provide more personal information, such as gender, sexual orientation, and race, to gain insight into the experiences of different creators and viewers; it uses this information to identify patterns of hate speech, harassment, and discrimination.

COVID-19 hate speech on Twitter

The COVID-19 pandemic spread at a rapid pace, and many local authorities began locking down cities through social distancing orders to slow the spread of the virus. As a result, social norms and everyday life changed significantly, and social media platforms such as Twitter became a central point for exchanging information about the virus. Although people were living at a distance, the information they shared through social media has been influential during this global health and information crisis (Xie et al., 2020). However, some posts and comments on the Internet seem to reflect, and even amplify, the stigma, discrimination, and blatant hatred prevalent in contemporary society, especially against China and its people. According to Agarwal and Chowdary (2021), hate speech increased by 900% following the outbreak of COVID-19, and the same study found a 70% increase in the number of young people affected by hate speech between December 2019 and June 2020. In this respect, digital media has influenced and shaped people's social nature, communication, actions, and lifestyles (McLuhan, 1964).

As a social media platform, Twitter gives users the option to customize their online identities. Its anonymity, and the loss of self-awareness and disinhibition that groups promote, make it highly conducive to extreme forms of content (Festinger, Pepitone, & Newcomb, 1952). As a consequence, racist hashtags referring to COVID-19, including #kungflu, #chinesevirus, and #CommunistVirus, have appeared repeatedly on Twitter. The majority of this hate speech targets people of Asian descent in China and elsewhere in the world. Many Chinese and foreign citizens around the world have protested against racist hate speech aired in Western media, and several media outlets have apologized as a result.

[Image: Tom Cotton's hate speech (screenshot, 2022)]

Beyond the discriminatory hate speech in Western media coverage of the coronavirus, many Western politicians have sought to exploit the outbreak in China as a political opportunity to stir up anti-China sentiment. For example, US Senator Tom Cotton, a Republican known for his anti-China stance, implied that the virus could have been a biological weapon leaked from a laboratory in Wuhan, and demanded that the US immediately seal itself off from China and that all Americans leave the country. During COVID-19, hate speech has thus been used as a political tool. In responding to an epidemic, the United States cannot create prejudice against one group of people or condone racism and hate speech (Ami, 2020).

Hate speech has long threatened the integrity of digital platforms. The automated identification of various types of hate speech has progressed significantly over the years; however, popular computing approaches tend to isolate it from the community context within which it takes place (Uyheng & Carley, 2021). Meanwhile, in response to the proliferation of hate speech on its platform, Twitter has begun regulating it. Users are prohibited from using hateful images or symbols in their profile picture or profile header, and from using usernames, display names, or profile bios in abusive ways, such as to target individuals or group members or to express hatred of individuals, groups, or protected categories (Twitter, n.d.).
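To give a concrete sense of the kind of machine classification that research such as Burnap and Williams (2015) applies to tweets, the sketch below trains a minimal TF-IDF and logistic regression classifier with scikit-learn. This is an illustrative assumption rather than any platform's or author's actual system: the labelled phrases are invented, a real model would need thousands of labelled tweets, and, as Uyheng and Carley (2021) argue, it should also account for community context.

```python
# Minimal sketch of supervised hate speech classification:
# TF-IDF word/bigram features feeding a linear classifier.
# The tiny labelled dataset below is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "go back to your country",          # hateful  (label 1)
    "great match last night",           # benign   (label 0)
    "they do not belong in this city",  # hateful  (label 1)
    "thanks for sharing this recipe",   # benign   (label 0)
]
labels = [1, 0, 1, 0]

# Vectorize unigrams and bigrams with TF-IDF weighting, then fit
# a logistic regression on the labelled examples.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(texts, labels)

# Estimated probability that a new post is hateful; a platform could
# route high-scoring posts to a human moderation queue.
print(model.predict_proba(["you people should leave"])[:, 1])
```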

Conclusion

With the advancement of digital media, it is possible to share speech and content online anonymously, without fearing any repercussions. In traditional media, by contrast, content passes through an editorial process before publication, and traditional outlets typically place effective restrictions on hate speech; this mechanism does not apply to self-published content on social media platforms. These platforms generate a tremendous amount of data every day, and users interact with the posted content in large numbers. The rules and protocols the platforms have imposed have not been sufficient to prevent offensive posts with hateful content from circulating, and the anonymity social media offers tends to make people more aggressive in this environment (Burnap and Williams, 2015).

Reference list:

ADL. (n.d.). The Hammerskin Nation. Retrieved April 2, 2022, from https://www.adl.org/education/resources/profiles/hammerskin-nation/

Agarwal, S., & Chowdary, C. R. (2021). Combating hate speech using an adaptive ensemble learning model with a case study on COVID-19. Expert Systems with Applications, 185, 115632. https://doi.org/10.1016/j.eswa.2021.115632

Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130

Burnap, P., & Williams, M. L. (2015). Cyber hate speech on Twitter: An application of machine classification and statistical modeling for policy and decision making. Policy & Internet, 7(2), 223–242. https://doi.org/10.1002/poi3.85

Council of Europe. (n.d.). Online hate speech. Retrieved March 31, 2022, from https://www.coe.int/en/web/cyberviolence/online-hate-speech

Farkas, J., Schou, J., & Neumayer, C. (2018). Cloaked Facebook pages: Exploring fake Islamist propaganda in social media. New Media & Society, 20(5), 1850–1867. https://doi.org/10.1177/1461444817707759

Ring, C. E. (2013). Hate speech in social media: An exploration of the problem and its proposed solutions (Doctoral dissertation, University of Colorado at Boulder). Retrieved from https://www.proquest.com/openview/d06f852a103503a16c23d4bdd60f1848/1?pq-origsite=gscholar&cbl=18750

Samira, A. (2021). "Hammerskins" – Die Terror-Bruderschaft im Untergrund ["Hammerskins": The terror brotherhood underground]. Belltower.News. Retrieved April 6, 2022, from https://www.belltower.news/neonazi-netzwerk-hammerskins-die-terror-bruderschaft-im-untergrund-120519/

Twitter. (n.d.). Hateful conduct policy. Retrieved April 6, 2022, from https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy

Uyheng, J., & Carley, K. M. (2021). Characterizing network dynamics of online hate communities around the COVID-19 pandemic. Applied Network Science, 6(1), 1–21. https://doi.org/10.1007/s41109-021-00362-x

YouTube. (n.d.). How does YouTube protect the community from hate and harassment? Retrieved April 2, 2022, from https://www.youtube.com/intl/en_be/howyoutubeworks/our-commitments/standing-up-to-hate/