Why can't online hate speech be entirely curbed?

With the popularization of the Internet and the rapid development of social media, access to information has become more convenient than ever (Chetty & Alathur, 2018). However, the convenience of the digital age has also, indirectly, enabled some groups to spread online hate speech unchecked (Konikoff, 2021). Online hate speech typically takes the form of posts that use malicious language to discriminate against and incite hatred toward others, usually targeting gender, race, sexual orientation, or disability (Konikoff, 2021). These groups are able to vent and spread hatred across various social media platforms largely because of the anonymity and low publishing thresholds those platforms offer (Matamoros-Fernández, 2017).

 

What leads to hate speech?

The anonymity of users on online platforms contributes to abusive speech and behavior, partly because account registration is so loosely regulated. To let users be both candid and entertained, the community website Reddit allows people to create accounts anonymously, with no mandatory real-name requirement (Massanari, 2017). It is also very easy for a user to create one or more accounts on Reddit (Massanari, 2017). And although Reddit is entitled to ban accounts that publish malicious content, users whose accounts have been banned can simply create new accounts and keep participating (Massanari, 2017). This makes it easy for malicious users to continue sending offensive posts and messages anonymously, a problem that can only be resolved by administrator intervention (Massanari, 2017).

 

In my opinion, the anonymity promoted by these social networking sites and platforms is one of the main reasons for the rise in online abuse (Matamoros-Fernández, 2017). On these platforms, people communicate from behind a screen, without ever seeing each other face-to-face (López & López, 2017). When people must talk face-to-face, what they say differs from what they say when a screen hides their genuine identity (López & López, 2017). Because faceless conversation carries no cost, it encourages people to spread malicious remarks indiscriminately (López & López, 2017). López and López (2017) emphasize that malicious users take advantage of the anonymity of social platforms to freely express extreme opinions and insults. Platforms such as Twitter have likewise been exploited by malicious users who hide behind anonymity policies to cover up their behavior, leaving more users targeted and attacked (Konikoff, 2021). It is clear, then, that the policy choices of social media sites and platforms play a considerable role in shaping online hate speech (Matamoros-Fernández & Farkas, 2021).

 

In addition to anonymity, the low threshold for posting creates a convenient channel for the dissemination of online hate speech. Public opinion increasingly demands that social media platforms change and improve their content moderation (Matamoros-Fernández, 2017). Content moderation is an algorithmic system that analyzes user-created content through prediction and matching in order to decide whether to retain published content or block a user's account (Gorwa et al., 2020). Yet according to Matamoros-Fernández (2017), despite content moderation on various platforms, malicious users still abuse platform features, spread hate speech, and create harmful content. This implies that moderation either sets the bar too low or contains loopholes that these users exploit.
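To make Gorwa et al.'s (2020) "matching and prediction" distinction concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is hypothetical: the blocklist, the crude heuristic standing in for a trained classifier, and the threshold are invented for illustration, not drawn from any real platform's system.

```python
# Illustrative sketch of "matching and prediction" moderation
# (Gorwa et al., 2020). All names, lists, and thresholds here are
# hypothetical placeholders, not any platform's real pipeline.

BANNED_PHRASES = {"example slur", "another banned phrase"}  # placeholder list

def matches_known_content(post: str) -> bool:
    """Matching: compare the post against a blocklist of known bad content."""
    text = post.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

def predicted_toxicity(post: str) -> float:
    """Prediction: a stand-in for a trained classifier's toxicity score.
    Real systems use machine-learned models; this crude heuristic only
    illustrates the idea of scoring unseen content."""
    hostile_markers = ("hate", "disgusting", "get out")
    hits = sum(marker in post.lower() for marker in hostile_markers)
    return min(1.0, hits / len(hostile_markers))

def moderate(post: str, threshold: float = 0.6) -> str:
    """Decide whether to keep or remove a post."""
    if matches_known_content(post) or predicted_toxicity(post) >= threshold:
        return "remove"
    return "keep"

print(moderate("What a lovely day"))    # keep
print(moderate("I hate you, get out"))  # remove
```

Even in this toy version, the loophole is visible: any hateful phrasing that is neither on the blocklist nor caught by the predictor sails through, which is precisely the gap abusers probe for.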

 

To counteract the dissemination of malicious content, Twitter has adopted a dual safeguard in which users and moderators jointly review and monitor content (Konikoff, 2021). Even so, content moderation remains a notoriously secretive process, opaque to users. Platform moderation mechanisms report and manage controversial content within narrow limits, without clearly showing or discussing why some content is undesirable (Matamoros-Fernández, 2017). In one infamous case, the American game developer Zoe Quinn was falsely accused by her ex-boyfriend of having slept with a reporter in exchange for a positive review of her game (Romano, 2021). The accusation led crowds of malicious users to abuse Quinn online and to hunt for her home address (Romano, 2021). Although social platforms warn users against such behavior, their systems have real limitations in judging which speech should be cracked down on, leaving some victimized women to constantly explain themselves and provide evidence in order to stay safe from doxxing and real-world violence (Romano, 2021).

Zoe Quinn gives a speech at the 2017 BookExpo in New York City.

 

What leads to anonymity and the low threshold for posting?

Social media platforms do resist online hate speech to some extent, but to protect their own economic interests their control remains lax. Platform policies are exploited by malicious users because those policies are geared more toward reaching broad audiences than toward protecting potential victims (Massanari, 2017). Massanari (2017) further argues that platforms' governance methods and policies allow this malicious behavior and culture to persist. Reddit, for example, depends heavily on harvesting traffic and profit from content that users post for free, so the platform mounts little resistance when malicious posts are published (Massanari, 2017). Moreover, platform policies allow algorithms to track users' behavior and learn their preferences in order to increase engagement, which also means malicious posts get recommended to ever more users for browsing and discussion (Matamoros-Fernández, 2017).
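The amplification dynamic can be pictured with a deliberately simplified sketch, not any platform's actual ranking code. The weights and example feed below are invented; the point is only that a ranking function rewarding clicks, shares, and comments never looks at what a post actually says.

```python
# Hypothetical sketch of engagement-driven recommendation. Weights and
# data are invented; nothing here inspects the content of a post.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(p: Post) -> float:
    # Invented weights: shares and comments signal "discussion", which is
    # exactly what inflammatory posts tend to provoke.
    return p.likes + 2 * p.shares + 3 * p.comments

def recommend(feed: list[Post], k: int = 2) -> list[Post]:
    # Rank purely by engagement, blind to meaning.
    return sorted(feed, key=engagement_score, reverse=True)[:k]

feed = [
    Post("Cute dog photo", likes=120, shares=5, comments=10),
    Post("Inflammatory attack on a minority group",
         likes=90, shares=60, comments=200),
    Post("Local news update", likes=40, shares=8, comments=3),
]
for p in recommend(feed):
    print(p.text)  # the inflammatory post ranks first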

 

Secondly, the managers of social media platforms decline to interfere with user content or intrude on user practices. According to Massanari (2017), apart from basic rules against sharing pornographic images, interfering with the operation of the site, and disseminating personal information, the platform does not interfere with user-generated content. Whatever the impact of hate speech, administrators are reluctant to target and alienate users (Massanari, 2017), because once administrators step in, the platform loses a certain amount of revenue and pageviews (Massanari, 2017). It is worth noting that just a few days' worth of users' content subscriptions can generate enough money to run the entire platform (Massanari, 2017). One can imagine, then, how serious the losses would be if administrators offended more users by policing content.

 

Thirdly, the managers of social media platforms delegate responsibility to individual users, which indirectly fuels online hate speech. Although administrators are wary of their financial interests, they maintain that a hands-off approach to content disputes better safeguards the interests of the majority of users (Massanari, 2017). Social platforms such as Twitter therefore hand some of the moderator's responsibilities to the public, letting users decide whether post content should be deleted or retained (Konikoff, 2021). Twitter's policy imposes only mild punishments when users report hate speech, and many abusers exploit this delegated gatekeeping to escape punishment and spread yet more malicious speech (Konikoff, 2021). In effect, this series of moves shows Twitter's administrators and the platform itself trying to absolve themselves of responsibility. Platform policies leave users with no recourse but the administrators, and if the administrators refuse responsibility for content control, malicious speech on the Internet simply goes unaddressed (Massanari, 2017).

 

 

How to eliminate hate speech rooted in racial and gender discrimination?

Racial and gender discrimination is the source from which online hate speech spreads, and social platforms can only reduce such discrimination by confronting the underlying prejudice (Matamoros-Fernández, 2017). Nextdoor, a social networking site where users interact with their neighbors, has built an anti-discrimination design into the site (Matamoros-Fernández, 2017). Before a user posts content flagged as discriminatory, the design displays a banner holding the user accountable for racial exclusion (Matamoros-Fernández, 2017). This can indeed prick the conscience of some users and thereby reduce discriminatory hate speech. But responses to such banners are self-reported and subjective, so some abusers can game them by deliberately giving false answers, distorting the signals the platform's algorithms depend on.
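As a rough mock-up of how such a pre-posting banner might work, and of why self-reported confirmation is easy to game, consider the sketch below. The flagging heuristic, the term list, and the banner text are all invented for illustration; only the banner design itself comes from Matamoros-Fernández (2017).

```python
# Hypothetical mock-up of a Nextdoor-style pre-posting intervention.
# Terms and messages are placeholders invented for illustration.

SUSPICION_TERMS = ("suspicious person", "doesn't belong here")  # placeholders

def looks_discriminatory(draft: str) -> bool:
    """Crude stand-in for whatever flagging logic the platform uses."""
    text = draft.lower()
    return any(term in text for term in SUSPICION_TERMS)

def submit_post(draft: str, user_confirms: bool = False) -> str:
    """Show an accountability banner before a flagged post goes live."""
    if looks_discriminatory(draft):
        banner = ("Describing someone as suspicious based only on race is "
                  "racial profiling. You are accountable for what you post.")
        if not user_confirms:
            return f"BLOCKED PENDING CONFIRMATION: {banner}"
    return "Post published."

print(submit_post("Saw a suspicious person on Elm Street"))
# A determined abuser can simply confirm and post anyway:
print(submit_post("Saw a suspicious person on Elm Street", user_confirms=True))
```

The second call shows the weakness noted above: because the final gate is the user's own answer, the intervention appeals to conscience rather than enforcing anything.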

 

Moreover, many social media platforms have policies prohibiting and regulating racial and gender discrimination, but these same policies can become an umbrella under which abusers post hate speech (Matamoros-Fernández, 2017). Platforms like Facebook use country-specific blocking features, yet they do not explain in detail what separates a malicious post from a humorous one, which has allowed some users to dress discriminatory hate speech up as humor and go unpunished (Matamoros-Fernández, 2017). This indirect protection of humorous discriminatory speech is deeply problematic (Matamoros-Fernández, 2017): once abusers discover they can post hate speech this way without being regulated, this type of speech proliferates online (Matamoros-Fernández, 2017).
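Country-specific blocking can be pictured as a simple visibility table: the post is never removed, only hidden from viewers in certain countries, and it keeps circulating everywhere else. The sketch below is hypothetical, with invented identifiers and country codes.

```python
# Hypothetical illustration of country-specific blocking: content is
# geo-hidden rather than deleted. Identifiers are invented examples.

BLOCKED_IN = {"post_123": {"DE", "FR"}}  # hypothetical geo-block table

def visible_to(post_id: str, viewer_country: str) -> bool:
    """A post is shown unless the viewer's country is on its block list."""
    return viewer_country not in BLOCKED_IN.get(post_id, set())

print(visible_to("post_123", "DE"))  # False: hidden where it is restricted
print(visible_to("post_123", "US"))  # True: still spreading elsewhere
```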

 

How to raise the threshold for content publishing on social platforms?

In my opinion, raising the publishing threshold of social media platforms requires not only that platforms formulate more detailed and explicit rules for reviewing hate speech, but also that they make the algorithmic review process more transparent, so that users can clearly understand which content is acceptable (Matamoros-Fernández, 2017).

 

The social media platform Twitter uses sensitive-media filters to screen out posts with sensitive content before users browse and share them (Matamoros-Fernández, 2017). However, when a hate speech post is liked and shared by large numbers of users, that engagement can skew the algorithm's moderation judgment (Matamoros-Fernández, 2017). So although filters contribute to online safety, an unclear regulatory system still hands abusers a convenient loophole (Gorwa et al., 2020). Social media platforms should therefore put more effort into closing the loopholes in their filters and making the rules explicit.
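One way such a loophole can arise is if a filter naively treats popularity as evidence of acceptability. The following sketch is an invented example of that design flaw, not a description of Twitter's actual filter: a heavily liked and shared post slips below the removal threshold despite a high toxicity score.

```python
# Invented illustration of a flawed filter in which engagement discounts
# the toxicity score. Weights and thresholds are hypothetical.

def filter_decision(toxicity: float, likes: int, shares: int,
                    threshold: float = 0.7) -> str:
    # Design flaw: high engagement is read as a sign the content is fine.
    popularity_discount = min(0.3, (likes + shares) / 10_000)
    effective = toxicity - popularity_discount
    return "hide" if effective >= threshold else "show"

print(filter_decision(toxicity=0.9, likes=50, shares=10))     # hide
print(filter_decision(toxicity=0.9, likes=2500, shares=800))  # show: slips through
```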

 

Besides, when platforms process and delete posts containing hate speech, they do not explain to users why the content was unacceptable; the moderation process is opaque (Matamoros-Fernández, 2017). Nextdoor offers an insightful resolution to this problem: when it deletes a post, it attaches the reason for the deletion, which both raises people's awareness of hate speech and makes content moderation transparent (Matamoros-Fernández, 2017). Generally speaking, I consider that increased transparency is no panacea, but it is very useful for social media platforms in reducing online hate speech (Gorwa et al., 2020).
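A minimal sketch of this transparency practice might look like the following, where every removal notice carries the specific rule that was violated instead of a silent deletion. The rule names and message format are hypothetical, not Nextdoor's actual implementation.

```python
# Hypothetical sketch of transparent removal notices. Rule names and
# message wording are invented for illustration.

REMOVAL_REASONS = {
    "racial_profiling": "The post described a person as suspicious based only on race.",
    "personal_attack": "The post insulted another member rather than discussing an issue.",
}

def remove_post(post_id: int, rule: str) -> str:
    """Return the user-facing notice that accompanies a deletion."""
    reason = REMOVAL_REASONS.get(rule, "Violation of community guidelines.")
    # Attaching the reason makes moderation legible instead of opaque.
    return (f"Your post {post_id} was removed. Reason: {reason} "
            f"See the guideline on '{rule}' for details.")

print(remove_post(42, "racial_profiling"))
```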

 

How to strike a balance between anonymity and freedom of speech?

The anonymity of social media platforms can actually help users escape gender- and race-based attacks to a certain extent, because choosing to be anonymous shields them from being targeted (Matamoros-Fernández, 2017). However, because social platforms promote freedom of speech on the Internet, some abusers use that same anonymity to evade responsibility and spread hate speech all the more blatantly (Konikoff, 2021). Even when victims are attacked by malicious remarks, social platforms will not replace anonymity with a real-name system, since in their view freedom of speech rests on anonymity; instead, many platforms evade responsibility by leaving the choice between free expression and personal safety to users themselves (Konikoff, 2021). I therefore believe that social media platforms should find a more balanced approach to protecting both users and free speech, improving their content gatekeeping to reduce online hate speech while freeing users from heavy-handed restrictions (Konikoff, 2021).

 

Conclusion

By examining the relationship between online hate speech and social media platforms, this blog points toward a direction for future research. It urges us to seek a balance between user safety and freedom of speech by closing the loopholes in filters, clarifying the rules for content review, and increasing the transparency of the review process, thereby reducing the spread of online hate speech.

 

 

References

Chetty, N., & Alathur, S. (2018). Hate speech review in the context of online social networks. Aggression and Violent Behavior, 40, 108–118. https://doi.org/10.1016/j.avb.2018.05.003

Drozdova, A. (n.d.). [Image]. Getty Images. https://www.gettyimages.com/detail/illustration/the-guy-is-discussed-and-insulted-on-social-royalty-free-illustration/1167181868?adppopup=true

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1). https://doi.org/10.1177/2053951719897945

Konikoff, D. (2021). Gatekeepers of toxicity: Reconceptualizing Twitter’s abuse and hate speech policies. Policy & Internet. https://doi.org/10.1002/poi3.265

Lamparski, J. (2017). [Photograph]. Getty Images. https://www.gettyimages.com.au/detail/news-photo/zoe-quinn-speaks-during-the-first-amendmant-resistance-news-photo/691236712

López, C. A., & López, R. M. (2017). Hate speech, cyberbullying and online anonymity. In H. Aristar-Dry & D. Springs (Eds.), Online hate speech in the European Union: A discourse-analytic perspective (pp. 80–83). Springer International Publishing. https://doi.org/10.1007/978-3-319-72604-5

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807

Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130

Matamoros-Fernández, A., & Farkas, J. (2021). Racism, Hate Speech, and Social Media: A Systematic Review and Critique. Television & New Media, 22(2), 205–224. https://doi.org/10.1177/1527476420982230

Romano, A. (2021, January 7). What we still haven't learned from Gamergate. Vox. https://www.vox.com/culture/2020/1/20/20808875/gamergate-lessons-cultural-impact-changes-harassment-laws