Introduction
After decades of development, the Internet has reached unprecedented scale. Increasingly sophisticated technology makes it easier than ever to reach large audiences on online social platforms, and this has brought real benefits: people are free to express their opinions and to influence others online. There is no doubt that freedom of expression is protected by international and domestic law as a fundamental human right (United Nations, n.d.). At the same time, online hate speech, its constant companion, has become an issue of serious concern on major digital platforms. Hate speech is, unfortunately, difficult to define: it covers many forms of expression that advocate, incite, promote, or justify hatred, violence, and discrimination against a person or group of persons on a variety of grounds (Hate speech and violence, n.d.). Such speech can easily be posted on major social media platforms in the name of freedom of expression. It not only intensifies conflict between different groups but can also lead to problems such as hate crimes and racial discrimination, with deeply negative consequences for social stability and people’s well-being.

Image source: https://images.app.goo.gl/Etkv2nUe8aHLcNv98
Therefore, to maintain a stable social order, hate speech must be effectively restricted by law. In practice, however, differences between national laws and the absence of an international treaty regime make the problem harder to address. In many cases, conflicts between countries themselves fuel online hate speech, harassment, and harm. In addition, as the venues where online expression takes place, social media platforms such as Facebook, Twitter, YouTube, and TikTok bear an inescapable responsibility: they should provide a safe space for users to express their opinions. At present, however, the major platforms are not doing a satisfactory job. In one survey, for example, 41% of Americans said they had experienced online hate and harassment in 2021 (Online Hate and Harassment: The American Experience 2021, 2022). This blog therefore draws on examples of hate speech across platforms to discuss the current state of the problem and how to reduce and avoid hate speech when participating in online discussions.
Is hate speech far from us?
In the age of digital media, people can publish their ideas with ease. The Internet allows speech and content to be shared anonymously, often without consequence (Research Outreach, 2020), and as a result hate speech now occurs almost every day. It has increased markedly since the COVID-19 pandemic began: according to one report, instances of and discussions around online hate rose by 38% after March 2020 (Uncovered: Online Hate Speech in the Covid Era, n.d.), and online hate speech increased by 20% in the UK and the US (Baggs, 2021). One reason is that quarantine measures pushed people to spend more time online during an already anxious period, and that anxiety spread quickly through the Internet.
This has been particularly evident in hate speech directed at Asians on platforms such as Twitter and Facebook. As is well known, the COVID-19 outbreak began in Wuhan, China, and then spread around the world, and fear of an unknown virus fostered racial hatred of Asians online. Anti-Asian hate speech increased by 1,662% in 2020, a trend that peaked around the declaration of the COVID-19 pandemic in March 2020 and was fueled by rhetoric referring to the virus as the “Chinese virus” or “Kung flu” (Woollacott, 2021). Many Asians received verbal abuse or hateful images online simply because of their Asian faces. And this hatred has not been confined to curses and threats on the Internet; it has spilled into violence offline. Attacks such as the slashing of a Filipino American man’s face on the New York subway (Medenilla, 2022) and the murder of six Asian women who worked in massage parlors in Atlanta (Fausset et al., 2021) have left Asian communities in a state of panic, fearing further harm. These cases force people to confront what online hate speech leads to: actions that go far beyond the bounds of freedom of speech, infringe on the basic rights of others, and cause social unrest.
Some of this hate speech has been led from the top. Former US President Trump repeatedly used terms such as “Chinese virus” and “Kung Flu” on public occasions, which undoubtedly exacerbated the spread of hate speech (Lee, 2020). Influential political figures should clearly be more careful about what they say and do, because they shape currents of public opinion; once an inappropriate remark begins to ferment, the consequences can become uncontrollable. The combination of the pandemic and this negative rhetoric has produced a nasty bout of racism and abuse on the Internet.
Moreover, online hate speech against Asians is not an isolated case of hatred between races or countries. During the recent conflict between Russia and Ukraine, Facebook was flooded with hate speech directed at Russians and various forms of “Russophobia”. It is worth noting that Facebook and Instagram temporarily allowed users in certain countries to post normally prohibited content, including calls to harm or even kill Russian soldiers or politicians (Lawler, 2022). Subsequently, Instagram and Facebook were banned in Russia after their parent company Meta was declared an “extremist” organization by a Moscow court (Euronews, 2022). It is clear that social media platforms play a key role in the spread of hate speech.

Image source: https://images.app.goo.gl/hRdSHH1RWcrg1ZXH6
Does social media fuel online hate speech?
Social platforms play an important role in the dissemination of information. As a core element of modern communication strategy, they serve as powerful tools in business, politics, and even war (Verma, Upadhyay & Heramb, 2021). Yet the content these platforms present to individuals is often neither complete nor truthful.
First, most social platforms are commercial businesses. Simply put, their business model is to get more people to spend more time on the platform (Laub, 2019). What kinds of topics catch users’ attention? Inevitably, controversial ones. These operating rules can be exploited by conspiracy theorists, hate groups, and others to build organized, premeditated channels for hate propaganda. At the same time, once controversial topics such as gender, race, and sexual orientation heat up online, many users discover they can win extra attention from them; controversy becomes a shortcut to influence (Chetty & Alathur, 2018), and influential accounts can attract advertising or other benefits. As a result, for a variety of reasons, ever more egregious speech hardens into online hate speech, which slips easily onto online platforms.
In addition, algorithms and AI have become part of how social platforms operate, which means the platforms also inherit algorithmic and technical flaws. Platforms like Instagram, Facebook, and YouTube are not neutral, undifferentiated exchanges of information. After constant iterations of technology and algorithm upgrades, the content a user sees is no longer a full picture: platforms customize what they push to each user according to personal history, preferences, and past behavior. This creates a ‘filter bubble’, in which the information individuals access is selected and one-sided (Pariser, 2011). As a result, users tend to see only what they already want to see. For example:
“YouTube may be one of the most powerful radicalizing instruments of the 21st century,” writes sociologist Zeynep Tufekci (2018).
YouTube’s autoplay feature, in which the player starts a related video as soon as the current one ends, can be particularly harmful. According to a Wall Street Journal investigation (Nicas, 2018), the algorithm drives people to watch videos that promote conspiracy theories or are “divisive, misleading, or false.”
Such algorithms draw more users into platform activity, but they also make encountering and spreading hate speech more likely, as the simple feedback loop sketched below illustrates.
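To make that feedback loop concrete, here is a minimal Python sketch. It is not any platform’s actual algorithm; the topics, posts, and scoring rule are invented purely for illustration. It simply ranks candidate posts by how often the user has engaged with each topic before, and shows how a single early click keeps pulling the same kind of content back to the top of the feed.

```python
from collections import Counter

# Hypothetical candidate posts, each tagged with one topic (labels invented for illustration).
candidates = [
    {"id": 1, "topic": "outrage"},
    {"id": 2, "topic": "cooking"},
    {"id": 3, "topic": "sports"},
    {"id": 4, "topic": "outrage"},
    {"id": 5, "topic": "music"},
]

def rank(posts, history):
    """Rank posts by how often the user previously engaged with their topic."""
    seen = Counter(p["topic"] for p in history)
    return sorted(posts, key=lambda p: seen[p["topic"]], reverse=True)

history = []
for step in range(4):
    feed = rank(candidates, history)
    clicked = feed[0]          # assume the user clicks whatever is shown first
    history.append(clicked)    # that click feeds the next round of ranking
    print(f"round {step}: top recommendation -> {clicked['topic']}")

# The topic of the very first click keeps being recommended in every later round:
# the feedback loop itself, not any editorial decision, narrows what the user sees.
```

Even this crude loop shows why divisive content, once engaged with, keeps resurfacing: the ranking optimizes for predicted engagement rather than for balance.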
Who governs hate speech?

Image source: https://images.app.goo.gl/ZksXuHWPJNY5P9oj6
For most ordinary users, reducing and curbing hate speech is clearly necessary, because such speech seriously damages both the online environment and people’s mental health. For governments and platforms, creating a healthy Internet environment is critical to maintaining a stable society. On this point there is broad agreement. Nevertheless, ordinary users are at an absolute disadvantage when confronting hate speech, so nation-state governments, non-governmental organizations, and companies must each bear their share of responsibility for governance (Gorwa, 2019).
The official attitude
The International Convention on the Elimination of All Forms of Racial Discrimination (1965) obliges States to “condemn racial discrimination” and to adopt measures aimed at “eliminating all forms of racial discrimination and promoting understanding among all races,” while committing “not to sponsor, defend or support racial discrimination by any person or organization.” States are also required to “prohibit and bring to an end, by all appropriate means… racial discrimination by any persons, group or organization” and to “prevent anything that may exacerbate racial division.”
The Committee on the Elimination of Racial Discrimination (CERD) was set up to oversee the implementation of these provisions, and a number of organizations, such as the Anti-Defamation League in the US, the Online Hate Prevention Institute in Australia, and the No Hate Speech Movement in Europe, have gradually been established. However, individual countries all have different laws, and direct regulation is difficult in practice. Thus, while all three parties share responsibility for online hate speech, platform self-governance has the most direct impact. In practice, this is not an easy task.
Social platform practices
Different social platforms have different rules when it comes to hate speech. Most will delete such content or silence the poster, which requires them to screen for harmful information. The major platforms rely on a combination of artificial intelligence, user reporting, and staff known as content moderators to enforce their rules about appropriate content (Laub, 2019). Nevertheless, they do not do the job very well. First, moderation imposes real economic costs on commercial companies, especially human moderation: human moderators can more accurately identify ambiguous hate speech or controversial images, but they mean more employees and more money. Nor is the job itself satisfying. Moderators spend dreary days labeling thousands of messages at a screen, and extreme content and images can damage their mental health; some content moderators, after repeated exposure to certain material, begin to accept the fringe views in the very videos and memes they are supposed to be moderating (Newton, 2019).
Companies are actively promoting artificial intelligence moderation to reduce these disadvantages, but does that solve everything? AI is not that ‘intelligent’: it struggles to understand the meaning behind words, which leads to mislabeling. It may delete harmless content while letting truly toxic content through. Furthermore, AI also struggles to recognize hate speech in non-English languages.
According to Sinpeng and Martin (2021), for example, Facebook’s filters on Indian pages failed to catch vomit emojis posted in response to photos of a gay wedding, and the platform rejected reports of some very explicit defamatory content.
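To illustrate why automated filters mislabel content, here is a toy sketch of keyword-based screening in Python. It is emphatically not Facebook’s real classifier; the blocklist and example posts are invented. It flags an innocent idiom and a post reporting abuse, while missing an obfuscated slur and a hostile emoji reply of the kind described above.

```python
# Toy keyword filter: invented blocklist and examples, not any platform's real system.
BLOCKLIST = {"kill", "vermin"}

def flag(post: str) -> bool:
    """Flag a post if any word (after stripping punctuation) is on the blocklist."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return bool(words & BLOCKLIST)

posts = [
    "This commute is going to kill me, I swear.",             # harmless idiom   -> flagged (false positive)
    "I'm reporting a post telling people to kill refugees.",  # report of abuse  -> flagged (false positive)
    "People like them are v3rmin and should get out.",        # obfuscated slur  -> missed  (false negative)
    "🤮 🤮 🤮",                                                  # hostile emoji reply -> missed (false negative)
]

for p in posts:
    print(flag(p), "|", p)
```

Real moderation systems are far more sophisticated than this, but the underlying failure mode is the same: without understanding context, intent, spelling variants, emoji, or non-English text, a filter both over-blocks and under-blocks.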
Thus, there is still a long way to go in governing and curbing online hate speech.
Conclusion
Online hate speech happens close to every one of us. Its targets may be singled out for their skin color, gender, religion, or other characteristics, and as members of the online space we cannot be sure we will not be the next target. Ordinary people therefore need to stay vigilant against hate speech and hold on to sound values. Hate speech only deepens people’s insecurity and social instability; it does little to solve the real problems we face. Especially in this pandemic era, with everyone spending more time online, it is critical to reduce its spread. As things stand, effective supervision remains unsatisfactory. It is to be hoped that the major social platforms will take a stronger and fairer stance against online hate speech in the future, so as to create a comfortable and safe online space.
References
Baggs, B. M. (2021, November 15). Online hate speech rose 20% during pandemic: “We’ve normalised it.” Retrieved April 1, 2022, from BBC News website: https://www.bbc.com/news/newsbeat-59292509
Euronews. (2022, March 21). Ukraine war: Facebook temporarily allows posts calling for violence against Russians. Retrieved April 1, 2022, from Euronews website: https://www.euronews.com/next/2022/03/21/ukraine-war-facebook-temporarily-allows-posts-calling-for-violence-against-russians-or-put
Fausset, R., Robertson, C., Bogel-Burroughs, N., & Keenan, S. (2021, March 18). Suspect in Atlanta Spa Attacks Is Charged With 8 Counts of Murder. The New York Times. Retrieved from https://www.nytimes.com/2021/03/17/us/atlanta-shooting-spa.html
Flew, T. (2021). Regulating platforms. Cambridge, UK: Polity Press.
Gorwa, R. (2019). The platform governance triangle: conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407
Hate speech and violence. (n.d.). Retrieved March 31, 2022, from European Commission against Racism and Intolerance (ECRI) website: https://www.coe.int/en/web/european-commission-against-racism-and-intolerance/hate-speech-and-violence
International Convention on the Elimination of All Forms of Racial Discrimination. (1965). Retrieved April 2, 2022, from OHCHR website: https://www.ohchr.org/en/instruments-mechanisms/instruments/international-convention-elimination-all-forms-racial
Laub, Z. (2019, April 11). Hate Speech on Social Media: Global Comparisons. Retrieved March 31, 2022, from Council on Foreign Relations website: https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons
Lawler, R. (2022, March 10). Facebook allows posts with violent speech toward Russian soldiers in specific countries. The Verge. Retrieved from https://www.theverge.com/2022/3/10/22970705/russia-ukraine-moderation-facebook-instagram-hate-speech-violence-policy
Lee, B. Y. (2020, June 24). Trump Once Again Calls Covid-19 Coronavirus The ‘Kung Flu.’ Forbes. Retrieved from https://www.forbes.com/sites/brucelee/2020/06/24/trump-once-again-calls-covid-19-coronavirus-the-kung-flu/?sh=762c87651f59
Medenilla, R. (2022, March 26). Filipino American man slashed in the face while riding NYC subway. Retrieved April 1, 2022, from Asian Journal Media Group website: https://www.asianjournal.com/usa/newyork-newjersey/filipino-american-man-slashed-in-the-face-while-riding-nyc-subway/
United Nations. (n.d.). Universal Declaration of Human Rights. Retrieved March 31, 2022, from United Nations website: https://www.un.org/en/about-us/universal-declaration-of-human-rights
Newton, C. (2019, February 25). The secret lives of Facebook moderators in America. The Verge. Retrieved from https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona
Nicas, J. (2018, February 7). How YouTube drives viewers to the Internet’s darkest corners. The Wall Street Journal. Retrieved April 2, 2022, from https://www.wsj.com/articles/how-youtube-drives-viewers-to-the-internets-darkest-corners-1518020478
Online Hate and Harassment: The American Experience 2021. (2022). Retrieved March 31, 2022, from Anti-Defamation League website: https://www.adl.org/online-hate-2021
Research Outreach. (2020, February 26). Hate speech regulation on social media: An intractable contemporary challenge. Retrieved from https://researchoutreach.org/articles/hate-speech-regulation-social-media-intractable-contemporary-challenge/
Pariser, E. (2011). The filter bubble: what the Internet is hiding from you. London: Viking.
Sinpeng, A., & Martin, F. R. (2021, July 5). How Facebook’s failure to pay attention to non-English languages is allowing hate speech to flourish. The New Indian Express. Retrieved from https://www.newindianexpress.com/world/2021/jul/05/how-facebooks-failure-to-pay-attention-to-non-english-languages-is-allowing-hate-speech-to-flourish-2325801.html
Tufekci, Z. (2018, March 10). YouTube, the great radicalizer. The New York Times. Retrieved from https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html
Uncovered: Online Hate Speech in the Covid Era. (n.d.). Retrieved April 1, 2022, from Brandwatch website: https://www.brandwatch.com/reports/online-hate-speech/view/
Verma, Upadhyay, & Heramb. (2021, July 5). Social media platforms: Pressing need for neutrality. Retrieved April 1, 2022, from The Daily Guardian website: https://thedailyguardian.com/social-media-platforms-pressing-need-for-neutrality/
Woollacott, E. (2021, November 15). Anti-Asian Hate Speech Rocketed 1,662% Last Year. Forbes. Retrieved from https://www.forbes.com/sites/emmawoollacott/2021/11/15/anti-asian-hate-speech-rocketed-1662-last-year/?sh=27279d3d59f1