Introduction
Ethnic discrimination and hate speech have run rampant in cyberspace in recent years. As smartphone users flocked to social network discussions on forums, blogs, Facebook, Twitter, and other social media platforms, the spread of hate speech accelerated, directly fueling the rapid growth of the groups behind it. Freedom of speech allows people to discuss their beliefs, thoughts, and ideas openly, but this freedom has limits. Hate speech, by contrast, incites harm and violence against others and disregards those limits. There is much debate about freedom of speech, hate speech, and hate speech legislation. This essay discusses the issues facing technology platforms and how they deal with hate speech, drawing on reliable data and case studies.
Hate speech “expresses, encourages, or incites hatred against a characteristic or set of characteristics, such as race, ethnicity, gender, religion, national origin, and sexual orientation” (Parekh, 2012, p. 40, as cited in Flew, 2021). Parekh (2012, as cited in Flew, 2021) identified three main features of hate speech: it is directed at a specific, easily identifiable individual or, more commonly, at a group; it singles them out on the basis of characteristics that are arbitrary or normatively irrelevant; and it stigmatizes the target as an undesirable presence.
Some robust data on online hate speech
Anyone can be a victim of online hate speech. According to a 2021 Pew Research Center study (Vogels, 2021a), about 41% of surveyed Americans have experienced online harassment; overall, 20% of Americans have been harassed online for their political views, and growing shares report being harassed because of their gender or their racial and ethnic background. About half of those harassed do not know who the perpetrator is, as abusers often create fake accounts, use aliases, or take other steps to hide their identity, making it difficult for victims to trace them. Platforms’ authenticity mechanisms, and their accommodation of anonymity and pseudonyms, contribute to online abuse (Matamoros-Fernández, 2017).
Online hate speech can target people anywhere on the Internet, but social media sites are consistently the most common place for Americans to experience harassment online. Still, harassment is frequent in other online spaces as well. According to a September 2020 Pew Research Center survey (Vogels, 2021b), three-quarters of American adults who face online harassment say it occurs on social media. But such experiences also occur elsewhere, including on forums or discussion sites (25%), texting or messaging apps (24%), online gaming platforms (16%), personal email accounts (11%), and online dating apps (10%).
Have online platforms become hotbeds of hate speech?
The Internet provides a relatively unconstrained platform for people to speak out, bringing hate speech into public view. As Flew (2021) points out, even though people attach great importance to freedom of speech as indispensable to liberty of thought and as an instrument of human development, political life, and knowledge, hate speech is repugnant because it breeds mistrust and hostility in society and denies the targeted group its human dignity. In some jurisdictions, hate speech and cyber violence even enjoy legal protection, and people can attack an individual or a group without consequence.
Hatred may seem remote from ordinary people’s lives, but it is unfortunately becoming more and more commonplace. As hatred is normalized, some extremists find it easier to gain support through social networks. Extremists who do not know each other in real life seek each other out online and attempt to strike “together”, producing waves of related terrorist attacks. The ethnic specificity of racism becomes entangled with the medium specificity of platforms and their cultural values. Matamoros-Fernández (2017) proposes that this entanglement has given rise to a new form of racism expressed through social media: “platformed racism”.
These conditions have become even more acute since Mr. Trump took office, with many supporters posting a flood of hate speech against minorities on anonymous online forums such as 4chan and 8chan. According to a CNN investigation, Google searches for hate speech spiked in the wake of the Pittsburgh synagogue massacre, and anti-Semitic searches rose sharply again after the synagogue shooting in Poway, California (Simon & Sidner, 2019). Another study found that an abusive or problematic tweet about a woman is posted on Twitter every 30 seconds, with black women being the main targets (Amnesty, 2018).
Social media has become perhaps the main outlet through which hate speech is vented and spread, and more and more users and media outlets have begun calling for the regulation of online platforms. “Free speech” should not be used as an excuse to avoid hard choices. While users have the right to express ignorant or misleading opinions, that does not mean platforms cannot attach corrective background information to those opinions, or that they are obliged to amplify them. For centuries, the primary channels for free speech were books and the press. Social media like Facebook have damaged much of that discourse system, and Facebook has a responsibility to repair it or build something better.
Technology companies face a dilemma
Racial tensions have been exacerbated by a spate of racist hate speech on the Internet in recent years. Platforms promote racist dynamics through their affordances, policies, algorithms, and corporate decisions (Matamoros-Fernández, 2017). Governments and tech giants have never been more united in trying to control hate speech online. In May 2016, the European Commission announced a partnership with Facebook, Twitter, YouTube, and Microsoft to counter hate speech online. The Internet giants collectively signed a code of conduct promising to “block and remove hate speech within 24 hours of being reported”.
Striking a balance between the two, however, will not be easy. Despite the problems with such content, Facebook has allowed most of it to remain on the platform. Facebook’s choices reassure those who spread misleading information, openly display racism, and incite violence. As Matamoros-Fernández (2017) notes, while platforms present their handling of speech as neutral, they “intervene” in public discourse and, like other technologies, often contribute to maintaining whiteness. Facebook says it stands on the side of free speech, when it has long stood on the side of profit and cowardice.
Case Study — Some streamers received hate speech while live-streaming on Twitch
Twitch, a live streaming video platform focused on esports and video games, launched in June 2011. Twitch broadcasts cover a wide range of games, spanning almost every genre on the market. When a streamer is about to end the day’s broadcast, they will sometimes encourage viewers to go watch another streamer’s channel, so many viewers can suddenly flood into that channel at once. Such “raids” are part of Twitch culture and typically signal one streamer’s support for another; they are a way of redirecting traffic between channels. On Twitch, where every streamer tries to reach more viewers, raids have become a common means of spreading popularity.
One streamer, Raven (known on Twitch as RekItRaven), received the opposite of support: a vicious “hate raid”. Hate raids arrive with little warning; a streamer gets little more than a notification before the live chat fills up with hateful messages. These messages are usually aimed not at the content of the stream but at the streamer personally. Matamoros-Fernández (2017) explains how abusive users can exploit a platform’s affordances to harass victims, creating and spreading hateful content or hijacking the technical infrastructure of social media sites for their own ends. Such comments are hard to deal with because they often come from mass-produced bot accounts, which drown out regular viewers’ conversation and make it difficult for streamers to mute or ban the offenders.
Raven said this was not the first such attack, and that their growing frequency since June had become overwhelming. In July, seeking help, Raven posted a clip to Twitter in which the stream’s chat was filled with racial slurs: “Hey, are black goths called Giggers?”

Figure 1: Racist comments in Raven’s live chat. Photo credit: Twitter@RekItRaven
The video quickly struck a chord with other streamers. Raven is not the only one plagued by hate raids and has heard similar stories from many others. Despite streamers’ attempts to moderate the abusive comments on air, the situation did not improve.
A frustrated group of streamers decided not to go live for a day in protest of the platform’s poor policing of hate speech. The protest eventually grew into the #ADayOffTwitch campaign, which set the Internet abuzz in the US.

Figure 2: Three streamers launched the #ADayOffTwitch campaign. Photo credit: Twitter@RekItRaven
Many streamers have taken it upon themselves to respond to hate raids. Matamoros-Fernández (2017) observes that platforms’ promotion of shareability encourages users to spread racist visual content stripped of context, and that sharing buttons are essential to social media, interconnected, and different across platforms. A streamer named Bee, for example, has set up a “panic button” against hate raids: with one press, Bee can disable alert boxes, turn off live chat, put chat into slow mode, and run ads, all in an instant. “I also have a reverse button that can undo those commands when things calm down,” Bee said.
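A setup like Bee’s can be approximated with a small script. The sketch below is a minimal illustration under stated assumptions, not Bee’s actual tool: it assumes a helper bot account with moderator privileges in the channel and a placeholder OAuth token (all names and credentials here are hypothetical), and it sends Twitch’s chat moderation commands over the platform’s IRC-based chat interface in a single burst, with a matching “reverse” list to undo them.

```python
# Minimal "panic button" sketch for Twitch chat, sent over Twitch's
# IRC-based chat interface. All credentials and names are placeholders.
import socket

HOST, PORT = "irc.chat.twitch.tv", 6667
TOKEN = "oauth:your_token_here"   # placeholder OAuth token for the bot account
BOT_NICK = "panic_bot"            # hypothetical bot account with mod privileges
CHANNEL = "#your_channel"         # placeholder channel name

# Lockdown: wipe chat, restrict it to established followers, slow it down,
# and block copy-pasted spam -- roughly the moves Bee describes.
PANIC = ["/clear", "/followers 30m", "/slow 30", "/uniquechat"]

# Reverse: lift the restrictions once things calm down.
REVERSE = ["/followersoff", "/slowoff", "/uniquechatoff"]

def send_commands(commands: list[str]) -> None:
    """Authenticate, join the channel, and fire each moderation command."""
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(f"PASS {TOKEN}\r\n".encode())
        sock.sendall(f"NICK {BOT_NICK}\r\n".encode())
        sock.sendall(f"JOIN {CHANNEL}\r\n".encode())
        for cmd in commands:
            # Moderation commands are sent like ordinary chat messages.
            sock.sendall(f"PRIVMSG {CHANNEL} :{cmd}\r\n".encode())

if __name__ == "__main__":
    send_commands(PANIC)      # one press locks the room down
    # send_commands(REVERSE)  # the "reverse button"
```

The design point is latency: during a hate raid a streamer has seconds to react, so bundling every restriction behind one action matters more than fine-grained control.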
How are tech companies dealing with hate speech?
Creating an account on Twitch is simple, and spinning up many accounts with bots is trivial. Unless Twitch adopts measures such as requiring phone verification at registration, extreme comments will be inevitable. In an interview, a Twitch spokesperson said the platform was working on tools to combat hate speech. Twitch also said in an earlier statement that it might improve account verification, strengthen detection of ban evasion, and even create a page dedicated to anti-harassment tools.
Like Twitch, many online social platforms have been mired in hate, racism, anti-Semitism, and misinformation. In the face of this growing hate speech, the tech giants are taking action. Facebook started flagging potentially dangerous speech and cracking down on hate speech in ads. Twitter updated its rules and suspended some accounts. Reddit banned many hate-speech communities from the top down.
In addition, many countries have enacted laws to check and balance platforms’ management of hate speech. In May 2020, the French National Assembly passed a bill on hate speech, which obliges platforms to remove hateful content posted by users; content involving terrorism or child sexual abuse must be removed within one hour. A platform that fails to respond promptly can face penalties of up to 1.25 million euros. The European Union has also begun work on a Digital Services Act, which would define platforms’ responsibilities for dealing with the risks their users face and for protecting users’ rights.
Conclusion
These case studies, and the experience of solutions the technology platforms have implemented, show that regulating hate speech on the Internet is a vast undertaking that must be tackled through multiple means. Tech companies should regulate hate speech on their platforms, but it is unrealistic to expect them to resolve the issue on their own. It is difficult for platforms to reach a consensus on the criteria for judging hate speech: if the standards are too loose, a platform is accused of weak enforcement; if they are too strict, it is accused of suppressing ordinary free speech.
References:
- Amnesty (2018). Women abused on Twitter every 30 seconds – new study. Amnesty International UK. https://www.amnesty.org.uk/press-releases/women-abused-twitter-every-30-seconds-new-study
- Flew, T. (2021). Regulating Platforms. Cambridge: Polity, pp. 91-96.
- Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, YouTube and Facebook. Information, Communication & Society, 20(6), pp. 930-946. https://doi.org/10.1080/1369118X.2017.1293130
- Simon, M. & Sidner, S. (2019). A gunman slaughtered 11 Jewish worshippers. Then people hunted for hate online. CNN. https://edition.cnn.com/2019/05/15/us/anti-semitic-searches-pittsburgh-poway-shootings-soh/index.html
- Vogels, E. (2021a). The State of Online Harassment 2021. Pew Research Center. https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/
- Vogels, E. (2021b). Online harassment occurs most often on social media, but strikes in other places, too. Pew Research Center. https://www.pewresearch.org/fact-tank/2021/02/16/online-harassment-occurs-most-often-on-social-media-but-strikes-in-other-places-too/