- 1. Introduction
Online hate speech is speech that attacks or demeans a person or group on the basis of attributes such as gender, race, religion, ethnicity, disability, or sexual orientation, and that deliberately derogates, intimidates, or incites violence and prejudice against the targeted group. With the development of the Internet, more and more people share their ideas and comments through smartphones and other devices, and the main battlefield of hate speech has moved to cyberspace. Cyberspace, however, is closely connected with the real world and its complexity is increasing day by day, so the problem of online hate speech urgently needs to be addressed. This blog introduces the causes and harms of online hate speech and analyzes possible countermeasures through relevant case studies.
- 2. Why online hate speech is difficult to govern
Online hate speech is hard to define. Hate speech includes not only spoken and written expression but also "symbolic speech." The media richness of the Internet provides a platform that accommodates many types of information, and the decentralization of content production means that everyone can use pictures, audio, and video in addition to text. Emoticons and meme images, for example, which combine text, pictures, and even animation, have become an important vehicle for hate speech.
The second reason is the sheer scale of Internet use and the low threshold for publishing. As of June 2020, the number of Internet users in China had reached 940 million, with an Internet penetration rate of 67.0% (China Internet Network Information Center, 2020). The decentralization of the Internet has lowered the access threshold for publishing information that existed in the traditional media era, which leads to two problems. First, negative content such as hate speech is published without any prior screening and in large volumes, so regulation can only be achieved through after-the-fact review. Second, the anonymity of the Internet makes it harder to hold speakers accountable: how can the many individuals behind a network ID be traced, and who among the many participants is the original author of a piece of hate speech? Anonymity also gives users a layer of "psychological protection", often referred to on the Chinese Internet as wearing a "vest" (a sockpuppet account), which lets users disregard real-world pressure and join the "carnival" of expressing hate speech.
The third reason why hate speech is difficult to control is that online information spreads quickly and widely. An important feature of the Internet is immediacy: once the send button is pressed, content is uploaded to the network instantly. Supervision of online hate speech often cannot keep up with the speed of its dissemination, which makes effective regulation impossible. At the same time, information on the Internet can reach any corner of the world in a short time, and this wide reach further increases the difficulty of monitoring and regulating hateful content.
- 3. Cases of online hate speech
3.1. Jacob Wohl and Tommy Robinson
The removal of the accounts of two high-profile right-wing figures has caused widespread concern. First there was Jacob Wohl, a well-known online prankster and conservative conspiracy theorist, who was banned from Twitter for setting up fake accounts.

Jacob Wohl’s account
A Twitter representative said that his account had been disabled for multiple violations of the platform's rules, which do not allow users to create and operate fake accounts. Wohl, a staunch Trump supporter, used the platform to spread false information about Special Counsel Robert Mueller, Supreme Court Justice Ruth Bader Ginsburg, and Representative Ilhan Omar. Wohl's Facebook account was still active at press time, and neither party responded to requests for comment. In a USA Today article on Tuesday morning, Wohl said he planned to create several fake Facebook and Twitter accounts to channel left-wing votes during the primary toward what he sees as weaker candidates.

Tommy Robinson’s account
Tommy Robinson, a far-right figure and founder of the English Defence League, was banned from Facebook and Instagram for breaking hate speech rules. In a blog post on Tuesday, Facebook said there was no place for organized hate on its platform; Robinson's account had repeatedly run afoul of the rules by posting extreme material and promoting violence against Muslims. Robinson had already been banned from Twitter the previous year. Facebook and Instagram did not respond to requests for comment on the latest bans. In another high-profile case last year, several companies banded together to block conspiracy theorist Alex Jones's InfoWars over hate speech and misinformation on political social networks. In 2017, Facebook disclosed that it removed about 66,000 posts a week involving hate speech. Nevertheless, Mr. Trump and some conservatives have expressed concern about perceived bias against right-wing figures and groups.
Different countries have different policies toward online hate speech. The United States and Germany are the most representative examples of how hate speech is restricted. The former is more tolerant of hate speech, preferring to accept a certain degree of harm rather than derogate from the value of free expression; the latter strictly restricts hate speech, at the expense of some aspects of freedom of speech, in order to protect the personal dignity of citizens (Jiang, 2015).
3.2. French "keyboard warriors" on Twitter
Another example of online hate speech occurred on Twitter. Four French "keyboard warriors" were sentenced by a French court on Monday to fines and compulsory civic education for posting hate speech insulting Asians, especially Chinese people, on Twitter. The four are French students aged between 19 and 24 with no previous criminal convictions, AFP reported Wednesday. According to the prosecution, the defendants had been spreading insults against Chinese people on social media, falsely claiming that the novel coronavirus was spread by Chinese people and posting malicious remarks. For example, one wrote: "Put me in a cage with a Chinese, and I will tease him, break him, and drive him to despair." "I am not a virus" and "Stop viral hate speech" became popular hashtags in France, and protesters gathered outside the courtroom during the hearing.
The four students were found guilty of inciting hatred and publicly insulting others. In addition to the prosecution's costs, they must pay fines of about 1,000 euros and undergo two days of mandatory civic education, according to the verdict. Some legal experts said the sentence highlighted the importance the French judiciary attaches to online hate speech, while others said it was too light.
There is no doubt that states differ in their views on how far societies should restrict speech, that is, on how to strike a balance between one person's fundamental right to express himself and another person's fundamental right to security. For example, the Council of Europe's standards and practices on addressing hate speech guide the work of its Committee of Experts on Combating Hate Speech (ADI/MSI-DIS), which has prepared a draft recommendation on a comprehensive, human-rights-based approach to hate speech, including in the online environment. The final recommendation, to be adopted by the Committee of Ministers, will provide non-binding guidance to member states; it is based on the relevant case law of the European Court of Human Rights and pays particular attention to the online environment, where most hate speech occurs today (Council of Europe, 2022).
- 4. Solutions
4.1. Identification of hate speech online
Twitter has said that the UK's Online Safety Bill needs more clarity (Fenwick, 2021), which underlines how important it is to improve the identification of online hate speech. Whatever its causes, online hate speech differs from real-world hate speech in its concealment: the anonymity and fluidity of the Internet allow those who spread hatred to use legally ambiguous substitute language, as well as innocuous-looking domain names and website names, to disguise the hateful nature of their content and to foster prejudice. Because of the breadth and depth of dissemination and the lack of transparency in the process, the internal dynamics of how online hate speech spreads are often difficult to grasp; and if we cannot trace that spreading process, the speech's discriminatory and hateful intent toward vulnerable groups also remains hidden in binary code. Language itself is ambiguous, so it is not easy to judge the nature of an utterance even in the offline world, and it is even harder in digital media to determine whether speech carries discriminatory or hateful intent. The concealment created by the network and by language can be addressed from two directions: human identification and legal identification (Chetty & Alathur, 2018).
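As a purely illustrative aside (not drawn from the cited sources), the sketch below shows, under simplified assumptions, how a platform might pre-screen posts and queue suspicious ones for human review; the pattern list and sample posts are hypothetical placeholders, and real moderation systems rely on far richer linguistic and contextual signals than keyword matching.

```python
# Illustrative sketch only: a naive pattern screen that flags posts for
# *human* review. The patterns and threshold are hypothetical placeholders.
import re
from typing import Dict, List

# Hypothetical list of dehumanizing or coded expressions a moderator watches for.
SUSPECT_PATTERNS = [
    r"\bgo back to\b",
    r"\bsubhuman\b",
    r"\bvermin\b",
]

def flag_for_review(posts: List[str]) -> List[Dict[str, object]]:
    """Return posts matching any suspect pattern, with the matches listed.

    A match does NOT prove hateful intent; it only queues the post so a human
    reviewer can judge context, which the discussion above argues is essential.
    """
    flagged = []
    for post in posts:
        hits = [p for p in SUSPECT_PATTERNS if re.search(p, post, re.IGNORECASE)]
        if hits:
            flagged.append({"post": post, "matched_patterns": hits})
    return flagged

if __name__ == "__main__":
    sample = ["Nice weather today.", "They are vermin and should go back to where they came from."]
    for item in flag_for_review(sample):
        print(item)
```

Even such a crude filter makes the division of labour concrete: automation narrows the volume of content, while the judgment about discriminatory intent stays with people and, ultimately, with the law.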
Human identification of online hate speech. The complexity and concealment of online hate speech mean that the skills needed to recognize it must be learned. Individual users' responses to online hate speech have only a limited effect on reducing it, but a sense of responsibility among users can both build a culture of intolerance toward online hate and allow us to regain the initiative in cyberspace (Banks, 2011).
4.2. Technical regulation
Technical supervision should also be given full play. Because of the anonymity and replicability of the Internet, the law alone has only a limited effect in eliminating online hate speech, and technical supervision supplements the legal model. Unlike the real world, which is governed by laws and other norms, cyberspace requires us to look at its operation from a perspective beyond the law: its rules are made up of "code" composed of software and hardware, and hate speech can be blocked or removed through that code. At the same time, geolocation technology can restrict users' access to hate speech content, or even filter it out, by identifying users' IP addresses. In addition, web users can install filtering software; SurfWatch, for example, can be installed on a computer to filter out sites involving violence and hate speech. Technical supervision can therefore reduce hate speech to a certain extent (Ullmann & Tomalin, 2020).
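To make the idea of code-level filtering concrete, here is a minimal sketch of a client-side URL blocklist in the spirit of the filtering software mentioned above. The blocked domains are hypothetical placeholders; commercial filters such as SurfWatch use curated category lists and user-defined rules rather than a hard-coded set.

```python
# Illustrative sketch only: a tiny client-side URL filter based on a blocklist.
# The domains below are hypothetical placeholders for demonstration.
from urllib.parse import urlparse

# Hypothetical blocklist maintained by a parent, school, or employer.
BLOCKED_DOMAINS = {"hate-example.org", "violence-example.net"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is a blocked domain or one of its subdomains."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

if __name__ == "__main__":
    for url in ["https://hate-example.org/post/1", "https://news.example.com/"]:
        print(url, "->", "blocked" if is_blocked(url) else "allowed")
```

The same blocklist logic can sit in a browser extension, a home router, or a network proxy; the point of the sketch is simply that "regulation by code" operates at the level of software rules rather than legal ones.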
4.3. Education
Education plays an important role in reducing hate speech. Human rights education can promote Internet users' understanding of cultural diversity, foster respect for and tolerance of other countries, minority groups, and religious beliefs, and thereby reduce online hate speech. It can also help users identify online hate speech and strengthen their sense of responsibility to oppose it. Strengthening human rights education is therefore an indispensable part of dealing with hate speech on the Internet. Internet operators can include human rights education in their terms of use, improving users' understanding of hate speech and informing them that hateful posts may be deleted or blocked. In schools, respect for human dignity and rights, respect for cultural diversity, and acceptance of differences between groups and individuals should run through all stages of education so as to cultivate qualified digital citizens for the new era. At the same time, digital media should be used as a new tool to guide rational debate when citizens express their views freely, so as to reduce hate speech.
- 5. Conclusion
New media are quietly influencing our lives and changing our thinking, but the harm of hate speech has also been amplified exponentially by digital technology, reaching a wider audience than ever before and gradually eroding the right to equality. We should therefore be alert to the adverse consequences of new media and reflect deeply on the crisis of equality caused by hate speech. Moreover, to overcome the limitations of Internet technology and the discrimination that media power inflicts on vulnerable groups, law, technical supervision, and education should be combined to remedy the infringement of the right to equality caused by online hate speech. At the same time, we should keep a balance between equal rights and freedom of speech and avoid over-correction.
REFERENCES
China Internet Network Information Center. (2020, September 29). The 46th Statistical Report on Internet Development in China. Retrieved from http://www.cac.gov.cn/2020-09/29/c_1602939909285141.htm
Jiang, Y. (2015). On hate speech and its limitations (Doctoral dissertation). Tsinghua University.
Wu, Y. Y. (2020). Communication characteristics of Internet hate speech and discussion on its governance. Guide to News Research, 11(24), 57–58.
Council of Europe. (2022). Online hate speech. Retrieved from https://www.coe.int/en/web/cyberviolence/online-hate-speech
Fenwick, J. (2021, October 23). BBC News. Retrieved from https://www.bbc.com/news/uk-politics-59010723
Chetty, N., & Alathur, S. (2018). Hate speech review in the context of online social networks. Aggression and Violent Behavior, 40, 108–118.
Banks, J. (2011). European regulation of cross-border hate speech in cyberspace: The limits of legislation. European Journal of Crime, Criminal Law and Criminal Justice, 19(1), 1–13.
Ullmann, S., & Tomalin, M. (2020). Quarantining online hate speech: Technical and ethical perspectives. Ethics and Information Technology, 22(1), 69–80.