Please stop hate speech online!

With the rapid development of the Internet, online platforms have created a new space for life and culture and have promoted communication and exchange among netizens, leading to exponential growth in the volume and variety of information and speech online. At the same time, the Internet facilitates the spread of hate speech, contributing to hate crimes, racial discrimination, and other problems with significant negative social impact. Racial hate speech on the Internet inevitably exacerbates prejudice and mistrust among ethnic groups and, if not addressed properly, may lead to serious criminal acts.

Hate speech encompasses not only prejudice and discrimination against a group or an individual on the basis of ethnicity, religion, or gender, but also severe insults and attacks on others simply because their beliefs differ. Statements and actions of this kind abuse the freedom of expression and are unwelcome. Hate speech, whether in the real world or online, violates human rights, equality, and decency.

The history of hate speech

The development of hate speech can be traced through three key periods. In the 1920s, against the background of rampant Nazi extremist organizations, hate speech was almost synonymous with racist speech. In the 1940s, the scope of hate speech gradually expanded and came to be described as “group libel.” By the 1980s, academics in the United States began to focus on the issues of “discrimination” and “equal rights protection” that underpin hate speech.

Historically, hate speech has also been referred to as “racial hate speech,” “sexist speech,” and so on, and related categories include provocative speech, offensive speech, and low-value speech.

According to Don R. Pember’s Mass Media Law, hate speech is defined as written or spoken comments that target a person because of his or her race, ethnicity, religion, gender, or sexual orientation; such abusive statements may cause injury to others, including particularly serious psychological harm. Richard Delgado and Jean Stefancic define hate speech as “racial slurs, nicknames, or other harsh phrases used solely to damage or marginalize another individual or group.”

Hate speech: definition and concept

Hate speech is a problem with a long history. Following World War II, European countries began to prohibit hate speech in an attempt to curb racial and religious prejudice. Speech that incites racial and religious hatred is illegal in the United Kingdom and Canada, while hate speech that overtly targets ethnic minorities is illegal in Germany. A more comprehensive definition is provided by the International Coalition Against Cyber Hate.

According to the International Coalition Against Cyber Hate’s “What is Cyber Hate” report, hate speech is speech that, on the basis of a person’s or a group’s real or perceived ethnicity, race, language, nationality, skin color, religion or lack of religion, gender, gender identity, sexual orientation, political belief, social status, property, age, mental health, disability, or illness, intentionally stirs up hatred, violence, or exclusion, or discriminates publicly, whether intentionally or unintentionally.

Simultaneously, we must address racism, xenophobia, anti-Semitism, anti-Muslim prejudice, anti-Roma prejudice, homophobia, and misogyny.

Hate speech, in general, is any form of discrimination, insult, or attack directed at particular group identity features, such as race, ethnicity, religion, or gender, with the intent of causing harm. In the information era, new technologies, particularly social media, are increasingly used to promote hate speech and harm people.

Characteristics of the Internet’s Spread of Hate Speech

In comparison to traditional media, the Internet as a conduit for hate speech has distinct characteristics.

  • A wide range of modes of expression. Traditional hate speech is spoken or written, whereas the Internet’s rich media environment provides a venue for every type of content. Pictures, audio, and video, in addition to text, all add emotional charge. The diversity of expression modes makes online hate speech harder to regulate, for two main reasons. First, the “symbolic discourse” embodied by emoticons frequently takes the form of mockery and irony, making the underlying bias, discrimination, and hostility difficult to control. Second, hate speech of this kind encourages others to join in “carnival” communication, giving rise to implicit and widespread negative social sentiment toward specific groups in cyberspace.
  • Anonymity and a low threshold for speech dilute personal accountability. Anonymity gives many individuals a sense of psychological security, allowing them to engage in hate speech free of the pressures of real life. Furthermore, the Internet’s decentralization lowers the barrier to publishing, with two consequences. First, expressions of hostility, hatred, and other abusive language face no barriers and appear in large numbers, and the only available control is after-the-fact investigation. Second, anonymity makes it difficult to identify those responsible for offensive communication: how can the many principals hidden behind network IDs be traced, and who among the audience is the source of the hate speech?
  • High transmission speed and broad reach. Immediacy is a key characteristic of the Internet: pressing the Send key uploads content instantly. The pace at which online hate speech is regulated cannot keep up with the pace at which it spreads, so its propagation cannot be effectively contained. Meanwhile, information can be disseminated via the Internet to any part of the globe in moments, and this wide circulation makes hate communications even harder to monitor and regulate.
  • Chain reactions within groups. This clustering behavior takes more than one form; most commonly, a dominant group targets a disadvantaged group. The dominant group may be the dominant group in real life or a comparable group formed in the online environment, such as one side of an opposing-viewpoints divide. Common motives behind such group behavior include seeking identity, building group bonds, wanting a sense of presence and attention, and finding an outlet for pressure in the anonymous online environment.

Case analysis of online hate speech

On March 15, 2019, the Australian Brenton Tarrant broke into two mosques in Christchurch, on New Zealand’s South Island, and opened fire, killing 51 people and injuring dozens of others. It was the deadliest terrorist attack in New Zealand’s history. During the incident, the killer broadcast the massacre live on Facebook, and thousands of people watched it. In the first 24 hours after the attack, Facebook removed 1.5 million videos of the shooting, of which 1.2 million were blocked at upload.

Two months after the attack, New Zealand’s Prime Minister and France’s President launched the Christchurch Call to combat the spread of “terrorist and violent extremist” content online. Amazon, Facebook, Dailymotion, Google, Microsoft, Qwant, Twitter, and YouTube are among the tech corporations that agreed to join the call to action. Meanwhile, Microsoft, Twitter, Facebook, Google, and Amazon issued a joint statement pledging to work together on concrete measures to combat the use of technology to promote terrorist propaganda.

Why launch the Christchurch Call against online hate speech, which aims to encourage tech companies and countries to work together to end the promotion of terrorist acts through social media platforms? Here is my opinion:

  • It is difficult to erase data. When terrorists and perpetrators of extreme violence use the Internet to disseminate their message, it causes social unrest. Moreover, when covering such incidents, the media should adhere to ethical standards and avoid amplifying terrorist and violent extremist content. Once such content circulates online, it harms the victims’ human rights and the safety of society as a whole, with the risk of secondary harm to the victim group.
  • It increases hate crimes. The primary goal of disseminating such content is to promote or incite hatred. Racial hate speech on the Internet deepens prejudice and distrust among ethnic groups, seeks to exclude and divide entire groups, intensifies conflict, and entrenches racial discrimination, leading to an increase in hate crimes.
  • It degrades the online environment. In May 2016, the European Commission announced that it would work with Facebook, Twitter, YouTube, and Microsoft to combat illegal hate speech on the Internet, in order to clean up the online environment and provide a welcoming cyberspace for users.

 

Disparities in hate speech governance around the world

Countries differ significantly in how they perceive and regulate hate speech, owing to historical and cultural differences.

Following the attack, then-President Donald Trump’s administration stated that the US would not join the Christchurch Call, which was spearheaded by New Zealand and France. “While the United States is unable to participate at this time, we continue to support the call’s overarching goals,” the White House said in a statement. “To prevent the dissemination of terrorist content on the Internet, we will continue to collaborate with governments, industry, and civil society.” The statement used strong language to denounce terrorist and violent extremist content online, but the United States declined to join the Call, citing “freedom of expression” and “freedom of the press” as grounds.

Unlike the United Kingdom, Canada, and Germany, the United States has no legislation prohibiting hate speech on the Internet. On Twitter, US President Donald Trump made many derogatory remarks about Mexicans; his opponents accused him of hate speech, while his supporters argued that he was exercising his right to free speech. Because of these differences in perception, the US faces greater pressure in dealing with online hate speech, the social harm caused by online hatred grows by the day, and the results of governance remain unsatisfactory.

France is considered to have one of the strictest legal regimes on hate speech in the world, and its hate speech laws have a long history. Since the Law on the Freedom of the Press of 1881, French law has prescribed the boundaries of freedom of speech as well as penalties for defamation, false information, racial, ethnic, and religious discrimination, insult, incitement to hatred, and incitement to violence.

Conclusion

The growth of hate speech on the Internet has three major consequences: first, the information is difficult to erase; second, it pollutes the online environment; and third, it spills over into real life.

First, once a message is posted to the Internet, it is preserved indefinitely. As a result, a great deal of severely invasive, hostile, and threatening hate speech remains online, causing long-term and latent harm to victim groups and individuals.

Second, hate speech pollutes the online environment and threatens its health, and the harm and aggression it contains do not fade with time. Follow-up discussion, or even follow-up attacks, may occur, leaving the victim and the targeted group to suffer long-term consequences. More significantly, a “spiral of silence” takes hold: the dominant, attacking group becomes ever more dominant, while the weaker group falls silent under pressure, leaving implicit negative sentiment and pressure to accumulate in the online space.

Finally, as the physical and virtual worlds become increasingly intertwined, emotional impact online turns into acceptance and endorsement of hate speech in real life, shaping people’s perceptions of certain groups and even triggering offline violence, thereby threatening public safety.

Companies that operate social media platforms should consciously take on their social responsibilities, provide technical support for managing hate speech, respond quickly to infringement reports, and clearly flag infringing content. From a moral and ethical standpoint, it is not permissible to use offensive language, to attack people’s race or identity, or to use language to discriminate against and harm others. Finally, the goal is to cultivate “digital citizens” who receive adequate and responsible education about social media and who understand their political, social, and cultural rights, their right to share information on social media, and the consequences of violating social media rules.
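To make “technical support for hate speech management” slightly more concrete, below is a minimal, hypothetical Python sketch of how a platform might triage user-submitted infringement reports so that moderators see the most urgent items first. All names (Report, HATE_TERMS, triage_reports) and the keyword-based score are illustrative assumptions standing in for a real trained classifier; this is not any actual platform’s system.

    # Hypothetical sketch only: names and thresholds are illustrative assumptions,
    # not any real platform's moderation API. Real systems pair trained classifiers
    # with human review.
    from dataclasses import dataclass

    # Toy term list standing in for a trained hate-speech classifier.
    HATE_TERMS = {"slur_a", "slur_b"}

    @dataclass
    class Report:
        post_id: str
        text: str
        report_count: int  # number of users who reported this post

    def hate_score(text: str) -> float:
        """Fraction of tokens matching the toy term list (placeholder for a model score)."""
        tokens = text.lower().split()
        return sum(t in HATE_TERMS for t in tokens) / len(tokens) if tokens else 0.0

    def triage_reports(reports, threshold=0.1, mass_report_floor=10):
        """Order reports for human review: likely-hateful or widely reported posts first."""
        flagged = [r for r in reports
                   if hate_score(r.text) >= threshold or r.report_count >= mass_report_floor]
        return sorted(flagged, key=lambda r: (hate_score(r.text), r.report_count), reverse=True)

    if __name__ == "__main__":
        queue = [
            Report("p1", "an ordinary post about cooking", 1),
            Report("p2", "slur_a slur_b aimed at a minority group", 4),
            Report("p3", "a borderline post reported by many users", 12),
        ]
        for r in triage_reports(queue):
            print(r.post_id, round(hate_score(r.text), 2), r.report_count)

The point of the sketch is the workflow rather than the scoring: reported posts are prioritized for human review rather than deleted automatically, so that moderators make the final decision and flagged content can be labeled or removed quickly.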

References

Bryan A. Garner (ed.). Black’s Law Dictionary (10th ed.). Thomson Reuters, 2014, p. 1618.

Roni Cohen. Regulating Hate Speech: Nothing Customary About It. 15 Chi. J. Int’l L. 229, 2014–2015.

Yahoo. Twitter, Facebook join global pledge to fight hate speech online [OL]. (2019-05-16) [2019-08-06].

Cato at Liberty. 82% Say It’s Hard to Ban Hate Speech Because People Can’t Agree What Speech Is Hateful [OL]. (2017-11-08) [2019-08-08].

European Commission. Code of Conduct on Countering Illegal Hate Speech Online: Fourth Evaluation Confirms Self-Regulation Works [R]. Brussels: Directorate-General for Justice and Consumers, 2019.
