Online Hate Speech: The Challenge of Regulation


Introduction

In the age of the Internet, social media platforms such as Twitter and YouTube allow users to interact by sharing information and content. With over 4 billion people online, what circulates on these platforms has an enormous impact on society. We can share information to rescue stray animals or raise funds for areas hit by storms, and we can take part in public discussion of decisions made by institutions, including governments. But the Internet is a double-edged sword: hate speech, including violent and discriminatory content, has also begun to spread rapidly and widely across these platforms for free expression. Even though laws and platform measures have been introduced to control online hate speech, the problem of offensive messages appears to be getting worse. Critics of these controls argue that they unjustly infringe on the right to free speech, while others counter that the controls are imperfect, or even a mere facade, and fail to address the proliferation of online hate speech at its root (Howard, 2019). Why is online hate speech so difficult to address? How are its boundaries defined? And should we tolerate speech that carries hateful messages? These three questions are discussed below.

 

Why so free?

Although most users, regions, and countries insist that hate speech on the Internet should be regulated, social media platforms often turn a blind eye. The reasons they tend to ignore such speech fall into two categories: the exploitation value of hate speech and the blind spots of legal regulation. The exploitation value lies in the influence hate speech generates. A racially offensive post tends to provoke more attention and discussion than the bland life updates users scroll past on Twitter, because offensive content is inherently attention-grabbing and stimulates users' emotions (Sparks & Sparks, 2000). Such appeal, even with its negative effects, is compatible with the business goals of the Internet companies behind these platforms. Amid fierce competition among similar platforms, a company can only maximize its commercial interests through user data, advertising, and other measures if it first captures users' attention. For this reason, some platforms have chosen to tolerate hate speech as part of this vicious competition for engagement.

On the other hand, the law contains no specific provisions for hate speech on social media, and the issue remains an open regulatory question. In the United States, for example, the First Amendment constrains only government restrictions on speech; it says nothing about private platforms. Similarly, Section 230 of the U.S. Communications Decency Act (CDA) grants technology companies immunity from liability, so they cannot be sued over content their users post. This is effectively a free pass for social media platforms and Internet companies. As a result, platforms cast themselves as information carriers rather than gatekeepers, avoiding the responsibility of reviewing content.

In reality, it is not that lawmakers do not want to regulate, but that hate speech is harder to regulate on the Internet than in the physical world. One reason is weak industry self-regulation. Some countries prefer to govern the Internet through market forces and industry self-regulation, but the Internet's commercial operation is profit-oriented, which means it cannot truly be regulated in the public interest. Moreover, some Internet companies profit by exploiting differences between national regulations, and self-regulatory arrangements within the industry remain fragile (Haufler, 2013). The other reason is the borderless nature of the Internet, which spans countries, regions, and cultures and serves an enormous user base. Beyond the difficulty of applying regulations at all, it is hard to prevent new alternative sites or applications for hate speech from emerging whenever existing ones are policed.

 

Is it hate speech or not?

In the offline world, freedom of speech is protected by national law. On the Internet, however, it has become difficult to define. On social media platforms, users can share any speech or content they wish. Even where racist, sexist, and other content targeting vulnerable groups is defined as illegal, the absence of concrete criteria leaves judgments to subjective understanding, producing divergence over where free expression ends and hate speech begins (Fiss, 2018). For example, in June 2019, YouTube faced controversy for refusing to remove videos by conservative commentator Steven Crowder, in which Crowder used a homophobic slur to describe Vox reporter Carlos Maza. YouTube argued that Crowder, as a political commentator, had not violated platform rules because his videos mainly expressed political views, and so it did not remove them. Although YouTube subsequently demonetized Crowder's channel, it imposed no other penalties, a decision that left some groups upset that YouTube had discarded its own code of ethics for commercial profit. The definition of hateful content on the web, and especially on social media platforms, often depends on the standards of the companies themselves. Those standards, whether matters of interpretation or of profit, are inevitably subjective and are enforced according to the values of those who apply them. What one group considers free and legitimate speech another may consider inflammatory, and the line defining hate speech is difficult to draw.

This contradiction in commercial moderation raises a further controversy: in the digital age, can Internet technology address hate speech more efficiently, using artificial intelligence and algorithms to supplement purely manual vetting? The answer is that algorithms are useful but incomplete. An algorithm can accurately capture and block offensive text containing discrimination and violence, and it can be improved by increasing model complexity and expanding its training data, but it has limits. A one-size-fits-all approach breaks down when the algorithm encounters subtle distinctions (Tolan, 2019). It cannot, for instance, recognize nuanced political content, and blanket rules lead to mistaken removals. One example of such an algorithmic misfire is Facebook's removal of left-leaning political cartoons. These cartoons often rely on satire to make their point, such as opposing Trump's pandemic response measures (see Figure 1). In one, doctors who have struggled to improvise masks present them to Trump, who remarks that they make people look weak; the cartoon closes by satirizing Trump's promotion of the drug hydroxychloroquine.

While algorithms can process large volumes of content quickly, the complex task of sorting user-uploaded material into acceptable and rejected piles goes far beyond what software alone can manage, quite apart from the sheer scale of the information (Roberts, 2019). The material includes social, cultural, commercial, and legal content from around the world, demanding fine-grained distinctions between regional contexts, as the political satire example shows. A cartoon may read as a satire of Trump with no genuinely abusive content, but for an algorithmic, or even a human, reviewer unfamiliar with the regional culture or politics, the satire can be hard to parse, leading to wrongful deletion. Some attacking content lends itself to automatic filtering, but when overly complex content is mishandled, new problems arise, such as algorithmic discrimination.

Figure 1. Cartoon satirizing Trump

Source: The New York Times
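To make the limitation concrete, the following is a minimal sketch, in Python, of the kind of one-size-fits-all keyword filtering described above. Everything here is hypothetical: the blocklist tokens are placeholders, and no real platform's system is this simple. The point is only that a context-blind filter cannot distinguish an attack from a report or a piece of satire that mentions the same word.

```python
# Hypothetical sketch of context-blind keyword filtering; the tokens in
# BLOCKLIST are placeholders standing in for actual slurs.
BLOCKLIST = {"slur1", "slur2"}

def naive_filter(post: str) -> bool:
    """Flag a post if it contains any blocklisted token, ignoring context."""
    words = {w.strip('.,!?"\'').lower() for w in post.split()}
    return bool(words & BLOCKLIST)

# Three very different posts receive identical treatment:
attack = "You are a slur1."                                           # abuse
report = 'The senator was condemned for calling a rival a "slur1".'   # news
satire = "Apparently calling people slur1 counts as debate now."      # satire

for post in (attack, report, satire):
    print(naive_filter(post), "->", post)
# All three print True: the filter cannot tell use from mention, which is
# exactly the kind of subtle distinction Tolan (2019) points to.
```

Real classifiers are of course more sophisticated than this, but as Roberts (2019) argues, the underlying problem of contextual judgment does not disappear with scale.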

 

Tolerance or Struggle?

The controversy surrounding hate speech has persisted as major online platforms continue to tighten their rules against it. Opponents of regulation argue that restricting one type of speech compromises the right to freedom of expression as a whole, and that hate speech merely reflects attitudes that already exist in society. Others counter that while some people enjoy their freedom of speech, vicious hate speech forces its targets to live with the harm of discrimination. Whenever restrictions on hate speech are proposed, opponents declare them an inexcusable violation of free speech. But restricting hate speech is not a means of shackling the minds of the masses or papering over real-world conditions; it is a reasonable protection of the Internet's architecture and of users' sense of self-efficacy and security.

The deterioration of the Internet’s environmental architecture and tolerance of hate speech has greatly affected the civility of online discourse and may ultimately lead to a lack of rationality and the manipulation of human autonomy by malicious messages (Kozyreva, Lewandowsky & Hertwig, 2020). Many people are not sensitive to hate speech or do not perceive malicious messages to be cognitively impactful enough for them. This is because they are in a dominant position in the social structure and ignore the oppression of hate speech against vulnerable groups such as women, minorities, and sexual minorities(Bilewicz & Soral, 2020). An unrestricted tolerance of hate speech can cause minorities to suppress their expression out of fear, thus becoming more silent and marginalized. When hate speech creates this fixed environment of violence and anger on social media, on the one hand, it can negatively affect the social platforms themselves, creating a violent or negative brand image for users and attacking the commercial profits that Internet companies are most concerned about as business companies. On the other hand, indulging in hate speech can leave vulnerable groups under attack by hate speech for a long time, and even have an impact on their lives forming social prejudice. For example, the algorithm’s discrimination problem, as discussed earlier, the algorithm is not mature enough to screen hate speech. Incomplete algorithms are likely to select users’ hate speech for learning, and even reinforce such discrimination in their learning, such as the discriminatory and pornographic results that appear in the Google search engine for black female terms.

 

Conclusion

The ease and anonymity of information sharing on social platforms give people the freedom to play whatever role they wish, even that of a producer of hate speech. But everyone has the right to be free from hostility, violence, and discrimination, on the Internet as in the physical world, and hate speech undermines this important public good, stripping its targets of the dignity they should enjoy in society. The corrosion of the Internet's architecture by malicious messages degrades the environment for civilized dialogue and can even impair users' sense of autonomy and sound judgment. The difficulty is that hate speech remains poorly defined, owing to subjective interpretation and regulatory blind spots. Therefore, beyond legal requirements and the assistance of corporate algorithms in stopping hate speech, we must also attend to the issues of gender, race, and class that appeals to freedom obscure when we debate whether hate speech is free speech. Restricting hate speech is not only a way to respect and protect each group, but also a way to make dialogue on the Internet more valuable and meaningful.

References

Bilewicz, M., & Soral, W. (2020). Hate speech epidemic. The dynamic effects of derogatory language on intergroup relations and political radicalization. Political Psychology, 41, 3-33.

Fiss, O. M. (2018). Liberalism divided: Freedom of speech and the many uses of state power. Routledge.

Haufler, V. (2013). A public role for the private sector: Industry self-regulation in a global economy. Carnegie Endowment.

Kozyreva, A., Lewandowsky, S., & Hertwig, R. (2020). Citizens versus the internet: Confronting digital challenges with cognitive tools. Psychological Science in the Public Interest, 21(3), 103-156.

Roberts, S. (2019). Understanding commercial content moderation. In Behind the screen: Content moderation in the shadows of social media (pp. 33-72). Yale University Press. https://doi.org/10.12987/9780300245318-003

Sparks, G. G., & Sparks, C. W. (2000). Violence, mayhem, and horror. In Media entertainment: The psychology of its appeal (pp. 73-92).

Tolan, S. (2019). Fair and unbiased algorithmic decision making: Current state and future challenges. arXiv preprint arXiv:1901.04730.