Online Hate Speech

Introduction

Digital platform companies have increasingly attracted the attention of their users and of other stakeholders such as regulators, policy-makers, and legislators. Among the reasons for this trend are the persistence of online hate speech and harassment and the growing threat to privacy in the digital age. These vices have continued even though the platforms are widely believed to have a mandate to moderate content. The challenge has been compounded by the rapid expansion of the platforms as more people acquire smartphones and other devices and sign up as users. Given the large amounts of personal data under the control of these companies, cases of fake news, hate speech, and harassment have been increasing. Because online issues mainly concern individual privacy, there is a widespread belief that social media platform companies cannot adequately handle them. Hate speech is an online issue of great public concern, and it has thrived because of monopoly power, various complexities, and the inability of online social platforms to address public concerns.

Online Hate Speech

Hate Speech

Hate speech encourages hatred against the targeted person. It should be noted, however, that hate speech and its delivery do not always directly incite people to public violence. This is because many individuals and groups who use hate speech tend to couch their arguments as irony or jokes, or to dress them in scientific language. As such, there is a continuum between discriminatory statements and statements that advocate violence. It is this continuum that makes it easier for individuals to excuse their hate speech, and for digital and internet platforms to evade responsibility for the manner in which it is handled. Whichever form it takes, the bottom line is that hate speech violates the principles of human rights and must be addressed, particularly as online interaction possibilities expand and tolerance declines. As such, the need for regulations to keep such intolerance in check must be emphasized if the problem of hate speech is to be solved.

The Problems of Online Hate Speech

Though cyberspace was initially thought to be borderless, the Internet has become largely platformed by a few technology monopolies such as Facebook, Twitter, and YouTube. Despite their rapid growth, these platforms have not proven their capability and commitment to address issues of public concern, and there is still no effective way to regulate them (Flew, 2021). In the US alone, 41 percent of Internet users claimed to have experienced online harassment, and 18 percent of the reported cases were severe and involved physical threats (Johnson, 2022). Though most cases began as harassment, they degenerated into hate speech and may negatively affect a wide audience. For example, the #gamergate controversy raised both cultural and gender diversity issues in video games. The game developers Quinn and Wu, and the prominent media critic Anita Sarkeesian, were harassed on both Reddit and Twitter. They were ultimately forced to flee their homes over safety concerns after their harassers posted their home addresses online.

Online companies also fail to act in time on online hate speech targeting individuals. This was witnessed when Jo Cox, a UK Labour MP, was murdered by Thomas Mair prior to the 2016 EU membership referendum. The Home Affairs Committee of the House of Commons made serious findings on consistent cases of online hate speech, extremist content, and abuse targeted at women and at ethnic and racial minorities on platforms such as Google, Facebook, and YouTube. Surprisingly, even after the committee pointed out to Google cases of hate speech posted by right-wing extremist groups, the company did not act immediately, despite the unacceptable and inappropriate content appearing alongside one of YouTube's adverts. Instead, the company merely chose to moderate the content. The case demonstrates that platforms act with restraint on illegal or hateful content yet take swift action when content infringes copyright rules.

The same observation was made by Matamoros-Fernández (2017), who described the laxity of social media platforms in addressing racist hate speech as “platformed racism.” The scholar examined racism facilitated and promoted by social platforms as a unique form of racism generated by the culture of these platforms through their policies, technical affordances, design, and business models. Social media platforms are criticized because they amplify and manufacture racist discourse, while their governance models reproduce, and fail to address, racial inequality.


A good case study on this issue is Facebook's 2015 banning of a trailer for a new Australian Broadcasting Corporation comedy. The video was categorized as offensive under the platform's nudity policy because it included bare-chested images (Flew & Martin, 2022). An Indigenous activist then reposted the video on her page to criticize the platform's standards, and other users amplified the post. Ultimately, her account was blocked and the video removed. The same thing happened a year later when New Matilda posted the same video accompanied by similar images. Although the video was again taken down, Facebook issued a public statement only after the case was reported by the media. Even then, the company defended its nudity restriction in cultural terms, stating that the restriction existed because the content might offend the cultural beliefs of some of its users.

Facebook was also known for its reluctance to ban racist pages targeting Aboriginal users. This was evident in 2012 and 2014, when the Online Hate Prevention Institute negotiated extensively with the company to remove pages that amounted to racist attacks on Indigenous Australians. In its initial ruling, the company maintained that the pages did not contravene its terms of service; instead of deleting them, it merely required the creators to rename them. Facebook yielded only after the Australian Communications and Media Authority became involved, when it finally blocked the pages (Matamoros-Fernández, 2017), and even then the blockage was limited to Australia. The two cases demonstrate that the company lacks sensitivity toward Aboriginality and promotes racial favouritism under the guise of protecting freedom of speech. Facebook's laxity in handling public concerns was also evident in 2018, when it emerged that Cambridge Analytica, a political consultancy firm, had accessed the personal data of over 87 million Facebook users by posing questions to them online (Flew, 2019). The company failed to detect the breach until the data had been sold to Donald Trump's campaign and other parties and used for political purposes.

Another challenge has been an overemphasis on the fundamental right to freedom of speech. Free speech is understood as a manifestation of freedom of thought and an instrument of intellectual advancement, political life, and human development. Hate speech, unlike free speech, is objectionable because it furthers hostility and mistrust within the community. It also lowers the dignity of its victims, making it hard for them to take part in collective life because of the resulting prejudice, intimidation, contempt, and discrimination; victims live in fear of being harassed at any time. As such, stakeholders will only be able to resolve the problem of hate speech adequately after addressing the dilemma created by its coexistence with the right to freedom of expression.


Solutions to Online Hate Speech

Article 19 of the UN International Covenant on Civil and Political Rights states that everyone has the right to hold opinions without interference. However, this provision needs to be aligned with the context of freedom in the use of online space on various platforms.

Apart from addressing legal dilemmas, legislators need to close the identified gaps in the regulatory framework guiding the conduct and operation of the social media giants. The legislation should incorporate and clarify the public-interest obligations of corporations and other companies involved in digital media communication. Other issues to be addressed include ethical standards and their effectiveness in resolving questions of ethics and trust, especially given the growing digitalization of the modern world.

Social media platforms have also proposed measures that could help solve problems related to their use. For example, Rob Goldman, a Facebook executive, pointed to the need for educational, economic, and technical fixes when commenting on US concerns about the disinformation campaign run by Russians (Andrejevic, 2019). He proposed that countries teach their citizens critical thinking and digital literacy skills, noting that the Netherlands, Finland, and Sweden had already taken measures to counter misinformation.

Another set of solutions was proposed in a report published by the Data & Society Research Institute. The institute recommended offering incentives whenever a network detects fake content through fact-checking and verification services. Such incentives would make media platforms and their users more cautious about posting hate speech or harassing other users, and fact verification would help ensure that platforms act objectively whenever the public or any other party files a complaint (Andrejevic, 2019). Other measures proposed in the report were self-regulation by the main social media companies and increased media literacy among users; the public should also be empowered to pressure the companies into reconfiguring their economic incentives. Finally, the report expressed the need for a strategy to curb the spread of politically polarized misinformation disseminated by automated online systems. Suzor (2019) supported these suggestions, noting that platforms have a mediating effect on communication: company decisions have a great impact on public culture and on users' social and political lives.

Finally, Pasquale (2022) suggested revising the laws governing individuals, government, and companies. The author noted that the question should not be whether the laws guarantee freedom of information, trade secrecy, and privacy, but who benefits from these laws. Pasquale (2022) observed that some of these laws limit necessary inquiries into online harassment cases and rule out certain important investigations before they even begin.

Conclusion

While it is true that information law protects personal privacy, it has been misused by online companies to further their own interests. While ordinary users' online violations are recorded and punished, various agencies and business organizations use secret agreements to legitimize their own actions instead of acting in the interest of the public they serve. As long as such an imbalance exists, citizens will continue airing their displeasure online, since it is the channel available to them, and the cycle of problems will continue.

References

Andrejevic, M. (2019). Automated media. Routledge.

Aro, K. (2017). Online hate speech keeps police busy. Yle News. Retrieved 5 April 2022, from https://yle.fi/news/3-9713012

Dean, S. (2015). Facebook removes video showing nudity in Aboriginal culture. Mail Online. Retrieved 5 April 2022, from https://www.dailymail.co.uk/news/article-3039167/Facebook-struggles-distinguish-offensive-nudity-Aboriginal-culture-pulls-TV-trailer-featuring-indigenous-women-bare-breasts.html.

Flew, T. (2019). Platforms on trial. Intermedia, 46(2), 18-23. https://www.iicom.org/wp-content/uploads/im-july2018-platformsontrial-min.pdf

Flew, T. (2021). Regulating platforms. Polity Press.

Flew, T., & Martin, F. (2022). Digital platform regulation: Global perspectives on internet governance. Springer International Publishing.

Johnson, J. (2022, January 25). Share of adult internet users in the United States who have personally experienced online harassment as of January 2021. https://www.statista.com/statistics/333942/us-internet-online-harassment-severity/#statisticContainer

Matamoros-Fernández, A. (2017). Platformed racism: The mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930-946. https://doi.org/10.1080/1369118X.2017.1293130

Pasquale, F. (2022). The black box society: The secret algorithms that control money and information. Harvard University Press.

Suzor, N. (2019). Lawless: The secret rules that govern our lives. Cambridge University Press. 

Internet Matters. (2021). Tackling online hate and trolling. Retrieved 5 April 2022, from https://www.internetmatters.org/resources/tackling-online-hate-and-trolling/

Tomalin, M., & Ullmann, S. (2020). Tackling the problem of online hate speech. Humanities & Social Change. Retrieved 5 April 2022, from https://hscif.org/tackling-the-problem-of-online-hate-speech/