Dear readers of the blog,
I hope you’ve been having a fine day exploring the wild world of the web. It’s a place to chime in, talk to people, and see different worlds, among so many other possibilities for interaction. However, the good parts of the net come with bad ones, too. Especially if you choose the life of a poster (someone who publishes content online), you will be familiar with how overwhelming the internet can be. For this entry, I’d like to bring your attention to a critical phenomenon on the web: hate speech.
This topic has gathered a fair amount of attention across many areas, and rightfully so: it is an issue that deserves critical assessment of how and why it happens, and what we are going to do about it going forward. This blog post will not attempt to solve every problem around the theme; the goal is to give readers an introductory framework for how hate speech functions and manifests on digital platforms, as well as the current responses from different stakeholders (as online discourse moves fast by the minute). Looking at the landscape of the social media site Twitter, the analysis will focus on how hate speech is enabled through the platform’s architecture and the stories that unravel with it.
Welcome to the public sphere
Twitter is known for its fast-paced microblogging: you can quickly publish a simple post limited to 280 characters, which makes it a nice little forum for saying something short and straight to the point. Many of us internet users are familiar with social media like Twitter as a place to keep in touch with loved ones, find new social connections, build a career, and so on. Unlike the physical world, where striking up a conversation with a stranger is bound to a series of cultural rules, Twitter gives you easy access to give anyone a word (as long as their profile is set to public). You can easily go through someone’s profile, where they post their thoughts and opinions of the day, give it a little interaction, and (hopefully) have your response reciprocated.
The platform has attracted a variety of social profiles, from celebrities, politicians, journalists, and scholars to ordinary people from all walks of life, almost like a miniature replication of the physical world within our digital devices. Everyone co-exists on this platform, although the atmosphere is not always peaceful. Locals of the Twitterverse often talk about the “discourse”, a term that once belonged to the ivory tower of academia but has now escaped to Twitter, where users participate in public conversations on thought-provoking themes on a daily basis.

Ryan Mac, a tech reporter for The New York Times, joking about how there will never be a good tweet on the platform. (@RMac18/Twitter)
Twitter discourse is often tiresome to many, as the site’s character limit might not leave the best space to explain yourself in detail. Knowing the limitations of the platform, arguments in Twitter discourse tend to serve the purpose of venting. Whatever the reason one engages in a Twitter conflict, whether out of good intention or just to blow off some steam, a consensus can rarely be reached, and the discourse is always there to stay.
Entering the storm
For users who live out loud and in public on Twitter, the possibility of going viral always persists, for better or worse. This access to digital publishing has revolutionised the way we engage with mass media: there are few barriers to making ourselves known, unlike in the past eras of newspaper, radio, and television. Social media has changed how we convey our narratives, but it has also left us vulnerable to the power of the crowd.
With its viral character and highly interactive mechanics, Twitter is a familiar space for media workers, especially journalists, to promote their work and seek out news tips and scoops from the audience. However, this often leaves them in a sensitive position when dealing with waves of harassment and hostility in their line of work. One example is the case of Taylor Lorenz, an American journalist who reports on tech culture and has suffered through a series of smear campaigns and hate speech on Twitter and other social media outlets. These campaigns targeted not just her journalistic work but also her personal life, and extended to whoever is affiliated with her, including her online followers. The waves of hate speech and violent threats she had to endure caused not only psychological distress but also affected the way journalism and media organisations function in the public landscape.

An interview segment on how journalists, especially women, are overrepresented among targets of online hate. (Meet the Press/Twitter)
More concerningly, Taylor’s identity as a woman makes her a prime target for online misogynistic abuse, a prevalent issue for women on the internet as a whole. There have been numerous reports over the years on how misogynistic attacks and abuse have made the space unsafe for women, yet the issue persists to this day. Such attacks are not limited to Twitter; they also spread across platforms such as 4chan and Reddit, and the toxicity of such technocultures is often hard to contain due to the fluidity and connectedness of internet platforms.
The architecture of aggression
Platform design and architecture can be a crucial factor in shaping users’ behaviour and interaction patterns. Beyond Twitter’s constraints on short responses and its hyper-visible messaging, the site’s algorithm can be manipulated to incite aggressive engagement, making controversies and conflicts the driving force of the discourse machine on the timeline. The incentive for such engagement is often explained through the theory of surveillance capitalism, coined by Shoshana Zuboff, in which the “Big Other” of tech platforms (such as Twitter) prompts users to engage with their technology at all costs in order to mine the data and capital revenues that come from the service.
By exploiting users’ psychological responses to make them interact at all costs, the site has made its regulars aware of its mental health toll, and users develop personal strategies to stay safe and sane. Furthermore, combating hate speech and online harassment doesn’t stop with individual solutions: online communities push back against hateful messages through flagging or counterspeech when such attacks happen. Still, hate speech remains a thorny issue that requires macro intervention beyond its users’ capabilities.
Duty of care
The subject of hate speech has gathered a fair amount of attention from both the offline world and tech companies, who are wary about the future of platform governance. The digital space is no longer a world separate from our reality; it is becoming part of it. In the end, behind every online account is a person sitting at a screen, consuming all that information on the receiving end. It’s the human aspect of technology that we might underestimate at times.
Despite its known harms, measuring and even defining hate speech has been a gruelling debate among scholars, policymakers, and industry experts. One policy proposal is for platforms to adopt the legal framework of a duty of care, under which they take responsibility for moderating and overseeing their users. In the meantime, several countries, including Australia, New Zealand, and the UK, have proposed legislation focused on online safety strategies, with the idea of extending established offline law into the digital landscape. However, this adoption won’t be straightforward, as there are additional concerns about how categorisation and regulation are to be decided and applied by tech companies. Building a legal framework for monitoring speech can be an existential challenge, as there is a need to balance personal expression against collective sustainability. Hate speech as a phenomenon has raised questions across various issues such as freedom of speech, national security, and the values of Western democracy.
From digital to physical
There is no denying the complicated landscape of social media and its interaction with our physical world as things progress. Even though there has been general skepticism about how common hate speech attacks are for the online population at large, there is no doubt that they are often amplified and exposed to users. Studies have suggested that such attacks tend to draw as much attention as possible by aiming at public figures to gain traction beyond their dark corners, which explains how media figures such as Taylor Lorenz can become easy targets. The ramifications of hate speech targeting marginalised identity groups have real effects: discrimination, offline aggression, and the undermining of democratic trust, harming victims both psychologically and physically. These online issues are part of the real-life dangers of discrimination, only now spreading beyond our known boundaries.
The complex network of hate, after all, is a fundamentally human issue that technology only makes more visible and well-recorded. Indeed, our ongoing efforts to protect the vulnerable and prevent harm through existing social structures like schools, hospitals, and the law are hard enough to manage, and the online sphere has only emphasised the need to care for each other better. Over its last two decades, the online world has gone from a hidden place for a few people in their garage to a myriad of structures scaled beyond neighbourhoods, cities, and nations. The world wide web, or as I often like to say, the world wild web, is in need of solving contentious issues of violence like these. It won’t be easy, but changes are surely underway as we scroll through this newfound cyber reality.
*As I write this blog entry, the story around Twitter and journalists is still evolving. The New York Times issued a new social media policy that advises journalists to limit their time on the platform. To no one’s surprise, publishing agencies and the media corner of Twitter are wildly giving out resolutions, thread analyses, and (my favourite part) inside jokes to ride along with the discourse of online harassment and hate speech.
________________________________________________________
Reference:
Baker, S. A., Wade, M., & Walsh, M. J. (2020). The challenges of responding to misinformation during a pandemic: content moderation and the limitations of the concept of harm. Media International Australia, 177(1), 103–107. https://doi.org/10.1177/1329878X20951301
Castaño-Pulgarín, S. A., Suárez-Betancur, N., Vega, L. M. T., & López, H. M. H. (2021). Internet, social media and online hate speech. Systematic review. Aggression and Violent Behavior, 58, 101608. https://doi.org/10.1016/j.avb.2021.101608
Dreyfuss, E. (2022, March 4). What the Harassment of Journalist Taylor Lorenz Can Teach Newsrooms. Retrieved from https://mediamanipulation.org/research/what-harassment-journalist-taylor-lorenz-can-teach-newsrooms
Gelber, K. (2021, July 14). A better way to regulate online hate speech: require social media companies to bear a duty of care to users. Retrieved from https://theconversation.com/a-better-way-to-regulate-online-hate-speech-require-social-media-companies-to-bear-a-duty-of-care-to-users-163808
Hess, A. (2017, June 14). Why Women Aren’t Welcome on the Internet. Retrieved from https://psmag.com/social-justice/women-arent-welcome-internet-72170
Holloway, D. (2019, June 24). Explainer: what is surveillance capitalism and how does it shape our economy? Retrieved from https://theconversation.com/explainer-what-is-surveillance-capitalism-and-how-does-it-shape-our-economy-119158
Johnson, A. J., & Cionea, I. A. (2020). An Exploratory Mixed-Method Analysis of Interpersonal Arguments on Twitter. Twitter, the Public Sphere, and the Chaos of Online Deliberation, 205–231. https://doi.org/10.1007/978-3-030-41421-4_9
Kunst, M., Porten-Cheé, P., Emmer, M., & Eilders, C. (2021). Do “Good Citizens” fight hate speech online? Effects of solidarity citizenship norms on user responses to hate comments. Journal of Information Technology & Politics, 18(3), 258–273. https://doi.org/10.1080/19331681.2020.1871149
Massanari, A. (2016). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
Meet the Press. (2022, April 1). 1 in 3 women under the age of 35 have experienced harassment online, including female journalists. [Video file]. Retrieved from https://twitter.com/MeetThePress/status/1509957102860738563
Perlberg, S. (2022, April 7). LEAKED MEMO: The New York Times has issued a Twitter “reset,” urging reporters to “meaningfully reduce” how much time they spend on the platform. Retrieved from https://www.businessinsider.com/new-york-times-issues-twitter-reset-for-reporters-2022-4?international=true&r=US&IR=T
@RMac18. (2022, April 7). How to do good tweets: Step 1. Never tweet. [Tweet]. Retrieved from https://twitter.com/RMac18/status/1512098586737070082?s=20&t=BrmvgQsXMEfrDuWGvPBg9w
Siegel, A. (2020). Online Hate Speech. Social Media and Democracy, 56–88. https://doi.org/10.1017/9781108890960
Walsh, M. J., & Baker, S. A. (2021, August 30). Twitter’s design stokes hostility and controversy. Here’s why, and how it might change. Retrieved from https://theconversation.com/twitters-design-stokes-hostility-and-controversy-heres-why-and-how-it-might-change-166555
Woods, L., & Perrin, W. (2021). Obliging Platforms to Accept a Duty of Care. In M. Moore & D. Tambini (Eds.), Regulating Big Tech: Policy Responses to Digital Dominance (pp. 93–109). Oxford University Press.