
Nowadays, it is second nature to check Facebook, Instagram, and other apps to connect with our family members, friends, colleagues, and even strangers. Increasingly, people seek fame and money by livestreaming videos (Wang, 2020, p.549), becoming influencers to gain mainstream media attention (Abidin, 2016, p.3), sponsoring products (Hughes et al., 2019, p.78), or producing clickbait material for profit (Benkler et al., 2018, p.9).
I argue that online platforms must be regulated to combat abusive posts, because harassment and hate speech can affect anybody: consider the Islamophobic attacks by influencers against Muslim communities (Pintak et al., 2021, p.4), or the attacks on women influencers, who statistically suffer more abuse than their male counterparts, especially women of ethnic and religious minorities (Cherici, 2021).
I want to point out the dangers to you, so you can protect yourself, your family and friends, and even people you do not know.

Background
The internet’s birth was touted as a harbinger of harmony and prosperity because “technology and the economics of abundance would erase social and economic inequality” (Karpf, 2018), but alas, people increasingly stay inside their own filter bubbles (Andrejevic, 2019, p.47). A filter bubble, also called an “epistemic bubble”, is a phenomenon in which differing viewpoints are simply omitted, whereas in an “echo chamber” other views are “actively discredited” (Nguyen, 2020, p.142).
However, the convenience and anonymity of the internet make it easier to bully people online, to harass them, and even to cross into hate speech territory (Matamoros-Fernández and Farkas, 2021, p.218), antagonising others for holding opposing views with language that is violent, inappropriate or outright hateful (Cinelli et al., 2021, p.10). In fact, it has been argued that Donald Trump’s Truth Social platform attracts few users precisely because everyone there holds similar views; his supporters find interactions on Truth Social unappealing simply because there is no one to argue with (McGraw and Kern, 2022).
Sinister acts such as stalking, revenge porn, non-consensual sexting, child pornography, cyberviolence and cyberbullying all occur online. Hate speech and incitements to hate crime are so pervasive in Ethiopia that their “criminality is often disregarded”, yet they can inflict real physical and mental harm (Mossie and Wang, 2020, p.2). In Japan and South Korea, cultural pressure to conform, combined with online anonymity, has produced “ijime” and “wang-ta” respectively, forms of bullying that proliferate online as cyberbullying (Toda et al., 2020, p.32). Perpetrators often disguise abusive comments by claiming they were joking or by couching them in scientific language (Flew, 2021, p.92), and indeed, free speech is often treasured more than preventing the collateral damage such speech can inflict (Bowers and Zittrain, 2021, pp.3-4).
Consequently, social media platforms are under increasing pressure to moderate content and ban perpetrators, as Facebook did after the 2016 federal election in the US (Garrett and Poulsen, 2019, p.241). The volume of data generated is overwhelming for any platform, so they now employ professional moderators alongside community volunteer moderators, who understand their respective communities and can police posts more effectively (Malinen, 2021, p.74), while also allowing anyone to flag and report offensive material (Crawford and Gillespie, 2016, p.412). However, although users can flag offensive posts, bystander apathy, cynicism about the state of affairs in America, and the American view of free speech mean that many hate-speech posts still go unflagged, especially anti-LGBTIQ or sexist attacks (Guo and Johnson, 2020, p.9).
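To make the flagging workflow concrete, here is a minimal sketch of threshold-based triage, assuming a simple rule that several independent flags push a post into a human review queue. The names, threshold and data structures are hypothetical illustrations of mine, not any platform’s actual system.

```python
from dataclasses import dataclass, field
from typing import List

ESCALATION_THRESHOLD = 3  # hypothetical: independent flags needed before human review

@dataclass
class Post:
    post_id: str
    text: str
    flags: List[str] = field(default_factory=list)  # reasons given by flaggers

review_queue: List[Post] = []

def flag_post(post: Post, reason: str) -> None:
    """Record a user flag; escalate to the human queue once past the threshold."""
    post.flags.append(reason)
    if len(post.flags) >= ESCALATION_THRESHOLD and post not in review_queue:
        review_queue.append(post)

# Usage: three independent flags push the post to human moderators.
post = Post("p1", "an abusive comment")
for reason in ["harassment", "hate speech", "harassment"]:
    flag_post(post, reason)
print([p.post_id for p in review_queue])  # ['p1']
```

The point of the threshold is scale: human moderators only see posts that several users have independently objected to, which is one way platforms cope with the volume problem described above.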
Positive moderation processes are possible, because active moderators can enforce site rules and provide support at the same time, as highlighted by the case study of Supporting Our Valued Adolescents (SOVA), an online adolescent mental health intervention project (Windler et al., 2019, p.9). Additionally, people should aspire to become “opinion leaders” by honing fact-checking skills, instead of risking misinformation by relying on others to provide information and news (Dubois et al., 2020, p.11).
Hate Speech Detection Algorithms
Another resource is the hate speech detection algorithm, but deep learning models are imperfect: they suffer from “user over-fitting”, and there is little research into their cross-lingual ability (Arango et al., 2022, p.9). Attackers can also evade them by introducing “typos, change word boundaries or add innocuous words to the original hate speech” (Gröndahl et al., 2018, p.1). Furthermore, while platforms and governments can censor content, users can circumvent these measures, as highlighted by the limited success online platforms had in removing posts about a terror attack in New Zealand: users evaded the manual checks by posting the material as photos instead of text (MacAvaney et al., 2019, p.13).
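To illustrate, here is a toy sketch of how the perturbations Gröndahl et al. describe slip past a naive detector. The blocklist, messages and matching rule are placeholders of my own; real systems use learned models rather than keyword lists, but the same tricks transfer.

```python
# Toy demonstration of hate-speech evasion tactics (Gröndahl et al., 2018).
BLOCKLIST = {"scum"}  # placeholder for a real lexicon of abusive terms

def naive_detector(message: str) -> bool:
    """Flag a message when any whitespace-separated token is blocklisted."""
    return any(token in BLOCKLIST for token in message.lower().split())

print(naive_detector("you are scum"))   # True:  caught
print(naive_detector("you are sc*m"))   # False: a typo slips through
print(naive_detector("you are s cum"))  # False: changed word boundary
print(naive_detector("you arescum"))    # False: removed word boundary

# Gröndahl et al. also show that appending an innocuous word such as
# "love" flips the predictions of learned classifiers, even though a
# literal keyword match like this one would still fire.
```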
Encouragingly, hate speech detection can identify vulnerable communities, as shown by an analysis of Facebook posts in the Amharic language (Mossie and Wang, 2020, p.14), or by the MoH (Map Only in Hindi) study, which used a “word-based transliteration pipeline” to research code-switching between Hindi and English in online speech, including hate speech, though the approach still requires refinement to minimise errors (Sharma et al., 2022, p.20). Additionally, Twitch employs both AI and human moderators to monitor chats for hateful speech, but the AI can miss emotes that savvy users deploy to express toxicity, while, conversely, human moderators cannot monitor large volumes of chat in real time (Kim et al., 2022, p.1).
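A hybrid pipeline of the kind Kim et al. motivate might look like the following minimal sketch, where an automated text score handles clear-cut cases and messages containing emotes known to carry toxic meanings are escalated to human reviewers. The scoring heuristic, emote list and threshold are all my own assumptions for illustration, not Twitch’s actual system.

```python
from typing import List

# Emotes reported in research as sometimes carrying toxic meanings;
# listed here purely for illustration.
TOXIC_EMOTES = {"TriHard", "cmonBruh"}

def text_toxicity_score(message: str) -> float:
    """Placeholder for a learned model; here, a crude keyword heuristic."""
    return 1.0 if "trash" in message.lower() else 0.0

def moderate(message: str, human_queue: List[str]) -> str:
    tokens = set(message.split())
    if text_toxicity_score(message) > 0.8:
        return "auto-removed"         # clear-cut case: the model decides
    if tokens & TOXIC_EMOTES:
        human_queue.append(message)   # emote context is too ambiguous for AI
        return "sent to human review"
    return "allowed"

queue: List[str] = []
print(moderate("this streamer is trash", queue))  # auto-removed
print(moderate("cmonBruh seriously?", queue))     # sent to human review
```

The design reflects the trade-off in the paragraph above: automation for volume, humans for the context-dependent cases that models miss.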
Governments, Stakeholders and Platforms’ Roles
So what about governments? Do they play any role in governing the platforms and the content posted on their apps? In the US, platforms were previously classified as carriers not liable for content, under Section 230 of the Communications Decency Act of 1996 (CDA) (Nurik, 2019, p.2879). However, let us not ignore the platforms’ resistance to regulation, preferring self-regulation so as to keep their ‘safe harbour’ status (Bossio et al., 2022, p.3). These days, platforms blur the boundaries between the types of businesses they monopolise, and that very fluidity, together with their power to disrupt traditional business models, demands the regulation they resist (Van Dijck, 2018, p.19); their expansion into Artificial Intelligence and the Internet of Things makes further regulation all the more necessary (Barwise and Watkins, 2018, p.24).
Additionally, platforms deny liability and responsibility by claiming non-partisanship and taking a hands-off approach in the name of openness (Gillespie, 2017, pp.256-257). In the process, they allow toxic masculinity to flourish: women are attacked online with threats of sexual and physical violence and unsolicited pornographic images, while perpetrators operate freely without consequence (Victor, 2022). Yet platforms are not neutral, as they set the rules of engagement for everything users do on their services, including suspending the profiles and removing the content of political activists whose names read as offensive in English (Suzor, 2019, p.12). Platforms are also inconsistent in self-regulation and can perpetuate a white colonialist social construct, as was evident when Facebook blocked an Australian Broadcasting Corporation program containing traditional nudity in 2015 (Matamoros-Fernández, 2017, p.930).
Increasingly, pushed by users, stakeholders and legislation, social media companies have started to police and censor offensive material (Malinen, 2021, p.74). Platforms claim they are not traditional media companies and are thus exempt from regulation, but the public demands oversight, so co-regulation, soft law and cooperation with other stakeholders may be advantageous (Flew, 2018, p.28). Free speech should be balanced and regulated to ensure civil discussion, which requires governments to work with the platforms to develop codes of practice and other regulations (Antonetti and Crisafulli, 2021, p.1645).
In response, the platforms created the Global Internet Forum to Counter Terrorism and the Facebook Oversight Board to show the public and governments that they are serious about self-regulation, but these bodies, too, need to be examined and studied (Gorwa, 2019, p.9). Indeed, the secrecy of the algorithms the platforms employ increases the urgency of regulating them, for the sake of transparency and of uncovering black-box practices, so that we understand what we are getting ourselves into when we use their services (Pasquale, 2015, p.8). Even whistleblower Christopher Wylie, who worked with Cambridge Analytica and in 2018 revealed how Facebook users’ data was harvested, commented three years after his revelations that “nothing has changed” (Milmo, 2021).
On another matter, people attack others online because they feel invincible, believing they cannot be found or identified (Kanetsuna and Ishihara, 2020, p.115). For example, the Finnish Ylilauta online forum for hikikomori (a Japanese term for extremely socially withdrawn people), where users engage in anonymous interactions, has been labelled a breeding ground for hate speech, alt-right extremism and misogynistic views (Vainikka, 2020, p.597), and in Japan, the alt-right netto uyoku subculture has likewise turned to radicalisation, conspiracy theories and online hate speech towards Korea and China (Hermansson et al., 2020, p.209). Nevertheless, governments and institutions must be active in combating toxic online behaviours. In South Korea, for example, there are apps such as 117 Chat, developed and used by the Metropolitan Police Agency to monitor and report bullying, and legislation such as the “Youth Protection Act”, which serves as a framework to protect children from bullying (Oh, 2020, p.87).

Interventions
Additionally, in Australia, the eSafety Commissioner was established through the Enhancing Online Safety Act (2015) to complement educational and cyber awareness programs, showing that parliaments are increasingly regulating negative online behaviours, including cyberbullying (Page Jeffery, 2021, p.2). The eSafety Commissioner can compel social media platforms and websites to remove offensive material, including material involving children, within 24 hours or risk penalties (Taylor, 2022). Such powers would help remove material like the anorexia-related content shown to children on Instagram and Facebook, whose algorithms can inflict mental health harm while Facebook does nothing about it, as whistleblower Frances Haugen testified to US senators (Milmo, 2021).
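As a back-of-the-envelope illustration of how such a 24-hour window might be tracked, here is a small sketch. The notice structure and breach check are hypothetical simplifications of mine, not the legislation’s actual terms.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

TAKEDOWN_WINDOW = timedelta(hours=24)  # assumption modelled on the 24-hour notice

def in_breach(notice_issued: datetime, removed_at: Optional[datetime]) -> bool:
    """A platform breaches the notice if the content outlives the window."""
    deadline = notice_issued + TAKEDOWN_WINDOW
    effective = removed_at or datetime.now(timezone.utc)  # still up counts as now
    return effective > deadline

issued = datetime(2022, 1, 10, 9, 0, tzinfo=timezone.utc)
print(in_breach(issued, datetime(2022, 1, 10, 20, 0, tzinfo=timezone.utc)))  # False: removed in time
print(in_breach(issued, datetime(2022, 1, 11, 12, 0, tzinfo=timezone.utc)))  # True: deadline missed
```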
Regulation and oversight are necessary when self-regulation is deficient, and when platforms keep apologising and promising to do better while livestreams of murders, sexual violence and other scandals continue ad nauseam (Flew and Gillett, 2021, p.232). Furthermore, platforms may simply be acting to ensure their own survival rather than in the public interest: it was reported that Meta, Facebook’s parent company, hired the firm Targeted Victory to run campaigns using “both genuine concerns and unfounded anxieties to cast doubt” on its competitor TikTok, while Mark Zuckerberg invoked TikTok to deflect attention from Facebook’s monopoly on the social media market (Lorenz and Harwell, 2022).
Let us remember that real-world interventions to combat prejudice, hate speech, cyberbullying and other issues must happen through teaching critical thinking skills, running public media campaigns against negative online behaviours, flagging posts, and even shutting down profiles that peddle toxic content (Miškolci et al., 2020, p.130). Cyberbullying can cause suicide ideation, so the risk of harm and death is real: a case study in Hong Kong found that adolescent cyberbullying victims were 2.48 times more at risk of suicide ideation than peers who did not experience cyberbullying (Chang et al., 2019, p.271). A similar study in Japan likewise found cyberbullying to be a risk factor for depression and suicide: although only 1.8% of Japanese adolescents reported experiencing cyberbullying, 20% of those victims had attempted suicide (Nagamitsu et al., 2020, p.5).
The real-world implication is that parents, teachers and peers should intervene in a timely manner and monitor for signs of victimisation, instead of ignoring or downplaying cyberbullying and other forms of online abuse (Bai et al., 2021, p.7). Furthermore, interventions should be contextualised, because counselling may be ineffective unless underlying issues are addressed, as highlighted by cases of social media workplace bullying and the abuse of university academics (Farley and Coyne, 2018, p.176).
Additionally, the patriarchal social constructs, toxic masculinity and gendered violence that overwhelmingly affect women and minorities must be addressed. For instance, victims of revenge porn are often women, who are labelled irresponsible for sending sexts, while those who share the sexts with a wider audience remain anonymous and unaccountable; either way, the women are made to bear ultimate responsibility for having sent the sexts in the first place (Pavón-Benítez et al., 2021, p.12). Consequently, reforms to handle sexting and revenge porn should be legislated, though such reforms in Canada and the US still face challenges over enforcement mechanisms, who should be prosecuted, and the age of the people involved (Lee and Darcy, 2020, p.571). Above all, policies should form an umbrella under which everyone can coexist in harmony in a world where the internet is increasingly indispensable (Weber, 2010, p.23).
Parting Thoughts
Be vigilant, care for each other online, and flag and report offensive material. We must learn how platforms work and demand regulation, as self-regulation is not enough to guarantee transparency, especially while platforms become ever more vital to everyday life. Slowly, regulations and tools are coming to fruition, such as the eSafety Commissioner, the 117 Chat app and public awareness campaigns.
Lastly, we must prioritise tackling real-world structural issues such as gendered violence, racism, prejudice and partisanship. Certainly, we must protect ourselves and each other: nobody deserves vitriol from abusive trolls and anonymous posters, because online hate arises from real-world events and, in turn, causes real-world injury.
So what do you think? How can we stop online hate speech? Share your thoughts in the comments below.
References:
Abidin, C. (2016). “Aren’t These Just Young, Rich Women Doing Vain Things Online?”: Influencer Selfies as Subversive Frivolity. Social Media + Society, 2(2), 205630511664134–. https://doi.org/10.1177/2056305116641342
Andrejevic, M. (2019), ‘Automated Culture’, in Automated Media. London: Routledge, pp. 45-72.
Antonetti, P., & Crisafulli, B. (2021). “I will defend your right to free speech, provided I agree with you”: How social media users react (or not) to online out‐group aggression. Psychology & Marketing, 38(10), 1633–1650. https://doi.org/10.1002/mar.21447
Arango, A., Pérez, J., & Poblete, B. (2022). Hate speech detection is not as easy as you may think: A closer look at model validation (extended version). Information Systems (Oxford), 105, 101584–. https://doi.org/10.1016/j.is.2020.101584
Bai, Q., Huang, S., Hsueh, F.-H., & Zhang, T. (2021). Cyberbullying victimization and suicide ideation: A crumbled belief in a just world. Computers in Human Behavior, 120, 106679–. https://doi.org/10.1016/j.chb.2021.106679
Barwise, P. & Watkins, L. (2018) ‘The Evolution of Digital Dominance: How we got to GAFA’, in M. Moore & D. Tambini (eds.), Digital Dominance: The Power of Google, Amazon, Facebook, and Apple. Oxford: Oxford University Press, pp. 21-4
Benkler, Y., Faris, R. & Roberts. H. (2018) Network Propaganda: Manipulation, Disinformation, and Radicalization. New York: Oxford University Press, pp. 3-43.
Bossio, D., Flew, T., Meese, J., Leaver, T., & Barnet, B. (2022). Australia’s News Media Bargaining Code and the global turn towards platform regulation. Policy and Internet. https://doi.org/10.1002/poi3.284
Bowers, J. & Zittrain, J. (2021). Answering impossible questions: Content governance in an age of disinformation. Harvard Kennedy School Misinformation Review, 1(1), pp. 1-8.
Chang, Q., Xing, J., Ho, R. T., & Yip, P. S. (2019). Cyberbullying and suicide ideation among Hong Kong adolescents: The mitigating effects of life satisfaction with family, classmates and academic results. Psychiatry Research, 274, 269–273. https://doi.org/10.1016/j.psychres.2019.02.054
Cherici, S. (2021) Mirroring Bias: Online Hate Speech and Polarisation. Green European Journal. Retrieved from: https://www.greeneuropeanjournal.eu/mirroring-bias-online-hate-speech-and-polarisation/
Cinelli, M., Pelicon, A., Mozetič, I., Quattrociocchi, W., Novak, P. K., & Zollo, F. (2021). Dynamics of online hate and misinformation. Scientific Reports, 11(1), 22083–22083. https://doi.org/10.1038/s41598-021-01487-w
Crawford, K., & Gillespie, T. (2016). What is a flag for? Social media reporting tools and the vocabulary of complaint. New Media & Society, 18(3), 410–428. https://doi.org/10.1177/1461444814543163
Dubois, E., Minaeian, S., Paquet-Labelle, A., & Beaudry, S. (2020). Who to Trust on Social Media: How Opinion Leaders and Seekers Avoid Disinformation and Echo Chambers. Social Media + Society, 6(2), 205630512091399–. https://doi.org/10.1177/2056305120913993
Farley, S. and Coyne, I. (2018). Intervening against workplace cyberbullying. In Cassidy, W., Faucher, C. and Jackson, M. (eds) (2018) Cyberbullying at University in International Contexts. Routledge, pp. 173-177.
Flew, T. (2018) ‘Platforms on Trial’, Intermedia 46(2), pp. 18-23. Retrieved from: https://www.iicom.org/wp-content/uploads/im-july2018-platformsontrial-min.pdf
Flew, T. (2021). Regulating platforms. Cambridge, UK: Polity.
Flew, T., & Gillett, R. (2021). Platform policy: Evaluating different responses to the challenges of platform power. Journal of Digital Media & Policy, 12(2), 231–246. https://doi.org/10.1386/jdmp_00061_1
Garrett, R. K., & Poulsen, S. (2019). Flagging Facebook Falsehoods: Self-Identified Humor Warnings Outperform Fact Checker and Peer Warnings. Journal of Computer-Mediated Communication, 24(5), 240–258. https://doi.org/10.1093/jcmc/zmz012
Gillespie, T. (2017) ‘Governance by and through Platforms’, in J. Burgess, A. Marwick & T. Poell (eds.), The SAGE Handbook of Social Media, London: SAGE, pp. 254-278.
Gorwa, R. (2019) ‘The platform governance triangle: Conceptualising the informal regulation of online content’, Internet Policy Review, 8(2).
Gröndahl, T., Pajola, L., Juuti, M., Conti, M., & Asokan, N. (2018). All You Need is “Love”: Evading Hate Speech Detection. Proceedings of the 11th ACM Workshop on Artificial Intelligence and Security, 2–12. ACM. https://arxiv.org/pdf/1808.09115.pdf
Guo, L., & Johnson, B. G. (2020). Third-Person Effect and Hate Speech Censorship on Facebook. Social Media + Society, 6(2), 205630512092300–. https://doi.org/10.1177/2056305120923003
Hermansson, P., Lawrence, D., Mulhall, J., & Murdoch, S. (2020). Japan and the Alternative Right. In The International Alt-Right (1st ed., pp. 207–217). Routledge. https://doi.org/10.4324/9780429032486-15
Hughes, C., Swaminathan, V., & Brooks, G. (2019). Driving Brand Engagement Through Online Social Influencers: An Empirical Investigation of Sponsored Blogging Campaigns. Journal of Marketing, 83(5), 78–96. https://doi.org/10.1177/0022242919854374
Kanetsuna, T. and Ishihara, K. (2020). Challenging moral disengagement caused by anonymity: Japanese preventive practices. In Toda, Y. and Oh, I. (eds) (2020) Tackling Cyberbullying and Related Problems – Innovative Usage of Games, Apps and Manga, pp. 103-117. Routledge.
Karpf, D. (2018) ’25 Years of WIRED Predictions: Why the Future Never Arrives’. WIRED. Retrieved from: https://www.wired.com/story/wired25-david-karpf-issues-tech-predictions/
Kim, J., Wohn, D. Y., & Cha, M. (2022). Understanding and identifying the use of emotes in toxic chat on Twitch. Online Social Networks and Media, 27, 100180–. https://doi.org/10.1016/j.osnem.2021.100180
Lee, J. R., & Darcy, K. M. (2020). Sexting: What’s Law Got to Do with It? Archives of Sexual Behavior, 50(2), 563–573. https://doi.org/10.1007/s10508-020-01727-6
Lorenz, T. and Harwell, D. (2022) Facebook paid GOP firm to malign TikTok. The Washington Post. Retrieved from: https://www.washingtonpost.com/technology/2022/03/30/facebook-tiktok-targeted-victory/
MacAvaney, S., Yao, H.-R., Yang, E., Russell, K., Goharian, N., & Frieder, O. (2019). Hate speech detection: Challenges and solutions. PloS One, 14(8), e0221152–e0221152. https://doi.org/10.1371/journal.pone.0221152
Malinen, S. (2021). Boundary control as gatekeeping in facebook groups. Media and Communication (Lisboa), 9(4), 73–81. https://doi.org/10.17645/mac.v9i4.4238
Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), pp. 930-946.
Matamoros-Fernández, A., & Farkas, J. (2021). Racism, Hate Speech, and Social Media: A Systematic Review and Critique. Television & New Media, 22(2), 205–224. https://doi.org/10.1177/1527476420982230
McGraw, M. and Kern, R. (2022). MAGA-world fails to flock to Truth Social. Politico. Retrieved from: https://www.politico.com/news/2022/03/09/trumps-truth-social-fails-to-make-a-splash-in-maga-world-00015427
Milmo, D. (2021). Frances Haugen takes on Facebook: the making of a modern US hero. The Guardian Australia. Retrieved from: https://www.theguardian.com/technology/2021/oct/10/frances-haugen-takes-on-facebook-the-making-of-a-modern-us-hero
Miškolci, J., Kováčová, L., & Rigová, E. (2020). Countering Hate Speech on Facebook: The Case of the Roma Minority in Slovakia. Social Science Computer Review, 38(2), 128–146. https://doi.org/10.1177/0894439318791786
Mossie, Z., & Wang, J.-H. (2020). Vulnerable community identification using hate speech detection on social media. Information Processing & Management, 57(3), 102087–. https://doi.org/10.1016/j.ipm.2019.102087
Nagamitsu, S., Mimaki, M., Koyanagi, K., Tokita, N., Kobayashi, Y., Hattori, R., … Croarkin, P. E. (2020). Prevalence and associated factors of suicidality in Japanese adolescents: Results from a population-based questionnaire survey. BMC Pediatrics, 20(1), 467–467. https://doi.org/10.1186/s12887-020-02362-9
Nguyen, C. T. (2020). Echo Chambers and Epistemic Bubbles. Episteme, 17(2), 141–161. https://doi.org/10.1017/epi.2018.32
Nurik, C. (2019). “Men Are Scum”: Self-Regulation, Hate Speech, and Gender-Based Censorship on Facebook. International Journal of Communication (Online), 2878–2898.
Oh, I. (2020). The application of anti-bullying smartphone apps for preventing bullying in South Korea. In Toda, Y. and Oh, I. (eds) (2020) Tackling Cyberbullying and Related Problems – Innovative Usage of Games, Apps and Manga, pp. 87-102. Routledge.
Page Jeffery, C. (2021). “[Cyber]bullying is too strong a word…”: Parental accounts of their children’s experiences of online conflict and relational aggression. Media International Australia Incorporating Culture & Policy, 1329878–. https://doi.org/10.1177/1329878X211048512
Pasquale, F. (2015). ‘The Need to Know’, in The Black Box Society: the secret algorithms that control money and information. Cambridge: Harvard University Press, pp.1-18.
Pavón-Benítez, L., Romo-Avilés, N., & Tarancón Gómez, P. (2021). “In my village everything is known”: sexting and revenge porn in young people from rural Spain. Feminist Media Studies, 1–17. https://doi.org/10.1080/14680777.2021.1935290
Sharma, A., Kabra, A., & Jain, M. (2022). Ceasing hate with MoH: Hate Speech Detection in Hindi–English code-switched language. Information Processing & Management, 59(1), 102760–. https://doi.org/10.1016/j.ipm.2021.102760
Suzor, N. P. (2019). ‘Who Makes the Rules?’. In Lawless: the secret rules that govern our lives. Cambridge, UK: Cambridge University Press, pp. 10-24.
Taylor, J. (2022). How will new laws help stop Australians being bullied online? The Guardian Australia. Retrieved from: https://www.theguardian.com/media/2022/jan/23/how-will-new-laws-help-stop-australians-being-bullied-online
Toda, Y., Oh, I., Tsuruta, T., & Kanetsuna, T. (2020). Theories and research on traditional/cyberbullying and Internet-mediated problems. In Toda, Y. and Oh, I. (eds) (2020) Tackling Cyberbullying and Related Problems – Innovative Usage of Games, Apps and Manga, pp. 17-40. Routledge.
Vainikka, E. (2020). The anti-social network: Precarious life in online conversations of the socially withdrawn. European Journal of Cultural Studies, 23(4), 596–610. https://doi.org/10.1177/1367549418810075
Van Dijck, J. (2018) The Platform Society as a Contested Concept. In Van Dijck, J., Poell, T. & de Waal, M. (eds) (2018) The Platform Society. Oxford: Oxford University Press, pp. 5-32
Victor, D. (2022). Cesspool of misogyny: Instagram accused of failing high-profile women. Sydney Morning Herald. Retrieved from: https://www.smh.com.au/technology/cesspool-of-misogyny-instagram-accused-of-failing-high-profile-women-20220407-p5abj2.html
Wang, S. (2020). Chinese gay men pursuing online fame: erotic reputation and internet celebrity economies. Feminist Media Studies, 20(4), 548-564. https://doi.org/10.1080/14680777.2020.1754633
Weber, R. H. (2010). Shaping Internet Governance: Regulatory Challenges. Berlin, Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-04620-9
Windler, C., Clair, M., Long, C., Boyle, L., & Radovic, A. (2019). Role of moderators on engagement of adolescents with depression or anxiety in a social media intervention: Content analysis of web-based interactions. JMIR Mental Health, 6(9), e13467–e13467. https://doi.org/10.2196/13467
Image references:
“The cookies of the internet” by Kalexanderson (Alexanderson, K.) (2010) is marked with CC BY-NC-SA 2.0. Retrieved from: https://wordpress.org/openverse/image/fe808e6f-340d-4935-bf90-40c6f718af6e, https://www.flickr.com/photos/45940879@N04/5277334834
“Free Speech in Gaza” by HonestReporting.com (2014) is marked with CC BY-SA 2.0. Retrieved from: https://www.flickr.com/photos/66635826@N04/14629183549, https://wordpress.org/openverse/image/84bca7aa-8c10-44a2-9bdb-b61379bb5cf4/
“stop light halloween costume” by woodleywonderworks (2011) is marked with CC BY 2.0. Retrieved from: https://www.flickr.com/photos/73645804@N00/6301078974, https://wordpress.org/openverse/image/d12942bd-a5e6-457c-be26-65e2710447a3