Introduction
Over the past two decades, major social media platforms such as Facebook, Twitter and YouTube have grown rapidly, gathering billions of active users. Platforms of this unprecedented scale have created a new dimension of the public sphere, allowing people all over the world to express their opinions and exchange views on events within and between countries, from anywhere and at any time (Smith & Niker, 2021). Yet these platforms have also become a hotbed for the spread of hate, abuse and extremist speech. The rise in prosecutions for online hate crimes supports this assertion: in the UK, 1,209 people were convicted under the Communications Act in 2014, compared with 143 in 2004. In addition, YouTube reported a 25% year-on-year increase in ‘flagged’ content on its platform (House of Commons Home Affairs Committee, 2017). Carl Miller, of the think tank Demos, reached an even more dire conclusion: hate and extremist speech is growing in tandem with the exponential growth of social media as a whole.
Despite this, the alarming growth of such speech is neither monitored nor controlled, even when the speech or its dissemination is deemed illegal. This is partly due to governments’ lack of legislation for the internet as a ‘fifth space’, and to regulators’ inability to enforce what law exists. Yet can we deny that the primary duty of care should fall on the world’s biggest and richest social media companies? Given their enormous size, ample resources and global reach, it is irresponsible of these platforms to fail to take sufficient action to address illegal and harmful content, to ensure the safety of their users, or even to comply with national laws (House of Commons Home Affairs Committee, 2017). As Stone (2021) argues, platforms have a responsibility to be mindful of the services they provide, because social media companies are in the business not only of providing content but also of shaping how users are led through it.
I will begin by describing the nature of social media platforms as a combination of software and commercial systems, and the characteristics of the new public sphere they construct. I will then argue that governments must make it mandatory for social media platforms to comply with a duty of care, and that this obligation starts with service design.
In the guise of grand community standards, platforms essentially adhere to a laissez-faire policy
Major social media platforms such as YouTube, Facebook and Twitter have formulated their own community standards policies. As venues where people’s voices are gathered, the platforms share an important common denominator: a commitment to expression, a pledge that every user has the right to speak freely. A typical example is what Facebook’s Community Standards say about voice: people are able to talk openly about any issue that concerns them, and such content is allowed as long as it is newsworthy or in the public interest, even if some may disagree with or resent the discussion (Facebook Community Standards, 2021).
These policies may sound grand, but in essence they are veiled traps of language, a cloak for laissez-faire practice. We often overlook the obvious fact that platforms are a combination of software and commercial systems designed to maximise attention rather than the quality of content (Woods & Perrin, 2021). They are therefore eager to attract more attention and traffic: large-scale discussion of hateful and extremist topics draws more users to post their views, and retweets and clicks increase accordingly. From this follows an uncomfortable truth: social media companies can profit from hate. Both Twitter and YouTube have consistently shown a reluctance to remove tweets and videos that cross the line, even though they act quickly to remove videos that infringe copyright.
Not even the dead escape social media hate speech: Jo Cox ‘deserved to die’

The case is closely linked to the UK’s EU referendum. According to BBC News reports, Thomas Mair first shot Jo Cox twice while shouting ‘Britain first’. After she fell to the ground, Mair kicked and stabbed her several times before firing a third shot. Shockingly, the brutal act was widely praised online: many people spoke out in support of Mair, calling him a ‘hero’ and a ‘patriot’.
Dr Imran Awan and Dr Irene Zempi (2016) analysed more than 53,000 tweets posted in the month following Jo Cox’s murder and the EU referendum. Their report found tens of thousands of people celebrating the MP’s death on Twitter after she was shot dead in the street, targeted because she supported Britain remaining in the EU and lobbied actively for that position. Hashtags strongly associated with the 53,000 tweets included #MakeBritainWhite, #DefendEurope, #StopImmigration, #BanIslam and #ExpelAllMuslims (“Research finds MP Jo Cox’s murder was followed by 50,000 tweets celebrating her death”, 2016).
The platforms’ espousal of free speech and their indulgence of hate speech have allowed a political echo chamber effect to grow on social media. Garimella et al.’s (2018) work on echo chambers corroborates this. An echo chamber is a metaphor for a situation in which people are exposed only to opinions that agree with their own. In the case of the hate speech directed at Jo Cox, the claim that her death was just punishment for supporting remaining in the EU and helping Syrian refugees was re-shared again and again across the internet, with the social networks themselves forming the chamber. It is therefore easy to see why social media companies should be required to accept a duty of care and an obligation to regulate behaviour: treating hate speech as a natural part of free speech is dangerous, and it has very real consequences for communities.
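To make this re-sharing dynamic concrete, the toy simulation below sketches how a network in which users mostly follow like-minded accounts, and only re-share posts they agree with, keeps a post circulating almost entirely within one camp. This is my own illustrative sketch, not drawn from Garimella et al. (2018) or from any platform’s actual systems; all names and numbers are invented.

```python
import random

# Toy illustration of an echo chamber. Users mostly follow like-minded
# accounts and only re-share posts they agree with, so a post circulates
# almost entirely within one camp. All parameters are invented.
random.seed(1)

N, FOLLOWS, HOMOPHILY = 200, 10, 0.9
camp = [1 if i < N // 2 else -1 for i in range(N)]   # two opposing camps


def pick_followees(user):
    """Choose accounts to follow, mostly from the user's own camp."""
    same = [u for u in range(N) if u != user and camp[u] == camp[user]]
    other = [u for u in range(N) if camp[u] != camp[user]]
    return [random.choice(same if random.random() < HOMOPHILY else other)
            for _ in range(FOLLOWS)]


follows = {u: pick_followees(u) for u in range(N)}
followers = {u: [v for v in range(N) if u in follows[v]] for u in range(N)}

# A single post authored in camp +1 spreads by re-sharing; a user
# re-shares it only if it matches their own camp.
post_camp = 1
sharers = {0}                    # user 0 belongs to camp +1
exposed = set()
for _ in range(5):               # a few propagation rounds
    newly_sharing = set()
    for s in sharers:
        for f in followers[s]:   # everyone following a sharer sees the post
            exposed.add(f)
            if camp[f] == post_camp:
                newly_sharing.add(f)
    sharers |= newly_sharing

agreeing = sum(camp[u] == post_camp for u in exposed)
print(f"{len(exposed)} users saw the post; {agreeing} of them already agreed with it")
```

Running the sketch shows the post reaching almost exclusively users who already share its stance: the network structure, combined with selective re-sharing, does the filtering by itself.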
Social media features can be used for propaganda and political manipulation, jeopardising democratic decision-making
The most remarkable feature of social media is that it has constructed a new public sphere. For many people around the world, Facebook and Twitter are their primary civic forums, with users spending an average of 2.5 hours a day on social media sites. Furthermore, Ofcom’s research found that over half of the UK population gets its news through social media rather than from established and authoritative media outlets. As Smith & Niker (2021) suggest, social media companies facilitate citizens’ right to democratic epistemic participation as providers of the digital public sphere. In short, users can express their voices and publicly declare their support for ideas on a relatively equal footing. This in itself is a good thing; it allows for diversity in public discourse.
Nevertheless, it also means that democracies are, both deliberately and inadvertently, entrusting a key function, democratic epistemic participation in the public sphere, to the large commercial companies that provide social media platforms (Smith & Niker, 2021). The monitoring mechanisms that should accompany this are barely embryonic, relying almost entirely on platform self-regulation. This has a corrosive effect on democratic politics: social media companies can use the privilege to enrich themselves without limit.
The internet connects large communities of users and other market players in ways that were previously impossible, creating the powerful feedback loops we know as network effects. Digital platforms profit from selling advertising on top of this mechanism and have so far generated trillions of dollars in wealth. For this reason, Twitter and Facebook have avoided censoring posts about conspiracy theories and fake news, allowing the proliferation of falsehoods related to elections, vaccines and other public health issues (Cusumano et al., 2021). Such content often serves digital political manipulation, seeking to sway election outcomes by influencing voters’ opinions.
Rigging digital content for political purposes – the hideous scandal of the US presidential election

The events at the US Capitol on 6 January 2021 showed the world how pernicious the effects of digital platforms on society can be, for they touched one of the most sensitive nerves of democratic politics: the presidential election. The deplorable episode began when supporters of Donald Trump, incited through social media, attempted to disrupt the certification of the Electoral College vote (Cusumano et al., 2021).
Zachary Cohen and Marshall Cohen (2022) set out the chain of events that led to this. In the weeks following the 2020 election, allies of then-President Donald Trump submitted to the US National Archives certification documents claiming he had won seven states that he had in fact lost. The watchdog group American Oversight obtained the fake certificates and accompanying emails, which were issued in mid-December 2020. On 6 January of the following year the effort culminated in the assault on the Capitol, and the congressional committee investigating the attack has characterised Trump’s conduct as an attempted coup, a fraudulent operation that sought to manipulate the will of the electorate. Taken together, the debacle at the US Capitol illustrates how social media platforms can be a double-edged sword, widely used to spread disinformation, run political advertising campaigns and even enable political manipulation (Reisach, 2021).
Mandatory “duty of care” compliance for social media companies to protect users – starting with service design
Why is it necessary to compel platforms to comply with a duty of care to their users? As discussed above, society and government have ceded democratic epistemic participation in the public sphere to social media platforms. Yet platforms exploit loopholes in legal concepts such as user consent clauses in an attempt to legitimise the manipulation of users (Woods & Perrin, 2021). In addition, the self-regulatory measures the industry has adopted to address online harm have not been applied consistently. As a result, many governments have recognised the seriousness of the problem, called a halt to self-regulation by online companies, and taken active steps to introduce online safety laws and establish regulatory bodies.
A positive example is the Online Harms White Paper, a joint proposal from the UK’s Department for Digital, Culture, Media and Sport and the Home Office (2019), described by the government as the first online safety laws of their kind. The proposal introduces a new independent regulator to ensure that social media companies fulfil their duty of care to their users; platforms that fail to do so would face hefty fines. This would force companies to be proactive and take reasonable steps to keep users safe and to address harmful and illegal activity on their services, for example by ensuring that the platform does not distribute videos of child abuse or statements associated with terrorist organisations. It could also push platforms to respond to user complaints more efficiently and to act quickly to resolve issues.
Beyond this, experience in many complex sectors suggests that it is more effective to give companies responsibility for the safe design of their services, together with process-oriented regulation to prevent harmful outcomes. Woods & Perrin (2021) argue that regulation of social media platforms should focus on the design of the service, the business model, the features the platform offers to users, and the resources available to deal with user complaints, because these directly shape the flow of information across the platform. A safety-by-design framework of this kind could help platforms build the duty of care into their applications as part of their algorithmic logic, so that the dissemination of misleading and damaging information is minimised at source.
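As a purely illustrative sketch of what “duty of care as part of the algorithmic logic” might mean in practice, the Python snippet below imagines a distribution step that checks a post against harm classifiers before it is amplified. It is not drawn from Woods & Perrin (2021) or from any platform’s real pipeline; the function names, harm categories and thresholds are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical harm categories a safety-by-design pipeline might screen for.
HARM_CATEGORIES = ("hate_speech", "terrorist_content", "election_disinfo")


@dataclass
class Post:
    author: str
    text: str
    # Illustrative scores in [0, 1]; a real platform would obtain these
    # from its own classifiers. Hard-coded here purely for the sketch.
    harm_scores: dict = field(default_factory=dict)


def distribution_decision(post: Post,
                          block_threshold: float = 0.9,
                          review_threshold: float = 0.6) -> str:
    """Decide how widely a post may be distributed.

    The duty-of-care check runs before amplification (recommendation,
    trending, etc.), so harmful content is limited at source rather than
    handled only after user reports.
    """
    worst = max((post.harm_scores.get(c, 0.0) for c in HARM_CATEGORIES),
                default=0.0)
    if worst >= block_threshold:
        return "block"             # do not distribute; escalate to moderators
    if worst >= review_threshold:
        return "limit_and_review"  # show to followers only, queue for human review
    return "amplify"               # eligible for recommendation and trending


if __name__ == "__main__":
    post = Post(author="user123", text="example post",
                harm_scores={"hate_speech": 0.72})
    print(distribution_decision(post))   # -> limit_and_review
```

The point of the sketch is the ordering: the safety check sits upstream of the recommendation and trending systems, which is where a duty of care enforced through service design differs from moderation that only reacts to complaints.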
Conclusion
Firstly, a social media platform is an amalgamation of software and business systems. It promises its users the right to free expression, yet profit by any means is the underlying logic of commercial companies, and so platforms allow the proliferation of hate, discrimination, abuse and extremism. These messages are in turn reinforced by the echo chamber effect of social media, with very real consequences for communities. Secondly, social media platforms constitute a new public sphere in which democratic epistemic participation is held in trust. This can easily be exploited by third parties for advertising, propaganda and political manipulation, jeopardising democratic decision-making. The conclusion that follows is that authorities must oblige platforms to comply with a duty of care in order to protect users. One of the most efficient and resource-effective ways to do this is to change the design of the service itself: a safety-by-design framework can reduce the spread of misinformation and harmful speech at source. Imposing a duty of care on platforms is still a long way off, but it is urgent.
References:
- Cohen, Z., & Cohen, M. (2022). Trump allies’ fake Electoral College certificates offer fresh insights about plot to overturn Biden’s victory. CNN. Retrieved 5 April 2022, from https://edition.cnn.com/2022/01/12/politics/trump-overturn-2020-election-fake-electoral-college/index.html
- Cusumano, M., Gawer, A., & Yoffie, D. (2021). Social Media Companies Should Self-Regulate. Now. Harvard Business Review. Retrieved 5 April 2022, from https://hbr.org/2021/01/social-media-companies-should-self-regulate-now
- Department for Digital, Culture, Media and Sport and Home Office. (2019). UK to introduce world first online safety laws. https://www.gov.uk/government/news/uk-to-introduce-world-first-online-safety-laws
- Garimella, K., Morales, G., Gionis, A., & Mathioudakis, M. (2018). Political Discourse on Social Media: Echo Chambers, Gatekeepers, and the Price of Bipartisanship. Retrieved 7 April 2022, from https://dl.acm.org/doi/fullHtml/10.1145/3178876.3186139
- House of Commons Home Affairs Committee. (2017). Hate Crime: Abuse, Hate, and Extremism Online. https://publications.parliament.uk/pa/cm201617/cmselect/cmhaff/609/60904.htm#_idTextAnchor005
- Facebook Community Standards | Transparency Centre. Transparency.fb.com. (2022). Retrieved 8 April 2022, from https://transparency.fb.com/en-gb/policies/community-standards/
- Reisach, U. (2021). The responsibility of social media in times of societal and political manipulation. European Journal Of Operational Research, 291(3), 906-917. https://doi.org/10.1016/j.ejor.2020.09.020
- Research finds MP Jo Cox’s murder was followed by 50,000 tweets celebrating her death. Birmingham City University. (2016). Retrieved 7 April 2022, from https://www.bcu.ac.uk/news-events/news/research-finds-mp-jo-coxs-murder-was-followed-by-50000-tweets-celebrating-her-death
- Smith, L., & Niker, F. (2021). What Social Media Facilitates, Social Media should Regulate: Duties in the New Public Sphere. The Political Quarterly, 92(4), 613-620. https://doi.org/10.1111/1467-923x.13011
- Stone, K. (2021). Should social media platforms have a duty to care? | Techblog. Techblog.nz. Retrieved 5 April 2022, from https://techblog.nz/2707-Should-social-media-platforms-have-a-duty-to-care
- Woods, L., & Perrin, W. (2021). Obliging Platforms to Accept a Duty of Care. In Martin Moore and Damian Tambini (Eds.), Regulating Big Tech: Policy Responses to Digital Dominance (pp. 93–109). Oxford University Press.