Introduction
With technological change and digital transformation, social media has gradually become a main news channel, playing an increasingly important role in citizens’ news consumption (Bechmann & Nielbo, 2018; Bruns, 2019). According to a Pew Research Center report, social media has become a primary news source in America (Seargeant & Tagg, 2019; Spohr, 2017), with 62% of American adults getting news from social media (Gottfried & Shearer, 2016). Because online news is cheap and fast to produce and disseminate, the amount of news on the Internet has grown exponentially (Flaxman, Goel & Rao, 2016). When browsing news online, users face a flood of information in a short time, yet the brain’s information-processing capacity is biologically limited and our ability to process news has not kept pace, so personalised recommendation systems have become necessary (Bozdag, 2013). At the same time, there is growing concern about the filter bubble as a by-product of personalisation, which erodes users’ autonomy and leads to opinion polarisation. This post critically examines the potential risk of the filter bubble through the case of Trump’s election in 2016.
Personalisation systems and the filter bubble
There are two primary operating modes of personalised systems: implicit personalisation and explicit personalisation (Zuiderveen Borgesius et al., 2016; Haim, Graefe & Brosius, 2017). Explicit personalisation, also known as self-selected personalisation, requires users to state their online preferences deliberately and proactively. When self-selecting, user behaviour shows a tendency towards selective exposure: users seek information that conforms to their existing beliefs and attitudes while avoiding news that challenges or opposes their views (Dahlgren, 2021). For example, a news consumer who holds a pro-Ukraine position might avoid stories sympathetic to Russia’s attack on Ukraine and would rather hear good news about Ukraine.
Implicit personalisation refers to pre-selected personalisation by the algorithm. Digital platforms automatically filter content and provide recommendations based on observations of an individual user’s online behaviour, such as browsing preferences, previous click choices, social networking and contextual information (location or computer brand) (Helberger, Karppinen & D’Acunto, 2016; Haim, Graefe & Brosius, 2017). After this pre-selection, the results ultimately presented to a specific user are tailored by the algorithms to that individual’s unique online history. Platforms claim that algorithmic personalisation aims to improve users’ online experience, overcome the overwhelming amount of available information, and provide content that better accommodates their needs (Haim, Graefe & Brosius, 2017; Dahlgren, 2021).
In short, the algorithm’s pre-selection examines users’ past click habits to predict their future choices (a minimal sketch of this logic follows below). This personalisation has its own strengths in terms of time-saving, simplicity and relevant recommendations. Students facing a looming deadline may prefer the most relevant peer-reviewed articles to tens of thousands of low-quality readings, and keen shoppers are unlikely to turn down the latest promotions from their favourite fashion brands.
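To make this pre-selection logic concrete, here is a minimal, hypothetical Python sketch of how a recommender might rank candidate news items against a user’s click history. The topic tags, function name and simple counting scheme are illustrative assumptions made for this post, not any real platform’s system.

```python
from collections import Counter


def rank_by_click_history(click_history, candidate_items, top_n=5):
    """Rank candidate news items by how well their topics match past clicks.

    click_history   -- list of topic tags from items the user clicked before
    candidate_items -- list of dicts such as {"title": str, "topics": [str, ...]}
    """
    topic_weights = Counter(click_history)  # how often each topic was clicked

    def score(item):
        # An item scores higher the more its topics overlap with past clicks;
        # items on entirely unfamiliar topics score zero.
        return sum(topic_weights[topic] for topic in item["topics"])

    return sorted(candidate_items, key=score, reverse=True)[:top_n]


if __name__ == "__main__":
    history = ["politics", "politics", "economy", "music"]
    candidates = [
        {"title": "Election poll update", "topics": ["politics"]},
        {"title": "Central bank holds interest rates", "topics": ["economy", "politics"]},
        {"title": "New climate report released", "topics": ["environment"]},
    ]
    for story in rank_by_click_history(history, candidates, top_n=2):
        print(story["title"])
```

In this toy example, the two stories closest to the user’s past clicks rise to the top of the feed, while the story on an unfamiliar topic scores zero and is dropped, which is precisely the narrowing effect at the centre of the filter bubble debate.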
For music lovers, it is ideal to listen to music that suits their own tastes. However, the algorithm keeps recommending similar music, depriving them of the opportunity to encounter other genres and isolating them in their own musical world. As a result of personalised recommendations, every user of a digital device or platform becomes an individual enclosed within their own phone or computer. We do not know what the people next to us are listening to through their headphones, and we cannot tell how the news they receive differs from what we read on our own screens. Everyone is immersed in the filter bubbles the algorithm creates for them.

(Source: immediatefuture.co.uk)
Eli Pariser coined the term “filter bubble”. In his TED talk, Pariser (2011) argues that “as web companies strive to tailor their services (including news and search results) to our personal tastes, there’s a dangerous unintended consequence: we get trapped in a filter bubble and do not get exposed to information that could challenge or broaden our worldview”. He insists that the filter bubble will ultimately prove detrimental to us and to democracy.

(Source: azquotes)
The filter bubble is bad for users
The filter bubble has adverse effects on users. Firstly, users do not choose to enter the filter bubble themselves (Pariser, 2011; Dahlgren, 2021); it is imposed on them by the platform. The algorithm makes choices for users without asking permission, placing them in a passive position and depriving them of the autonomy to select information on their own initiative (Kaluža, 2021). In addition, the filter bubble leads to information determinism, in which previous browsing choices determine what information will be displayed in the future (Dahlgren, 2021). However, past clicks cannot be equated with future preferences, as accidental or mistaken clicks cannot be ruled out.
Moreover, the filter bubble is invisible and the way the algorithm works is opaque (Pariser, 2011). Few users realise the filter bubble’s existence, let alone understand how the information is filtered (Dahlgren, 2021). According to Kaluža (2021), there is asymmetrical surveillance between the platform and users: algorithms secretly track and record users’ online behaviour, whereas users hardly know how the algorithm works. As Pariser (2011) put it: “You do not decide what gets in, and you do not see what gets edited out”.
Thirdly, in contemporary media society, search engines, social media and other digital platforms have replaced traditional gatekeepers such as editors and journalists to become the new gatekeepers of online information (Zuiderveen Borgesius et al., 2016; Krafft, Gamer & Zweig, 2019). To hold users’ attention, however, platforms tend to avoid offering information that challenges them. Algorithms weigh relevance over diversity when recommending content, leading to an imbalanced news diet and limiting users’ information diversity (Kitchens, Johnson & Gray, 2020). Platforms keep recommending like-minded people and surfacing ‘monothematic’, repetitive content that confirms users’ existing interests and beliefs (Helberger, Karppinen & D’Acunto, 2016). Users are therefore trapped in a self-reinforcing spiral, as illustrated in the sketch below, in which they encounter fewer heterogeneous groups, become less informed about alternative viewpoints, and grow narrow-minded (Dahlgren, 2021). Compared with traditional gatekeepers such as editors and journalists, algorithms lack human deliberation and professional ethics, and in the absence of a content review mechanism fake news spreads easily (Kaluža, 2021).
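The self-reinforcing spiral described above can also be illustrated with a toy simulation of the feedback loop between recommendation and clicking. The figures below (a 90% ‘relevance bias’, four topics, fifty steps) are arbitrary assumptions chosen purely for illustration, not estimates of any real system.

```python
from collections import Counter
import random


def simulate_feedback_loop(topics, steps=50, relevance_bias=0.9, seed=42):
    """Toy model of a relevance-first recommender feeding on its own output."""
    rng = random.Random(seed)
    clicks = {topic: 1 for topic in topics}  # start with uniform interest
    shown = []
    for _ in range(steps):
        if rng.random() < relevance_bias:
            # Weigh relevance over diversity: repeat the most-clicked topic.
            recommendation = max(clicks, key=clicks.get)
        else:
            # Occasionally show something outside the user's profile.
            recommendation = rng.choice(topics)
        clicks[recommendation] += 1  # the user clicks what they are shown
        shown.append(recommendation)
    return shown


if __name__ == "__main__":
    feed = simulate_feedback_loop(["politics", "economy", "sport", "culture"])
    print(Counter(feed))  # one topic typically ends up dominating the feed
```

Because every recommendation of the dominant topic produces another click that makes it still more dominant, one topic quickly crowds out the rest of the simulated feed.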
Another important point we cannot ignore is that the logic behind the filter bubble is capitalist. Although platforms claim that personalisation provides better services to users, it actually hides the goal of maximising economic benefit under a guise of neutrality (Kaluža, 2021). Digital companies are businesses, and personalisation is based on a bargain (Pariser, 2011); they cannot be assumed to be neutral or to serve the public interest. Algorithms are influenced not only by the people who design and operate them (Bozdag, 2013); the delivery of personalised content is also driven by revenue from targeted advertising placed by third parties (Brenders, 2017; Chitra & Musco, 2020; Vilela, Pereira, Dias, Stanley & da Silva, 2021). In this case, users do not see what they want to see; they see what the platform wants them to see for the sake of monetisation.

(Source: sticky.digital)
The filter bubble is bad for democracy
Filter bubbles also have societal consequences. Liberal democracy upholds the values of freedom of choice and self-determination (Bozdag & van den Hoven, 2015). A well-functioning democracy depends on voters who encounter a sufficient range of political views to engage actively in public discussion and who have the autonomy to make individual choices (Flaxman, Goel & Rao, 2016; Brenders, 2017). The filter bubble threatens liberal democracy. It is imposed on users without their permission; the algorithm becomes the judge of users’ interests and makes decisions for them (Bozdag & van den Hoven, 2015), which diminishes users’ control and violates their autonomy to freely choose what information they want to see (Helberger, Karppinen & D’Acunto, 2016). What is more, the filter bubble only presents content that satisfies the user’s own interests, excluding opposing points of view. This limited exposure leaves voters ill-informed about the positions of opposing parties and impedes the exchange of conflicting viewpoints necessary for functioning civic discourse, preventing them from making wise political decisions (Brenders, 2017; Rinehart, 2017).
The algorithm constantly and inadvertently amplifies ideological segregation (Bozdag, 2013; Flaxman, Goel & Rao, 2016), and more and more users have been isolated in their personalised filter bubbles. Some fear that this bubble will cause opinion polarisation or cyberbalkanisation. Cyberbalkanisation refers to “the idea of segregation of the Internet into small political groups with similar perspectives to the degree that they show a narrow-minded approach to those with contradictory views” (Bozdag & van den Hoven, 2015). The biggest concern is that the filter bubble will lead to unexpected political change (Brenders, 2017).
Filter bubble and Trump’s victory

(Source: Baer)
Are you shocked that Donald Trump won the 2016 presidential election? Well, you are not the only one who finds it hard to believe. Many people believe that social media and the filter bubble helped Trump’s unexpected win. By constantly reinforcing users’ preferences, filter bubbles repeatedly display information from the same political viewpoint, which not only distorts and polarises users’ political views but also affects their electoral decisions in real life (Adee, 2016). Groshek and Koc-Michalska (2017) argue that Trump won by “cultivating ideological filter bubbles that lacked cross-cutting information”. During the campaign, Trump hired Cambridge Analytica to build psychological, demographic and geographical profiles of voters from data and then ran a large-scale campaign on social media platforms to deliver targeted political information to users (Napoli, 2018). The filter bubble not only prevented Trump’s supporters from encountering the different views of opposing candidates but also consolidated their pre-existing worldviews.
In addition, a survey by the Synergia Foundation (2020) suggests that the filter bubble had a marked effect on swing-state and undecided voters in Trump’s favour. Trump posted provocative information, and even fake news attacking opponents, on social media. The filter bubble reached swing voters, impairing their ability to make informed choices and helping him win the support of most undecided or wavering voters in Michigan, Pennsylvania and Wisconsin. Given platforms’ own biases and their motivation to increase revenue through clicks, algorithmic personalisation constitutes a major contributing factor to political polarisation and Trump’s victory (Marozzo & Bessi, 2017). According to Bryant’s (2020) research, YouTube has a strong right-leaning political tendency, exposing its users to a large number of alt-right videos. It gives the audience the illusion that what they see online is what happens in reality, so they ignore the existence of other political views, which leads to partisan polarisation.
In response to these concerns, scholars and computer scientists have developed tools to counter filter bubbles from different perspectives (Bozdag & van den Hoven, 2015). Bozdag (2013) emphasised the design of morally sound personalisation technology to reduce technical biases caused by third-party manipulation, personal judgement and organisational factors. Helberger, Karppinen and D’Acunto (2016) proposed a diversity-sensitive design approach.
Different voices on the filter bubble
Some scholars dispute these concerns, arguing that the filter bubble’s harm is overestimated and that its impact in practice is small. After examining a dataset of 1,000 Facebook users, Bechmann and Nielbo (2018) find that less than 10% of link sources fall within a filter bubble; in this respect, the filter bubble’s risk may be overestimated. Similarly, Kitchens, Johnson and Gray (2020) suggest that there is no evidence that the use of social media limits the sources and diversity of news consumption. Flaxman, Goel and Rao (2016) note that, compared with entertainment use, the proportion of users who turn to social media for news is small and click-through rates are relatively low, so social media has changed people’s news consumption less than is often assumed. The filter bubble’s overall impact on opinion segregation is therefore relatively modest.
Haim, Graefe and Brosius (2017) found little support for the filter bubble hypothesis beyond limited effects of implicit personalisation on content diversity, suggesting that concern about filter bubbles caused by pre-selected personalisation of online news may be exaggerated. Seargeant and Tagg’s (2019) study affirmed that the algorithm is responsible for generating filter bubbles, but that this is only one side of the story: users themselves also create the conditions for their own opinion ghettos on social media. Other studies show that although the algorithm produces a filter bubble, users’ individual choices have a more significant influence on opinion polarisation, which is rooted in users’ psychology and habitual adaptation (Spohr, 2017; Kaluža, 2021). Compared with algorithmically generated filter bubbles, individual choice plays a more substantial role in avoiding conflicting content and reducing the visibility of alternative viewpoints.
Conclusion
The idea that algorithmic pre-selected personalisation produces filter bubbles, causes opinion polarisation and even harms liberal democracy has compelling arguments on both sides of the debate. The purpose of this post is not to overstate the threat of filter bubbles and trigger panic among internet users, nor to deny their existence or make excuses for them. Based on the case study of Trump’s election and the analysis above, I argue that the potential risk of the filter bubble and opinion polarisation is real. Although there is a lack of empirical evidence proving that algorithmic personalisation is the primary cause of opinion polarisation, and personalisation technology is still in its infancy (Kaluža, 2021), concern about the filter bubble remains reasonable and necessary. With the development of technology and the growing popularity of online news consumption, users need to become aware of the filter bubble, stay alert to content tailored by algorithms, and try to burst the self-isolating bubble from the inside: actively expand their news sources, maximise their information diversity, and encounter opposing views with an open mind.

(Source: medium)
References
Adee, S. (2016). Burst the filter bubble. New Scientist, 232(3101), 24–25. https://doi.org/10.1016/S0262-4079(16)32182-0
Baer, D. (2016). The ‘Filter Bubble’ Explains Why Trump Won and You Didn’t See It Coming. Retrieved 5 April 2022, from https://www.thecut.com/2016/11/how-facebook-and-the-filter-bubble-pushed-trump-to-victory.html
Bakshy, E., Messing, S., & Adamic, L. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130-1132. doi: 10.1126/science.aaa1160
Bechmann, A., & Nielbo, K. (2018). Are We Exposed to the Same “News” in the News Feed?. Digital Journalism, 6(8), 990-1002. doi: 10.1080/21670811.2018.1510741
Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology, 15(3), 209–227. https://doi.org/10.1007/s10676-013-9321-6
Bozdag, E., & van den Hoven, J. (2015). Breaking the filter bubble: democracy and design. Ethics And Information Technology, 17(4), 249-265. doi: 10.1007/s10676-015-9380-y
Brenders, M. (2017). Filter Bubbles vs. Democracy in the Age of Social Media. Retrieved from https://medium.com/filter-bubbles-vs-democracy-in-the-age-of-social/filter-bubbles-vs-democracy-5b0e4fae6837
Bruns, A. (2019). Filter bubble. Internet Policy Review, 8(4). doi: 10.14763/2019.4.1426
Bryant, L. (2020). The YouTube Algorithm and the Alt-Right Filter Bubble. Open Information Science, 4(1), 85-90. doi: 10.1515/opis-2020-0007
Chitra, U., & Musco, C. (2020). Analyzing the Impact of Filter Bubbles on Social Network Polarization. Proceedings of the 13th International Conference on Web Search and Data Mining, 115–123. ACM. https://doi.org/10.1145/3336191.3371825
Dahlgren, P. (2021). A critical review of filter bubbles and a comparison with selective exposure. Nordicom Review, 42(1), 15-33. doi: 10.2478/nor-2021-0002
Danger of living in a social media filter bubble | Sticky. (2022). Retrieved 5 April 2022, from https://www.sticky.digital/danger-of-living-in-a-filter-bubble/
Eli Pariser Quote. (2022). Retrieved 5 April 2022, from https://www.azquotes.com/quote/742450
Faust, M. (2019). Does the democratic West ‘learn’ from the authoritarian East? Juxtaposing German and Chinese Internet censorship and filter bubbles. East Asian Journal Of Popular Culture, 5(1), 55-78. doi: 10.1386/eapc.5.1.55_1
Flaxman, S., Goel, S., & Rao, J. (2016). Filter Bubbles, Echo Chambers, and Online News Consumption. Public Opinion Quarterly, 80(S1), 298-320. doi: 10.1093/poq/nfw006
Geschke, D., Lorenz, J., & Holtz, P. (2019). The triple‐filter bubble: Using agent‐based modelling to test a meta‐theoretical framework for the emergence of filter bubbles and echo chambers. British Journal of Social Psychology, 58(1), 129–149. https://doi.org/10.1111/bjso.12286
Gottfried, J., & Shearer, E. (2016). News use across social media platforms 2016. Retrieved 5 April 2022, from https://www.pewresearch.org/journalism/2016/05/26/news-use-across-social-media-platforms-2016/
Groshek, J., & Koc-Michalska, K. (2017). Helping populism win? Social media use, filter bubbles, and support for populist presidential candidates in the 2016 US election campaign. Information, Communication & Society, 20(9), 1389-1407. doi: 10.1080/1369118x.2017.1329334
Haim, M., Graefe, A., & Brosius, H. (2017). Burst of the Filter Bubble?. Digital Journalism, 6(3), 330-343. doi: 10.1080/21670811.2017.1338145
Helberger, N., Karppinen, K., & D’Acunto, L. (2016). Exposure diversity as a design principle for recommender systems. Information, Communication & Society, 21(2), 191-207. doi: 10.1080/1369118x.2016.1271900
Kaluža, J. (2021). Habitual generation of filter bubbles: Why is algorithmic personalisation problematic for the democratic public sphere? Javnost - The Public, 1–17. https://doi.org/10.1080/13183222.2021.2003052
Boyle, K. (2019). Fake news and filter bubbles: Rethinking counterspeech in the age of social media. Chicago Policy Review (Online).
Kiszl, P., & Fodor, J. (2018). The “Collage Effect” – Against Filter Bubbles: Interdisciplinary Approaches to Combating the Pitfalls of Information Technology. The Journal Of Academic Librarianship, 44(6), 753-761. doi: 10.1016/j.acalib.2018.09.020
Kitchens, B., Johnson, S., & Gray, P. (2020). Understanding Echo Chambers and Filter Bubbles: The Impact of Social Media on Diversification and Partisan Shifts in News Consumption. MIS Quarterly, 44(4), 1619-1649. doi: 10.25300/misq/2020/16371
Krafft, T., Gamer, M., & Zweig, K. (2019). What did you see? A study to measure personalization in Google’s search engine. EPJ Data Science, 8(1). doi: 10.1140/epjds/s13688-019-0217-5
Makhortykh, M., & Wijermars, M. (2021). Can Filter Bubbles Protect Information Freedom? Discussions of Algorithmic News Recommenders in Eastern Europe. Digital Journalism, 1-25. doi: 10.1080/21670811.2021.1970601
Marozzo, F., & Bessi, A. (2017). Analyzing polarization of social media users and news sites during political campaigns. Social Network Analysis and Mining, 8(1), 1–13. https://doi.org/10.1007/s13278-017-0479-5
Napoli, P. M. (2018). What if more speech is no longer the solution? First Amendment theory meets fake news and the filter bubble. Federal Communications Law Journal, 70(1), 55.
Pariser, E. (2011). Beware online “filter bubbles.” Retrieved 5 April 2022, from https://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles?language=en
Rinehart, W. (2017). The Election of 2016 and the Filter Bubble Thesis in 2017. Retrieved 5 April 2022, from https://medium.com/@willrinehart/the-election-of-2016-and-the-filter-bubble-thesis-in-2017-51cd7520ed7
Seargeant, P., & Tagg, C. (2019). Social media and the future of open debate: A user-oriented approach to Facebook’s filter bubble conundrum. Discourse, Context & Media, 27, 41-48. doi: 10.1016/j.dcm.2018.03.005
Sindermann, C., Elhai, J., Moshagen, M., & Montag, C. (2020). Age, gender, personality, ideological attitudes and individual differences in a person’s news spectrum: how many and who might be prone to “filter bubbles” and “echo chambers” online?. Heliyon, 6(1), e03214. doi: 10.1016/j.heliyon.2020.e03214
Spohr, D. (2017). Fake news and ideological polarization. Business Information Review, 34(3), 150-160. doi: 10.1177/0266382117722446
Synergia Foundation. (2020). Floating in between: The swing voter’s filter bubble. Retrieved 5 April 2022, from https://www.synergiafoundation.org/insights/analyses-assessments/floating-between-swing-voter-s-filter-bubble
Vilela, A., Pereira, L., Dias, L., Stanley, H., & da Silva, L. (2021). Majority-vote model with limited visibility: An investigation into filter bubbles. Physica A: Statistical Mechanics And Its Applications, 563, 125450. doi: 10.1016/j.physa.2020.125450
What-should-we-think-of-filter-bubbles-on-social-media. (2022). Retrieved 5 April 2022, from https://immediatefuture.co.uk/blog/what-should-we-think-of-filter-bubbles-on-social-media/
Zakaria, T., Busro, B., & Furqon, S. (2018). Filter bubble effect and religiosity: filter bubble effect implication in the formation of subjects and views of religiosity. IOP Conference Series: Materials Science And Engineering, 434, 012280. doi: 10.1088/1757-899x/434/1/012280
Zuiderveen Borgesius, F., Trilling, D., Möller, J., Bodó, B., de Vreese, C., & Helberger, N. (2016). Should we worry about filter bubbles?. Internet Policy Review, 5(1). doi: 10.14763/2016.1.401