Filter Bubbles Lead to Polarization of Opinions

Introduction

Contemporary information technologies simplify people’s intellectual development by providing enormous volumes of data through various digital channels. It is easy to open Google and search for an answer to almost any question. Internet users have 24/7 access to a constant stream of news and a diversity of ideas on their social media. As a result, people believe they are capable of analyzing information and forming their own opinions. However, what if the facts appearing in a person’s news feed are not random, but curated by algorithmic filtering? No one likes to accept that somebody shapes their opinion on purpose. In reality, social media connect people with the ideas they are most likely to agree with (Chitra & Musco, 2020). The design of social media allows people to express their identities creatively (Carlson & Frazer, 2018). Hence, users publicly broadcast information about their lives, posting their thoughts and photos and reposting and liking other people’s posts. Algorithms collect this information to provide content that aligns with the user’s interests, thereby creating filter bubbles. Filter bubbles are dangerous for society and the nation because the resulting opinion polarization can be exploited by interested parties.

Social Media and News

Increasing digitalization strengthens the relationship between news and social media. There are many news sources online, including newspaper and magazine websites. However, people spend a limited amount of time deliberately searching for news on specific websites. Empirical research shows that news media account for only 3-6% of people’s screen time (Arguedas, Robertson, Fletcher, & Nielsen, 2022). A UK survey likewise found that half of the respondents had visited news websites, but only once during the preceding week; the rest relied on their social media news feeds. There are objective reasons for this tendency: first, social media platforms supply an abundance of fresh and relevant information, and second, it is much easier to read news in a social feed than to make the effort to find a dedicated news website.

There are also telling statistics on social media behavior with respect to news access. On average, people spend most of the screen time on their gadgets on social media platforms. The most popular digital platforms belong to the Google and Facebook families, which include YouTube, Instagram, and WhatsApp (Arguedas, Robertson, Fletcher, & Nielsen, 2022). Each of these platforms can provide fresh updates on what is going on in the world. However, only 22% of users deliberately seek news across many different digital platforms; 55% briefly read news from a few different sources, while the remaining 23% do not follow the news every day (Arguedas, Robertson, Fletcher, & Nielsen, 2022). Hence, most people rely on the news surfaced by social media algorithms and do not make the effort to analyze a broad spectrum of information from different sources.

Algorithms

Contemporary digital sources contain more information than people can absorb; consequently, information reaches users only after selection, filtering, and sorting. When a person enters a query on Google, they receive hundreds, thousands, or even millions of results, and the likelihood that they go further than the first page is critically low (Andrejevic, 2019). Algorithms are the instruments that connect users with information they are likely to find interesting (Spohr, 2017). These instruments analyze the user’s previous online behavior and make some content easier to access than other items, so the user receives a customized stream of content. On the one hand, internet algorithms are problem-solving instruments, since they process huge volumes of information and customize it according to a person’s social media behavior (Just & Latzer, 2017). On the other hand, they analyze and process users’ online experience and private data. The most dangerous consequence of algorithmic content processing is the formation of filter bubbles.
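
As an illustration of the kind of selection such instruments perform, the following is a minimal, hypothetical sketch in Python. It assumes a user interest profile built from past clicks and scores new items by topic overlap; the data and function names are invented for illustration, and real platform algorithms are proprietary and far more complex.

```python
from collections import Counter

def build_interest_profile(click_history):
    """Count how often each topic appears in the items a user has engaged with."""
    profile = Counter()
    for item in click_history:
        profile.update(item["topics"])
    return profile

def rank_feed(candidate_items, profile, top_k=5):
    """Order candidate items by overlap with the user's interest profile."""
    def score(item):
        return sum(profile.get(topic, 0) for topic in item["topics"])
    return sorted(candidate_items, key=score, reverse=True)[:top_k]

# Hypothetical usage: a user who mostly clicked on one political topic.
history = [{"topics": ["party_a"]}] * 8 + [{"topics": ["science"]}] * 2
profile = build_interest_profile(history)
candidates = [
    {"id": 1, "topics": ["party_a"]},
    {"id": 2, "topics": ["party_b"]},
    {"id": 3, "topics": ["science"]},
]
print([item["id"] for item in rank_feed(candidates, profile)])
# Prints [1, 3, 2]: content matching the dominant interest ranks first,
# while the opposing viewpoint ("party_b") sinks to the bottom of the feed.
```

Even this crude rule reproduces the behavior described above: whatever a user already engages with becomes easier to see, and everything else becomes harder to find.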

Filter Bubbles

Filter bubbles, or echo chambers, are virtual informational spaces for people who share similar beliefs and views. The term filter bubble was first introduced by Eli Pariser and denotes a “personal, unique universe of information” for each user (Gould, 2019). Filter bubbles may appear on different digital platforms, including messaging apps, news aggregators, social media, and search engines (Arguedas, Robertson, Fletcher, & Nielsen, 2022). Platforms such as Facebook, Twitter, Google, and YouTube rely on proprietary, continuously changing algorithms that create filter bubbles. Users provide some of this data consciously, but they may not even know that other data are being collected. These algorithms are so complicated that the average user cannot recognize that they are consuming filtered information.

Figure 1. Two ways of receiving supporting information (Dahlgren, 2021)

Even though a person’s behavior on social media contributes to the creation of filter bubbles, it is not entirely the user’s fault. There are two different causes of filter bubbles (Gould, 2019): the first is driven by the user, while the second is beyond the user’s control (Figure 1). The first type of filter bubble appears because people tend to surround themselves with friends and acquaintances who share similar interests and ideas. Psychologists note that people tend to avoid psychological discomfort and prefer not to read information they dislike (Gould, 2019). Moreover, social media make it easy to unfriend people whose ideas and posts irritate a user. People tend to accept supporting information and avoid conflicting opinions (Dahlgren, 2021). In addition, processing and analyzing information demands effort, so people tend to conserve the energy that engaging with new knowledge would require. As a result, people create filter bubbles for themselves. However, there are also situations in which the formation of a filter bubble is beyond the user’s control. In this case, social media algorithms analyze, sort, and filter the information the user is exposed to. The algorithm responds to what the user does on social media, but the person does not decide what information enters the bubble and what is filtered out. This type is called a technological filter bubble.

The motivation of social media platforms to curate users’ content is apparent: they aim to increase user engagement and, as a result, ad revenue. Users are more likely to remain loyal to a platform when the information it provides aligns with their ideas, interests, and views (Chitra & Musco, 2020), and people feel discomfort when they see conflicting views in their news feeds (Gould, 2019). The algorithms may offer direct recommendations, such as the friend and follow suggestions common on Facebook, Instagram, and Twitter. Recommendations may also be indirect, as when the news feed is displayed not chronologically but filtered and sorted individually for each user. In general, the task of the network administrator is to minimize users’ disagreement with the content they are exposed to.
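
For readers who want a more formal statement, this objective can be sketched mathematically. The notation below is a simplified paraphrase of the kind of network model analyzed by Chitra and Musco (2020), not a verbatim reproduction of their formulation: let $z_i$ denote the opinion expressed by user $i$ and $w_{ij}$ the weight of the connection between users $i$ and $j$. The administrator then tunes the weights to reduce total disagreement across connections,

$$\min_{w} \; D(w, z) = \sum_{(i,j) \in E} w_{ij} (z_i - z_j)^2,$$

while polarization is measured as the spread of expressed opinions around their mean, $P(z) = \sum_i (z_i - \bar{z})^2$. Reducing disagreement by reweighting connections is exactly what pushes each user toward like-minded neighbors and, as discussed below, drives polarization up.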

The creation of filter bubbles is widely criticized nowadays. Tim Cook, the CEO of Apple, has highlighted the danger that filter bubbles pose to society and even the nation. Indeed, this tendency fragments society and deepens disagreements over a range of issues (Chitra & Musco, 2020). Algorithmic targeting is often associated with polarizing, controversial, and fake information. Conspiracy theories are likewise distributed by social media algorithms into the filter bubbles most likely to accept them (Andrejevic, 2019). Filter bubbles are relatively homogeneous, which fosters radically divergent opinions and what psychologists call “the psychology of the tribe” (Spohr, 2017). Opinion polarization is a further consequence: it erodes the diversity of ideas and opinions as algorithms amplify only two polar views.

Opinion Polarization

The paradox of social media and the world wide web is that they contribute to opinion polarization even though they provide nearly unlimited access to a diversity of ideas. Opinion polarization rests on selective exposure to ideas that align with a person’s own. Opinions may polarize around many topics, but the most common are politics, science, and healthcare. The key threat of opinion polarization is that people cannot reach consensus because their radical positions are continually strengthened by their filter bubbles. Politicians and other interested parties commonly exploit this situation. The 2016 US presidential election and the Brexit referendum are good examples of how selective exposure contributes to the polarization of ideas and influences political debate (Chitra & Musco, 2020). Such filtering and sorting of content also enables the appearance and spread of fake news, which was actively distributed during both campaigns. UK studies estimate that six to eight percent of the public belong to political news filter bubbles promoting the ideas of a particular political force (Arguedas, Robertson, Fletcher, & Nielsen, 2022). Hence, the threat posed by filter bubbles extends beyond the social media environment.

The process of opinion polarization is illustrated in Figure 2. The model proposed by Chitra and Musco (2020) includes a network administrator that adjusts the strength of the connections between users. The blue and red dots represent two opposing opinions on the same issue, and the weight of each edge represents the strength of the social connection between two users. Suppose the network administrator has the power to reinforce these connections. The administrator curates content so as to connect people with ideas close to their own opinions. Filtering content based on the history of a person’s activity on social media reinforces that person’s existing opinion. The algorithm works in such a way that each side’s view of the same situation is strengthened by the content it consumes. As a result, filter bubbles appear on social media and the polarization of opinions grows.

Figure 2. Opinion polarization model (Chitra & Musco, 2020)
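
To make this dynamic concrete, the toy simulation below is written in the spirit of the opinion-dynamics framework that Chitra and Musco (2020) build on. The four-user graph, edge weights, and innate opinions are hypothetical values chosen for illustration, not figures from the paper or from Figure 2; the point is only that strengthening within-group ties while weakening cross-group ties, as a filter bubble does, increases the spread of expressed opinions.

```python
# A toy simulation in the spirit of the Friedkin-Johnsen-style opinion dynamics
# analyzed by Chitra and Musco (2020). All numbers below are hypothetical.

def equilibrium_opinions(innate, weights, iterations=200):
    """Repeatedly average each user's innate opinion with neighbors' expressed opinions."""
    z = list(innate)
    n = len(innate)
    for _ in range(iterations):
        z = [
            (innate[i] + sum(weights[i][j] * z[j] for j in range(n)))
            / (1 + sum(weights[i][j] for j in range(n)))
            for i in range(n)
        ]
    return z

def polarization(z):
    """Spread of expressed opinions around their mean."""
    mean = sum(z) / len(z)
    return sum((zi - mean) ** 2 for zi in z)

# Two small communities with opposite innate opinions (-1 vs +1).
innate = [-1.0, -1.0, 1.0, 1.0]
balanced = [  # everyone connected to everyone with equal weight
    [0, 1, 1, 1],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [1, 1, 1, 0],
]
bubbled = [  # filter bubble: within-group ties strengthened, cross-group ties weakened
    [0, 5, 0.1, 0.1],
    [5, 0, 0.1, 0.1],
    [0.1, 0.1, 0, 5],
    [0.1, 0.1, 5, 0],
]

for name, w in [("balanced", balanced), ("bubbled", bubbled)]:
    print(name, round(polarization(equilibrium_opinions(innate, w)), 2))
# Prints roughly 0.16 for the balanced network and 2.04 for the bubbled one:
# filtering out cross-group influence leaves the two sides much farther apart.
```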

Notably, patterns of opinion polarization differ across countries and social media platforms. Statistics show that US society is highly polarized, whereas news audience polarization in European countries is much lower (Arguedas, Robertson, Fletcher, & Nielsen, 2022). Polarization also depends on the algorithms of a given platform. For example, a study of the social-news platform Reddit showed that the formation of filter bubbles and the polarization of opinions fueled anti-feminist activism (Massanari, 2017). This tendency is called toxic technoculture, and it rests on algorithms, media policies, and platform design. Facebook polarizes political opinion, while Instagram provokes debates over social issues. Notably, each social media platform has its own content policies, according to which commercial content moderators filter material (Roberts, 2019). Regardless of the platform or country, the polarization of opinion does not foster social development and harmony.

However, social media content administration can also be turned to positive ends. Indeed, just as opinions can be polarized, they can also be depolarized. The first step toward mitigating the negative effects of filter bubbles is society’s understanding of the problem and the civic will to resolve it (Andrejevic, 2019). Content-curation algorithms can also reduce polarization if they are properly configured. Moreover, reducing opinion polarization can align with the business interests of social media platforms in terms of user engagement and ad revenue (Chitra & Musco, 2020). A diversity of opinions generates a broad spectrum of views, which is the foundation of a healthy democratic society.

Conclusion

Filter bubbles lead to opinion polarization because internet algorithms feed users content that only strengthens their existing views, reducing their exposure to alternative ideas. Even though contemporary digital media ensure easy access to large amounts of information, filter bubbles still tend to form, and within them people gravitate toward opinions and ideas similar to their own. As a result, user engagement increases and social media platforms earn higher ad revenue. However, while the formation of filter bubbles benefits the business interests of social media platforms, it is extremely harmful to society and nations. Filter bubbles lead to the polarization of opinions, and people cannot find common ground for their views. Some politicians exploit this tendency for their own interests, as already happens in different countries. The experience of the UK and the USA shows that the algorithms that shape the filter bubbles of people with opposing views can be used to distribute fake news, provocations, and disinformation, allowing interested parties to achieve their political or social goals. Still, despite the negative consequences of opinion polarization, experts remain optimistic that algorithms can depolarize opinions by providing more alternatives without harming social media business interests.


References

1. Andrejevic, M. (2019). Automated culture. In Automated Media (pp. 44-72). London: Routledge.

2. Arguedas, A.R., Robertson, C.T., Fletcher, R., & Nielsen, R. K. (2022, January). Echo chambers, filter bubbles, and polarisation: a literature review. Reuters Institute. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2022-01/Echo_Chambers_Filter_Bubbles_and_Polarisation_A_Literature_Review.pdf

3. Carlson, B. & Frazer, R. (2018). Social media mob: Being indigenous online. Sydney: Macquarie University. https://researchers.mq.edu.au/en/publications/social-media-mob-being-indigenous-online

4. Chitra, U. & Musco, C. (2020). Analyzing the impact of filter bubbles on social network polarization. The Thirteenth ACM International Conference on Web Search and Data Mining. https://doi.org/10.1145/3336191.3371825

5. Dahlgren, P. M. (2021). A critical review of filter bubbles and a comparison with selective exposure. Nordicom Review, 42(1), 15–33. https://doi.org/10.2478/nor-2021-0002

6. Just, N. & Latzer, M. (2017). Governance by algorithms: Reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238–258. https://doi.org/10.1177/0163443716643157

7. Gould, W.R. (2019, October 21). Are you in a social media bubble? Here’s how to tell. NBC News. https://www.nbcnews.com/better/lifestyle/problem-social-media-reinforcement-bubbles-what-you-can-do-about-ncna1063896

8. Massanari, A. (2017). #Gamergate and the Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807

9. Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.

10. Spohr, D. (2017). Fake news and ideological polarization: Filter bubbles and selective exposure on social media. Business Information Review, 34(3), 150–160. https://doi.org/10.1177/0266382117722446