Algorithmic Bias and Discrimination Are Affecting Our Lives

Introduction

Algorithms are entering our lives on a massive scale. Companies such as TikTok and Amazon use them to analyze their customers, even those who merely browse an app rather than actively use it. We are often surprised to find that, after a little use, these apps seem to know our tastes uncannily well. Basketball fans keep seeing videos about their favorite stars on TikTok even when they never search for them; a pet lover who buys a litter box soon finds cat litter recommended on the home page. For ordinary users, the influence of intelligent algorithms is realized mainly through personalized recommendation. In one sense this is a good thing: a user's latent needs and hobbies are surfaced, as far as possible, on a small screen. In the same process, however, the algorithmic bias and discrimination hidden behind this behavior are difficult for users to recognize.

Biases and Discrimination in Our Lives

Most of us have probably experienced algorithmic discrimination, even if its real impact on our lives feels hard to pin down. For example, on some online shopping or ride-hailing apps, users who buy frequently or often choose add-on services encounter higher prices in subsequent purchases. This phenomenon is called big data-enabled price discrimination (Qian, 2021), and it is one of the most common algorithmic biases in daily life. The definition of algorithmic discrimination, however, is broader and more complex: genuine algorithmic bias means that an algorithm loses its objective and neutral position in the production and distribution of information, which distorts the public's objective and comprehensive understanding of that information (Kitchin, 2017).
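
To make the pricing mechanism concrete, here is a minimal, hypothetical sketch of big data-enabled price discrimination. Nothing in it comes from any real platform; the profile fields, thresholds, and surcharges are all invented for illustration.

```python
# A minimal, hypothetical sketch of big data-enabled price discrimination.
# The profile fields, thresholds, and surcharges are all invented: real
# platforms do not publish their pricing logic.

from dataclasses import dataclass

@dataclass
class UserProfile:
    orders_last_month: int   # how often this user buys
    buys_addons: bool        # whether they usually take add-on services

def quote_price(base_price: float, user: UserProfile) -> float:
    """Quote a personalized price for an otherwise identical product."""
    multiplier = 1.0
    if user.orders_last_month > 10:  # frequent buyers treated as "locked in"
        multiplier += 0.08
    if user.buys_addons:             # add-on buyers treated as price-insensitive
        multiplier += 0.05
    return round(base_price * multiplier, 2)

# Two users, same product, different prices.
loyal = UserProfile(orders_last_month=15, buys_addons=True)
casual = UserProfile(orders_last_month=1, buys_addons=False)
print(quote_price(20.0, loyal))   # 22.6
print(quote_price(20.0, casual))  # 20.0
```

The product is identical; only the inferred price sensitivity of the buyer differs, which is exactly what makes the surcharge so hard for any single user to notice.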

There is a deeper issue here. Advertisers on platforms such as Amazon or TikTok now deliberately avoid targeting audiences by individually sensitive characteristics such as gender and race. Instead, they divide users into groups by seemingly neutral characteristics and offer different products, prices, and services to each group. Of course, under data-protection rules, collecting users' sensitive personal information or preferences in this way would be out of bounds. But how would we recognize it? On the one hand, all of this happens where we cannot see it. On the other hand, there is a puzzling gray area: does the personal information publicly displayed on social networking sites, or our interests and hobbies, or even the data retained by an app's personalization settings, count as private information, or as something we have chosen to trade for better service? Wachter, Mittelstadt and Floridi (2017) argue that, judged by outcomes alone, the discrimination produced by sorting people along neutral characteristics objectively exists. And this means that even apparently harmless neutral information is at risk of abuse. Companies such as Amazon may ostensibly restrict the types of data they collect, but this does not necessarily reduce algorithmic discrimination. On the contrary, the discrimination may persist in a more hidden form, such as gently recommending the products they think we will like (Pasquale, 2015).
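
A toy sketch of this "neutral characteristics" problem, sometimes called proxy discrimination: no sensitive attribute is ever collected, yet a seemingly neutral feature reproduces the sensitive split exactly. The data and the perfect postcode/gender correlation are fabricated purely for demonstration; the sensitive attribute appears only so we can audit the outcome.

```python
# A toy illustration of discrimination through "neutral" proxy features.
# The data and the perfect postcode/gender correlation are fabricated;
# the "gender" field exists here only so we can audit the result.

users = [
    {"id": 1, "postcode": "A1", "gender": "f"},
    {"id": 2, "postcode": "A1", "gender": "f"},
    {"id": 3, "postcode": "B2", "gender": "m"},
    {"id": 4, "postcode": "B2", "gender": "m"},
]

# The platform never touches "gender": it segments by postcode alone.
segments: dict[str, list[int]] = {}
for u in users:
    segments.setdefault(u["postcode"], []).append(u["id"])

# Each segment receives a different offer...
offers = {"A1": "full price", "B2": "10% discount"}

# ...yet because the proxy correlates with gender, the outcome is the same
# as if the platform had discriminated on gender directly.
for postcode, ids in segments.items():
    print(postcode, ids, "->", offers[postcode])
```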

Where Does It Come From?

Before an algorithm can be applied in the real world, it must pass through many stages, including defining its logic, collecting input data, and self-learning through training. Throughout this chain, the designer's own value judgments and the social tendencies implicit in the data run through the entire life of the algorithm.

The first source of bias and discrimination is bias built into the design of an algorithm's operating rules. To study the spread of audience preferences through big data analysis, we have to write algorithms, and in doing so we must preset standards for classifying information types or audience preferences. The preset classification labels, however, are often not objective measures but socially constructed concepts, and they reflect human subjectivity to a certain extent.
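
A small sketch of what "preset classification labels" means in practice. The two-label taxonomy below is invented; the point is that choosing it is itself a subjective, socially constructed act that every downstream decision inherits.

```python
# A sketch of how preset classification labels embed designer judgment.
# This two-label taxonomy is invented for illustration.

INTEREST_LABELS = {
    "sports": {"basketball", "football", "running"},
    "homemaking": {"cooking", "childcare", "cleaning"},
}

def classify(user_keywords: set[str]) -> str:
    """Force a user into whichever preset label overlaps their keywords most."""
    scores = {label: len(user_keywords & kws)
              for label, kws in INTEREST_LABELS.items()}
    return max(scores, key=scores.get)

# A person who likes both basketball and cooking still lands in exactly
# one box, because the rules were written to allow only these two boxes.
print(classify({"basketball", "cooking"}))
```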

Among the many forms of algorithmic bias and discrimination, those arising from an algorithm's rules and its training data sets may be the most common, or at least the easiest to understand. One case concerns discriminatory results accumulated through the algorithm's rules. Researchers selected and investigated MSCOCO, an image training data set backed by large companies such as Microsoft and Facebook, and found that certain features were strongly tied to certain images: a person shown caring for children in a kitchen tended to be labeled a woman, while a person swinging their limbs in sport-like motion tended to be labeled a man. Many of the photos did not actually carry these characteristics, but the algorithm, drawing on the cumulative results of its training, reached these discriminatory conclusions anyway (Zhao et al., 2017; Schroeder, 2021). The second source within this category is the relationship between the algorithm and ourselves. As users we shape how the algorithm is applied, yet the well-tuned feedback it gives us makes us subject to its logic in turn (Vedder & Naudts, 2017). In the era when traditional media dominated news communication, editors controlled the production and spread of news; they are now gradually giving way to the designers of algorithms. The rise of smartphones, and of the media they carry, rests on an entirely different business logic and technical rationale (Andrejevic, 2019). Yet the programmers and algorithm engineers doing the concrete work tend to devote their energy and professionalism to the technology itself rather than to evaluating whether subjective bias is embedded in the algorithm's logic, and it is difficult for them to review fairness and ethical standards once the system is in use. Objectively speaking, this is hard for them to achieve: most professionals develop algorithms or execute designs according to the instructions of managers or product managers, and without supervision the specific business logic, and the commercial interests built on their algorithms, cannot be foreseen.
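
The MSCOCO finding is usually described as bias amplification. Here is a deliberately simplified illustration in the spirit of Zhao et al. (2017); the 70/30 split is invented, and the mechanism is the point: a model that always predicts the majority label for a context turns a 70% correlation in the data into a 100% rule at prediction time.

```python
# A simplified illustration of bias amplification: an invented 70/30 skew
# in the training data becomes an absolute rule in the trained model.

from collections import Counter

# Toy training set: (scene context, annotated gender).
train = [("kitchen", "woman")] * 70 + [("kitchen", "man")] * 30

# "Training": memorize the majority gender seen in this context.
majority = Counter(gender for _, gender in train).most_common(1)[0][0]

def predict(context: str) -> str:
    # The model never looks at the actual person in the image; it simply
    # replays the correlation it accumulated during training.
    return majority

print(predict("kitchen"))  # 'woman', for every kitchen photo from now on
```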

The second source of bias and discrimination is bias arising during the algorithm's operation. An algorithm operates by taking in data according to an established program, interpreting it according to calculation rules, and outputting results. On the surface this mechanical process seems unlikely to be biased, but in fact it is not. Efficient, accurate information push depends on the recommendation system's recognition of user needs and interests (Benkler, Faris & Roberts, 2018). The deep learning of such a system begins by operating strictly according to preset principles, while the screening and supply of data are carried out manually. If the data used for training carries some tendency, the recommendation model will be equally biased after a period of learning.
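
A toy sketch of that manual "screening and supply" step. The articles and topics are invented; the point is that the human who filters the training feed decides what the model can ever learn.

```python
# A toy sketch of how manually screened training data biases what a model
# can learn at all. The articles and topics are invented for illustration.

raw_feed = [
    ("article_1", "economy"), ("article_2", "sports"),
    ("article_3", "economy"), ("article_4", "culture"),
]

# A human curator, following their own tendency, keeps only one topic.
curated = [(doc, topic) for doc, topic in raw_feed if topic == "economy"]

# The model's entire "world" is whatever survived curation: after training
# on this feed, sports and culture simply do not exist for it.
known_topics = {topic for _, topic in curated}
print(known_topics)  # {'economy'}
```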

Take artificial intelligence as an example: its goal is intelligent machine learning. In the process of deep learning, that is, in interaction with its environment, an algorithm is inevitably affected by external factors. When interacting with users, the algorithm cannot decide what data users will input, retain, or delete; it can only passively learn from whatever data users and the external environment provide. If the parties interacting with the algorithm feed it new data saturated with bias, an originally fair algorithm will, after deep learning, be alienated into a problem algorithm. Microsoft once launched a chatbot, Tay, on Twitter. It learned through dialogue with humans, and under the malign influence of an online community from the 4chan website, it became an AI exhibiting racial discrimination and gender bias (Neff & Nagy, 2016).
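
A hedged sketch of how unfiltered online learning can be poisoned by hostile users, in the spirit of the Tay incident. The "model" below is deliberately naive and entirely invented: it replies with phrases in proportion to how often it has seen them, and it learns from every message without any screening.

```python
# A naive online-learning chatbot: every user message goes straight into
# the model, so a coordinated flood of inputs takes over its behavior.

import random
from collections import Counter

class ChatBot:
    def __init__(self) -> None:
        self.memory = Counter({"hello friend": 1, "what a nice day": 1})

    def learn(self, utterance: str) -> None:
        self.memory[utterance] += 1   # no filter between users and the model

    def reply(self) -> str:
        phrases = list(self.memory)
        weights = list(self.memory.values())
        return random.choices(phrases, weights=weights, k=1)[0]

bot = ChatBot()
for _ in range(100):                  # a coordinated group floods the bot
    bot.learn("<offensive slogan>")
print(bot.reply())                    # almost certainly the flooded phrase
```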

More Far-Reaching Impacts

The actual impact of algorithmic bias and discrimination is multi-dimensional. Self-media and open shopping platforms that aggregate large volumes of content now release information to society with much the same authority as institutional media. This influence therefore shapes not only the deterministic results people have to accept, but also society's understanding of itself.

First, it gives people a distorted cognition. In the current environment it is already difficult to reach a comprehensive and objective understanding of the world. Prejudice, as part of our subjective world, works by affecting judgment and reasoning: it shapes how we evaluate and remember certain things, and can even strengthen and sustain the sense that our inherent biases are rational. Information is, in essence, neutral; algorithmic bias, as a preset attitude, injects wrong or one-sided judgments into the communication process (Andrejevic, 2019). The media, and new-media methods in particular, turn this information and data into "data relations", a new kind of interpersonal relation whose data can be extracted and commercialized, becoming an "open" resource extracted from social life around the world (Couldry et al., 2019). Through the Internet and interpersonal re-dissemination, prejudice can spread quickly and widely, and the harm it causes confuses the whole flow of information. More seriously, distorted cognition and attitudes mislead social psychology and become a latent source of social estrangement and social conflict.

Second, it ethically challenges users' right to know and their freedom of information choice. When algorithms are applied in social media, the content a user receives has already been filtered by the platform; the platform makes information selections on the user's behalf and, to a degree, dissolves the user's own right to choose. The new generation of media professionals, who skillfully control news-release channels through technology, rely heavily on massive quantitative information about existing and potential readers. They treat readers as "algorithmic audiences" whose needs and desires are extremely easy to identify, and which the right algorithm can just as easily satisfy (Morozov, 2011). The recommendation algorithms adopted by Amazon or Facebook, for example, filter out information irrelevant to a user's preferences according to implied value preferences, highlighting what users want to know while ignoring what, by ethical standards, users ought to know (Moore & Tambini, 2018). This practice damages the user's right to know and freedom of information choice, and it erodes the user's humanistic value judgment and sense of social responsibility.

Algorithmic recommendation thus meets users' personalized needs while simultaneously narrowing their exposure to information, forming a relatively closed space; a sketch of this feedback loop follows below. Inside this space, existing information and views are further confirmed and strengthened, which reduces the diversity of information dissemination, narrows the information itself, and multiplies information islands across society. In response, in December 2018 the EU High-Level Expert Group on Artificial Intelligence published draft Ethics Guidelines for Trustworthy AI. The draft proposed not only that AI must not cause physical or mental harm to the public and must avoid bias and discrimination arising from algorithms and data, but also that the public must retain full, independent decision-making rights (Smuha, 2019). Users' right to know and to choose is mainly reflected in their ability to freely handle lawful information according to their own wishes; in essence, this is a matter of respecting people's autonomy. Formally, the algorithm does not interfere with the user's freedom of information choice, but in substance this kind of algorithmic bias and discrimination constantly crosses the line.
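
Here is the promised sketch of the narrowing loop. The items and the one-topic preference model are invented; real recommenders are far more elaborate, but the dynamic of the feed confirming the preference, and the preference then shrinking the feed, is the same.

```python
# A minimal sketch of a filter-bubble feedback loop: the feed keeps only
# items matching the user's inferred preference, and each click on that
# feed re-confirms the preference. All items here are invented.

def build_feed(items: list[dict], preference: set[str]) -> list[dict]:
    """Drop everything 'irrelevant' to the user's current preference."""
    return [item for item in items if item["topic"] in preference]

items = [
    {"title": "Team wins the final", "topic": "sports"},
    {"title": "New climate report", "topic": "science"},
    {"title": "Election analysis", "topic": "politics"},
]

preference = {"sports"}
for round_number in range(3):
    feed = build_feed(items, preference)
    clicked = feed[0]                    # the user clicks what they are shown
    preference = {clicked["topic"]}      # the click re-confirms the preference
    print(round_number, [item["title"] for item in feed])

# The science and politics items never reach the user: "what they want to
# know" has silently crowded out "what they should know".
```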

A Long Way Worth Going

Although an algorithm in itself can be objective and neutral, the people who build and use it take part in producing and releasing information through it, and the challenges this creates continue to affect human society. The concepts of the information society, digitization, and big data have gradually entered real social life, yet as products of rapid technological development, the role algorithms play when people encounter a specific piece of information or a specific choice still feels strange and mysterious to most. Algorithm designers, operators, and users should therefore pay more attention to the values an algorithm embodies, and even incorporate them into professional skill training and the general education system. This is not only a way to keep algorithmic platforms fair and open, but also a way to help more people understand and recognize, in the future, how algorithms shape our real lives.

References:

Andrejevic, M. (2019). 'Automated culture'. In Automated Media. London: Routledge, pp. 25-43.

Benkler, Y., Faris, R. & Roberts, H. (2018). Network Propaganda: Manipulation, Disinformation, and Radicalization. New York: Oxford University Press, pp. 3-43.

Couldry, N., Mejias, U., Trere, E. & Milan, S. (2019). 'Data colonialism: Rethinking big data's relation to the contemporary subject'. Television and New Media, 20(4), pp. 336-349.

Kitchin, R. (2017). 'Thinking critically about and researching algorithms'. Information, Communication & Society, 20(1), pp. 14-29.

Moore, M. & Tambini, D. (eds.) (2018). Digital Dominance: The Power of Google, Amazon, Facebook, and Apple. Oxford: Oxford University Press, pp. 21-49.

Morozov, E. (2011). 'Don't be evil'. The New Republic, 242(11), pp. 18-24.

Neff, G. & Nagy, P. (2016). 'Automation, algorithms, and politics | Talking to bots: Symbiotic agency and the case of Tay'. International Journal of Communication, 10, 17.

Pasquale, F. (2015). 'The need to know'. In The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge: Harvard University Press, pp. 1-18.

Qian, M. (2021, December). 'Analysis on the phenomenon of "big data-enabled price discrimination" against existing customers of internet enterprises based on evolutionary game theory'. In 2021 2nd International Conference on Big Data Economy and Information Management (BDEIM), pp. 371-374. IEEE.

Schroeder, J. E. (2021). 'Reinscribing gender: Social media, algorithms, bias'. Journal of Marketing Management, 37(3-4), pp. 376-378.

Smuha, N. (2019). 'Ethics guidelines for trustworthy AI'. Presented at AI & Ethics, 28 May 2019, Brussels, Belgium.

Vedder, A. & Naudts, L. (2017). 'Accountability for the use of algorithms in a big data environment'. International Review of Law, Computers & Technology, 31(2), pp. 206-224.

Wachter, S., Mittelstadt, B. & Floridi, L. (2017). 'Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation'. International Data Privacy Law, 7(2), pp. 76-99.

Zhao, J., Wang, T., Yatskar, M., Ordonez, V. & Chang, K. W. (2017). 'Men also like shopping: Reducing gender bias amplification using corpus-level constraints'. Retrieved from: https://arxiv.53yu.com/abs/1707.09457