When we talk about the ethics of algorithms, algorithmic justice is an inescapable topic, one commonly framed as the problem of “algorithmic discrimination” or “algorithmic bias”.
In China, algorithmic bias and discrimination are widely discussed, and the most typical case is “big data kills old users”: for the same product or service, the price shown to existing customers is higher than the price shown to new customers (Yang & Luo, 2018). Many Internet companies use their own user data to price-discriminate against existing users; with this strategy they can attract new users with low prices while extracting more profit from old ones. For example, Didi, China’s largest ride-hailing platform, was reported by multiple users to vary its prices from person to person: for a journey with the same starting and ending points, old users paid more than new users, and users on Apple phones were shown higher prices than users on Android phones. Meituan, China’s largest food-delivery platform, was likewise reported to show new users lower prices and lower delivery fees than old users for the same order. The phenomenon is very common in China, and because of its complexity and concealment it is genuinely difficult for consumers to defend their rights, or even to prove what happened, after discovering they have been “killed” by big data. These practices have left many users feeling deceived and angry.
At present there are two broad approaches to the problem of algorithmic bias and algorithmic discrimination. The first is the design approach, which seeks to solve the problem from the perspective of algorithm design: the design of algorithms should have an ethical dimension, developers should bring ethical considerations to their work, and they need to be aware of the unjust consequences algorithms can cause and design so that those possible consequences do not become reality. The second is the law and policy approach, which attempts to avoid or alleviate algorithmic bias and discrimination by establishing and improving relevant laws, regulations, and social policies. Whichever approach scholars take, however, the two concepts of algorithmic bias and algorithmic discrimination are usually discussed together, as if the transition from the former to the latter were a matter of course. As Qingfeng Yang (2019) put it, “a unique concept has gradually formed in the discussion on big data and artificial intelligence bias issues, that is, prejudice (bias) is equal to discrimination”. On many occasions algorithmic bias and algorithmic discrimination are treated as a single problem. But is bias the same as discrimination? Does algorithmic bias necessarily lead to algorithmic discrimination? These questions are worth asking. In fact, the problem that needs to be solved in the field of AI ethics is algorithmic discrimination, not algorithmic bias. Algorithmic bias is inevitable and is, to some extent, a precondition for the application of algorithmic technology; algorithmic discrimination is the ethical issue in the field of artificial intelligence that causes adverse consequences. The ethical issue that truly deserves attention is algorithmic discrimination, and the developers and users of algorithms are the ones responsible for it.
1. Algorithmic bias and algorithmic discrimination
The difference between algorithmic bias and algorithmic discrimination is essentially the difference between bias and discrimination; algorithms merely provide a new context for the issue. The concept of bias is difficult to define precisely: “bias is a feature of human life that is intertwined with, or used interchangeably with, many different names and labels – stereotypes, bias, implicit or subconsciously held beliefs, or closed-mindedness” (Howard & Borenstein, 2018, p. 1521).
Gadamer (1960/2014) pointed out that “prejudice” cannot be eliminated: the Enlightenment’s long effort to drive out “prejudice” was itself nothing but a “prejudice against prejudice”. He argued that “prejudice” is constituted by the historicity of understanding, that this historicity is inevitable, that human understanding is conditioned by people’s inherent prejudices, and that the activities of human interpretation and understanding can never be immune to “prejudice” (Gadamer, 1960/2014). Moreover, because “prejudice” arises from the historical character of understanding, it is open both to the past and to the future: the prejudices of the past form the prejudices of the present, and the prejudices of the present will become part of the prejudices of the future. That is, contrary to our stereotype of “prejudice”, the concept itself is not stubborn and immutable but inclusive and open. Gadamer (1960/2014) also distinguished between “productive prejudices” and “prejudices that hinder understanding or lead to misunderstanding”: the former come from history and constitute the precondition of our knowledge and understanding, while the latter are acquired and obstruct the activity of knowing and understanding. Although the two are conceptually distinguishable, they are indistinguishable at the factual level.
Unlike “bias”, “discrimination” is generally treated as a legal concept in today’s society. The term means “any distinction, exclusion, restriction based on any characteristic of race, colour, sex, language, religion, political or other opinion, nationality, ethnic or social origin, wealth, disability, birth or other status or preferred conduct, the purpose or effect of which denies or prevents anyone from realizing, enjoying or exercising equal rights and freedoms” (USDOS, 2014). Zhu (2005) likewise pointed out in “On the Prohibition of Discrimination in the Human Rights Conventions”: “Any form of discrimination is based on distinction, and the basis of distinction is individual characteristics, such as race, colour, gender, language, religion, political or other opinions, national or social origin, property, birth or other status. These different characteristics of individuals constitute a precondition for discrimination, but not all forms of distinction are discriminatory; the distinctions opposed by human rights law are those based on unreasonable and subjective standards.” In short, two factors are necessary to constitute “discrimination”: treating others differently on the basis of certain of their characteristics, and thereby causing harm to the person discriminated against.
“Discrimination” is based on “bias”, but “bias” does not necessarily lead to “discrimination”. The biggest difference between the two is that “discrimination” is a kind of behavior, which necessarily produces an adverse consequence, while “bias” is only a cognitive state, which can be positive, negative, or even value-neutral; only part of the negative “prejudice” may turn into “discrimination”. The process of discriminatory behavior can be written as “prejudice-cognition-judgment-behavior-discrimination”. A rational person, especially in the moral domain, should add a reflection step between judgment and behavior, re-examining the earlier judgment in light of how prejudice influences cognition, so as to minimize the prejudice that translates directly into discrimination. In short, prejudice precedes discrimination; only part of negative prejudice turns into discrimination; prejudice can be positive, negative, or neutral, while discrimination can only be negative.
2. Algorithmic bias and its sources
There are three main sources of algorithmic bias. The first is bias in the data. The computing field has a famous principle, “GIGO”: garbage in, garbage out. If you feed a computer garbage data, its output can only be garbage; likewise, if your input data carry implicit bias, the resulting algorithm must carry implicit bias too. Data drawn from everyday life often contain, for a number of reasons, the implicit biases inherent in human society. “Implicit bias” refers to those biases that are hidden deep within us and are difficult to detect; Brownstein (2016) defines implicit bias as “relatively unconscious and relatively automatic characteristics of biased judgments and social behavior”. We hold these biases without even realizing it (their counterpart may be called “explicit bias”), so even data that objectively record reality contain them. The objectivity of data lies only in the faithfulness of the record: the implicit biases of human society are faithfully recorded, and when such data are used as training or validation data, those biases surface as algorithmic biases. For example, human society has long held biased evaluations of men’s and women’s abilities, such as “on average, men’s capacity for rational analysis is higher than women’s”. On the basis of such a conclusion, more men may be admitted to positions thought to require strong rational analysis; the bias is thus hidden in a body of admission records, which are then used as the algorithm’s training data, and the algorithm is trained into a biased algorithm.
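To make this mechanism concrete, the following is a minimal Python sketch using entirely synthetic data and hypothetical feature names: the recorded admission decisions include a bonus for men, and a model fitted faithfully to that record reproduces the differential treatment.

```python
# Minimal sketch: how a biased historical record trains a biased model.
# All data are synthetic and the feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# gender: 0 = female, 1 = male; test_score: the genuinely relevant feature.
gender = rng.integers(0, 2, n)
test_score = rng.normal(70, 10, n)

# Historical decisions: admission depended on the test score, but past
# committees also applied a bonus for men: the implicit bias in the record.
admitted = (test_score + 5 * gender + rng.normal(0, 5, n)) > 75

# Training on the biased record "faithfully" reproduces the bias.
X = np.column_stack([gender, test_score])
model = LogisticRegression(max_iter=1000).fit(X, admitted)

# Two applicants with identical scores get different predicted chances.
same_score = [[0, 75.0], [1, 75.0]]  # female vs. male, same test score
print(model.predict_proba(same_score)[:, 1])  # probability is higher for the man
```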
The second source is developer bias, which includes both explicit and implicit bias. Explicit developer biases are those built into a design intentionally. In “big data kills old users”, for example, the same product is sold at a higher price to users with relatively high stickiness and at a lower price to users with relatively low stickiness; such a rule is usually set by hand rather than learned automatically by the algorithm. As long as the developers themselves are not malicious, this intentional explicit bias is easy to detect and remove, but in practice many companies build biased algorithms for their own benefit at the expense of their users’ rights. Moreover, intentional bias is not always harmful: whether it is depends on how well the developer’s bias fits the facts, that is, on the developer’s intuition or experience. If the developer’s intuition is accurate or their experience sound, building the bias into the program can make it more efficient; this is a beneficial bias.
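As an illustration, here is a minimal sketch, with invented names and multipliers, of what such a hand-written (rather than learned) pricing rule might look like; it is the deliberateness of rules like these that makes explicit developer bias comparatively easy to find once anyone looks.

```python
# Minimal sketch of a hand-coded (not learned) discriminatory pricing rule
# of the "big data kills old users" kind. Names and multipliers are invented
# for illustration only.
from dataclasses import dataclass

@dataclass
class User:
    orders_last_year: int   # proxy for "stickiness" / loyalty
    device: str             # e.g. "ios" or "android"

BASE_PRICE = 20.0

def quoted_price(user: User) -> float:
    price = BASE_PRICE
    if user.orders_last_year > 50:     # loyal users assumed price-insensitive
        price *= 1.15
    elif user.orders_last_year == 0:   # new users get a teaser discount
        price *= 0.80
    if user.device == "ios":           # device treated as a wealth proxy
        price *= 1.05
    return round(price, 2)

print(quoted_price(User(orders_last_year=80, device="ios")))     # old iOS user pays more
print(quoted_price(User(orders_last_year=0, device="android")))  # new Android user pays less
```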
The third source is the bias of the algorithm itself. Diakopoulos (2016) proposes that, from the perspective of how algorithms operate, the very use of an algorithm may be a form of differential treatment: characteristics such as prioritized ranking, classification, association, and filtering and exclusion make the algorithm itself a system of differential treatment. In other words, the operating principle of an algorithm is itself an act of “labeling”, and these “labels” are undoubtedly explicit or implicit biases; the algorithm is therefore necessarily biased. In this sense, a person is regarded as a “data subject” with a “data identity”. Yet this way of operating is not unique to algorithms; it is the algorithm’s learning and imitation of human cognition. “Categories” and “labels” are essentially the same thing, and both are what Gadamer calls “prejudice”. As argued above, human cognitive activity is inherently inseparable from bias, and so are algorithms. In this sense, algorithmic bias is the premise for the application of algorithmic technology.
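The point can be seen in miniature below: a hypothetical scoring-and-filtering pipeline (weights and thresholds invented for illustration) shows how prioritizing, classifying, and filtering by themselves assign each person a “data identity” and treat the resulting groups differently.

```python
# Minimal sketch: even "neutral" algorithmic primitives (scoring, sorting,
# filtering) necessarily label people and treat the resulting groups
# differently. The weights and threshold are invented for illustration.
applicants = [
    {"name": "A", "income": 30_000, "years_employed": 1},
    {"name": "B", "income": 90_000, "years_employed": 10},
    {"name": "C", "income": 55_000, "years_employed": 4},
]

def score(a: dict) -> float:
    # Prioritizing by a weighted score is already a choice about who matters.
    return 0.7 * a["income"] / 100_000 + 0.3 * a["years_employed"] / 20

# Classifying: the threshold turns a continuum into a binary "label".
labelled = [{**a, "tier": "prime" if score(a) > 0.4 else "subprime"}
            for a in applicants]

# Filtering and exclusion: only one "data identity" ever reaches a human.
shortlist = sorted((a for a in labelled if a["tier"] == "prime"),
                   key=score, reverse=True)
print(shortlist)  # applicant A has silently been filtered out
```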
To sum up, algorithmic bias, like human bias, is inevitable. Subjective, explicit biases are relatively easy to identify and eliminate; it is the implicit biases that make algorithmic bias so pervasive, for they are the precondition of our understanding and interpretation of the world, and likewise a necessary condition for the realization of algorithmic technology.
3. Who should be responsible for algorithmic discrimination
Just as bias can lead to discrimination, algorithmic bias can lead to algorithmic discrimination, but not all algorithmic bias does. As argued above, algorithmic bias is inevitable: an algorithm will always output a biased result. This does not mean algorithmic bias is necessarily bad; as noted earlier regarding beneficial bias, it can sometimes even be an advantage. Only those algorithmic biases with negative import are likely to lead to bad outcomes and thus to turn into algorithmic discrimination. In general, the reason algorithmic bias turns into algorithmic discrimination is ultimately the role played by people.
Most algorithmic discrimination is ultimately human discrimination: people make decisions without reviewing the biased results of the algorithm, and this constitutes what we call algorithmic discrimination. From the standpoint of how algorithms work, an algorithm based on big data can only deliver conclusions about correlation, that is, the correlations indicated by the data. What the algorithm does is essentially inductive reasoning, and its output is just an inductive hypothesis. It is the users of the algorithm who turn the discriminatory solutions and suggestions it produces into discriminatory behavior. Algorithmic discrimination therefore has a clear locus of responsibility: although algorithmic bias is unavoidable, it does not necessarily lead to algorithmic discrimination, and those who actually design and use algorithms should be held responsible for discriminatory behavior. The key to preventing algorithmic bias from becoming algorithmic discrimination lies in regulating people, especially the developers and users of algorithms.
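One way to operationalize this responsibility, sketched below with assumed names and a hypothetical suggestion format, is to treat every algorithmic output as an inductive hypothesis that must pass a human review gate before it becomes an action, with rejected suggestions logged for audit.

```python
# Minimal sketch of the "reflection link" argued for above: the algorithm's
# output is only a hypothesis, and a human reviewer must approve it before
# it becomes an action. All names and fields here are hypothetical.
from typing import Callable

def act_on_suggestion(suggestion: dict,
                      review: Callable[[dict], bool],
                      execute: Callable[[dict], None]) -> None:
    """Route every algorithmic suggestion through a human review gate."""
    if review(suggestion):          # the human, not the model, decides
        execute(suggestion)
    else:
        log_rejection(suggestion)   # rejections are kept for audit

def log_rejection(suggestion: dict) -> None:
    print("rejected and logged for audit:", suggestion)

# Example: a reviewer who refuses suggestions that differentiate by an
# irrelevant or protected attribute.
def reviewer(s: dict) -> bool:
    return "device" not in s.get("based_on", [])

act_on_suggestion(
    {"action": "raise_price", "user": 42, "based_on": ["device", "loyalty"]},
    review=reviewer,
    execute=lambda s: print("executed:", s),
)
```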
To sum up, responsibility for algorithmic discrimination rests with the developers and users of algorithms.
To avoid algorithmic discrimination, we must face up to algorithmic bias. On the one hand, we should always maintain a cautious attitude toward algorithms: when developing and applying them, explicit personal biases should be prohibited, explicit and implicit biases in the system should be actively identified, and the solutions and suggestions an algorithm gives should not be followed blindly but thought through. On the other hand, the domains in which autonomous decision-making algorithms are applied should be chosen carefully, avoiding as far as possible those in which discrimination may directly result; and when autonomous decision-making algorithms are used, developers and users should ensure that corresponding supervisory and compensation mechanisms are in place.
References
Chen, H. (1996). Prejudice: From illegal to legal. Journal of Xiamen University, 25-30.
Cui, J. (2019). The crisis and response of equal rights protection under the challenge of algorithmic discrimination. Legal Science, 29-42.
Diakopoulos, N. (2016). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 765-786.
Gadamer, H.-G. (2014). Truth and method. London: Bloomsbury Academic. (Original work published 1960)
Howard, A., & Borenstein, J. (2018). The ugly truth about ourselves and our robot creations. Science and Engineering Ethics, 1521-1536.
Yang, C., & Luo, X. (2018). A preliminary study on the comprehensive treatment of algorithmic discrimination. Science and Society, 4th ser., 8-19.
Yang, Q. (2019). Whether data bias can be eliminated. Studies in Dialectics of Nature, 109-113.
Zhu, Z. (2005). On the prohibition of discrimination in the human rights conventions. Law Review, 143-150.