Ethical issues with artificial intelligence

 

Introduction

From The Matrix to Ex Machina, countless novels, films, and other works have returned to one theme: when artificial intelligence with autonomous consciousness emerges, will humans be able to live in harmony with the new beings they create? Will the dominant role in society shift from humans to artificial intelligence? At present, there is no consensus on the consequences of AI applications. However, with the rapid development from weak AI to strong AI (Wei, 2019), both the positive and negative effects of AI on human economic activity and daily life are emerging, even as it greatly improves the efficiency of human society.

When online shopping search engines make our shopping more efficient, are our purchasing decisions being induced by recommendation algorithms? As we grow accustomed to interacting on social media, is real interaction time with friends and family being squeezed out? When AI recommends working methods and estimates completion times, are we really balancing increased productivity against our personal needs? Do we, as employees, retain any scope for initiative? Do we have the right to say no to the algorithm?

From daily life to economic development, the application of artificial intelligence brings opportunity and efficiency, but it is also accompanied by uncertainty and social-ethical dilemmas (Cath, 2018). Existing social and ethical regulations are difficult to adapt to the emerging intelligent society (Floridi, 2018), and a booming intelligent economy driven by the rush for quick profits has the potential to trigger social crises. The social ethics of AI is therefore a prerequisite topic for AI research and development, and we should critically examine the ethical regulation of AI as a guideline for its future development.

 

 

(Image source: https://www.albawaba.com/business/what-pink-tax-and-how-does-it-affect-middle-eastern-women)

The “pink tax” – who shapes our personal preference and gender identity?

In terms of decision aids, machine learning algorithms, even as they provide precise and personalised services to the public, are exacerbating social injustices rooted in gender expectations. In the digital era this feeds a deeper debate: do we form our identities through our own choices, or do algorithmic choices shape our personal preferences and gender identity?

(Image source: https://www.mings-fashion.com/salon-%E5%85%A9%E6%80%A7%E5%B9%B3%E7%AD%89-pink-tax-303955/)

The pink tax, also known as the gender tax, refers to functionally identical or similar products that are priced higher for female consumers: women pay more than men for the same products and services (Guittar et al., 2021). From an economic perspective, the pink tax is a form of price discrimination and a means of profit maximisation for businesses. Its premise in the digital era is that artificial intelligence, drawing on consumer identity information and shopping history, can tailor a gender-specific sales strategy to each individual. When a female consumer opens a shopping site and searches for an item, the intelligent recommendations are often fancy, flashy versions built on gender stereotypes, with some items selling for double the price of their counterparts simply for an additional pink colourway. Women must then either spend more time sifting for the items that actually fit their needs or pay a premium for their gender.

The gender differentiation imposed on products also makes price comparison harder for female consumers. If women are not aware of this gender expectation and do not actively adjust their browsing preferences, recommendations for overpriced items will keep arriving, reinforcing the extent to which women cater to them. Personalised recommendation algorithms built on gender bias reinforce existing impressions, solidify individual behaviour and widen the gender preference gap. In this process of receiving information and making decisions, women see information selected according to the AI’s predetermined gender expectations. The pink tax of the digital era thus builds on AI’s preset gender stance and further reifies gender structures and inequality in society (Guittar et al., 2021).
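The steering mechanism described above can be illustrated with a deliberately simplified sketch. All product names, prices, and the profile structure here are hypothetical; real recommendation systems are vastly more complex, but the basic asymmetry is the same:

```python
# Toy illustration of gender-based price steering (the "pink tax").
# All products, prices, and profile fields are hypothetical.

PRODUCTS = [
    {"name": "razor (black)", "price": 4.99, "target": "male"},
    {"name": "razor (pink)",  "price": 9.99, "target": "female"},
]

def recommend(profile: dict) -> dict:
    """Surface the variant whose marketing target matches the gender
    the platform has inferred from the user's profile and history."""
    matches = [p for p in PRODUCTS if p["target"] == profile["inferred_gender"]]
    return matches[0] if matches else PRODUCTS[0]

# A functionally identical product is surfaced at double the price,
# purely because of the inferred gender label.
print(recommend({"inferred_gender": "female"})["price"])  # 9.99
print(recommend({"inferred_gender": "male"})["price"])    # 4.99
```

The point of the sketch is that the price gap never appears side by side: each user only sees the variant selected for them, which is exactly why comparison becomes harder.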

The process by which machine learning algorithms form a “rule set” from a “big data set” and apply it to specific scenarios follows the basic logic of inferring individual behaviour from overall characteristics (Barocas & Selbst, 2016). At the same time, the uncertainty and complexity of human society mean that “big data sets” are inherently incomplete, and missing data leads to biases in the resulting rule sets that may reinforce existing gender discrimination. If we extend this to other dimensions, such as education and socialisation, is gender awareness and identity shaped by humans or by AI? This is the structural paradox of AI, and we still need to think carefully about how to retain our individuality amid so-called personalised, precisely recommended information.

 

 

(Image source: https://www.technogone.com/facebook-owned-apps/)

The app matrix of the Internet’s leading companies reaches a monopoly of users

Artificial intelligence algorithms may also undermine the competitive market environment in ways that are hard to detect and prove, such as colluding via algorithms to form horizontal monopoly agreements or hub-and-spoke agreements. AI profiles users, determines their needs and predicts their decisions from past behaviour. When a user signs up for a new platform, AI can only make preliminary recommendations based on information the user provides (e.g. gender, as above), again inferring individual behaviour from overall characteristics. But individual users differ greatly, and many human traits cannot be captured precisely as data; formalising human constructs for computers can itself introduce technological bias, since constructs such as values or intuition are often difficult to quantify, making their translation into computers difficult or even impossible (Friedman & Nissenbaum, 1996).

Users who keep receiving recommendations they dislike on a platform will abandon it and move to other platforms of the same type. If users leave so quickly, the AI cannot correct its algorithm from their behaviour, no positive feedback loop forms, and user retention falls. So how do companies, given the need for business benefits, get around this paradox?

The solution is to share the user profiles already built up by the company’s other platforms (such as the product matrices of Facebook or Tencent), so that users can, for instance, sign in to other Tencent products such as QQ Music directly with their WeChat account and import friends from the original platform. This invades users’ privacy and uses algorithmic collusion to form a data monopoly, undermining the competitive market environment and producing a winner-takes-all outcome. For convenience, users move between the different platforms of a single company, and the company achieves a literal monopoly of users. On the other hand, pushing the same type of information may contradict the very purpose of trying a new platform (to gain a different perspective) and solidify the information cocoon. Early intervention based on prediction can certainly be proactive, but intervening before anything happens verges on excessive control.
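The cross-platform profile sharing described above can be sketched as a simple merge of per-platform interest signals keyed by a shared account. The platform names, account IDs, and interest counts below are all invented for illustration; the point is only that a new platform starts with a fully formed picture of the user:

```python
# Hypothetical sketch: merging per-platform user profiles into one
# cross-platform profile keyed by a shared account ID.
from collections import defaultdict

def merge_profiles(platform_profiles: dict) -> dict:
    """Union the interest signals collected by each of a company's platforms.

    platform_profiles maps platform name -> {"account_id", "interests"},
    where interests are simple engagement counts (hypothetical).
    """
    merged = {"interests": defaultdict(int)}
    for platform, profile in platform_profiles.items():
        for interest, count in profile["interests"].items():
            # Signals from every platform accumulate into one profile.
            merged["interests"][interest] += count
    return merged

profiles = {
    "chat_app":  {"account_id": "u123", "interests": {"pop music": 3}},
    "music_app": {"account_id": "u123", "interests": {"pop music": 2, "jazz": 1}},
}
merged = merge_profiles(profiles)
print(dict(merged["interests"]))  # {'pop music': 5, 'jazz': 1}
```

Even this trivial merge shows why the first recommendation on a “new” platform can already reflect behaviour the user exhibited elsewhere, which is precisely the data-monopoly concern raised above.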

 

 

(Image source: https://hellonimbly.com/7-tips-for-maximizing-takeaway-and-delivery-services/)

Digital control and conjugate domination: productivity relations under AI systems

In ‘Labour Order under “Digital Control”: A Study on Labour Control of Takeaway Riders’, Dr Chen Long introduces the concept of digital control, a shift from physical machines and computer equipment to virtual software and data. Artificial intelligence makes a labour order possible by imperceptibly collecting and analysing delivery riders’ data and then applying the results back to the riders (Chen, 2020).

During six months of field research working as a rider, he discovered a shortcut outside the platform’s planned delivery route that saved considerable time, so riders kept taking the route they had found. After a while, the AI detected this through tracking and analysis of riders’ delivery data and shortened the allotted delivery times. The time riders had saved by discovering the shortcut was thus wiped out by the platform’s algorithm. Facing the algorithm, riders cannot change their situation through their own efforts, because by big-data measurement the AI can always find a more optimised solution. On the contrary, the harder they try, the more tired they become; the more they try to escape the algorithm on their own, the more they are trapped in it. The riders have, objectively, become a resource that helps the platform optimise its algorithm.
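This feedback loop can be sketched as a toy simulation. The deadline policy and all numbers below are hypothetical (the study does not publish the platform’s actual formula); the sketch only shows how any time riders save is absorbed into a tighter quota:

```python
# Toy model of the deadline-tightening loop: the platform re-estimates
# the deadline from the fastest observed deliveries. All numbers and the
# update policy are hypothetical.

def updated_deadline(observed_minutes: list, buffer: float = 2.0) -> float:
    """Assumed platform policy: new deadline = fastest observed
    delivery time plus a small buffer."""
    return min(observed_minutes) + buffer

# Riders follow the planned route: the deadline stays around 30 minutes.
deliveries = [29.0, 28.5, 29.5]
print(updated_deadline(deliveries))  # 30.5

# Riders discover a shortcut and start finishing in ~22 minutes; the
# platform's next deadline absorbs the saving.
deliveries = [22.0, 23.0, 22.5]
print(updated_deadline(deliveries))  # 24.0
```

Under any policy of this shape, a rider’s individual effort can only ratchet the standard downwards for everyone, which is exactly the trap the ethnography describes.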

This episode raises two issues. The first is the privacy of the rider’s own trajectory. Outdoors, the AI tracks the rider through the smartphone’s GPS signal; indoors, the rider’s data is recorded through the merchant’s Wi-Fi network and indoor positioning base stations, including movement status, arrival time and pick-up time. Artificial intelligence trains machine learning algorithms on massive, multifaceted, real-time data, and the rider’s private information is inevitably collected and stored in the process. Some scholars argue that machine learning algorithms have become self-producing: by forming rule sets and applying them to perception and decision-making across scenarios through self-training on big data, they have freed themselves from reliance on human expressive abilities, and the resulting unexplainability and self-reinforcement dilemmas pose risks to personal privacy and information security (Veale et al., 2018). Because of the nature of their profession, riders do need to share their delivery trajectories with the platform and with customers, but this does not entitle the AI to use the data for algorithmic optimisation that squeezes them to their limits. Digital control not only weakens riders’ willingness to resist and invades the space in which they exercise autonomy, but also makes them unwittingly complicit in managing themselves. It also shows that the means of capital control are moving not only from the authoritarian to the hegemonic, but also from the physical to the virtual.

On the other hand, if a group of riders is unaware of such a shortcut yet faces very tight delivery deadlines, they are forced into dangerous behaviour (leaving prescribed roads, running red lights, etc.) to finish on time; the design flaws and value settings of AI systems built to maximise efficiency may thus threaten the right to life and health. Moreover, academia still debates whether AI can bear full moral responsibility, which makes it harder to attribute rights and responsibilities in AI accidents. Accidents caused by AI involve no subjective human intent, yet the objective behaviour is predetermined by human-written algorithms, and the algorithms themselves are capable of self-reinforcement. This complex relationship between AI and humans in design and operation makes it difficult to identify the proper objects of ethical and legal regulation.

The productive relations under AI management need to be redefined. Here the AI appears to serve the rider by planning the best route, but in reality it serves the enterprise, stripping the rider of initiative for the purpose of maximising corporate profit. The author argues that AI and the rider have entered a relationship of conjugate domination, which raises new concerns about whether AI-assisted productivity gains are accompanied by a deeper level of exploitation.

 

 

Conclusion

Mankind is experiencing a “Fourth Industrial Revolution” with artificial intelligence (AI) as its core driver, with wide and profound effects on human economic activity. The First Industrial Revolution brought great leaps in productivity as well as problems such as child labour and excessive working hours; we need to remember the lessons of history and make this period of transition as smooth as possible. The current debate on AI focuses on government and industry concerns and on the goals of innovation and economic growth at the expense of social and ethical issues (Marda, 2018). We need to balance the relationship between humans and AI and establish sound values for the human-machine relationship, recognising that it is not an abstract question of who controls whom but a contextual relationship to be judged dynamically against the stage of technological development and the specific application scenario. Building on reflection and foresight about AI’s social and ethical dilemmas, we should aim to promote human well-being, exploit the positive effects of AI applications and guard against the negative ones, as a guideline for the future development of AI.

 

 

References

1. Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2477899

2. Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080.

3. Chen, L. (2020). Labour order under “digital control”: A study on labour control of food delivery riders. Sociological Research, (6), 113–135, 244.

4. Floridi, L. (2018). Soft ethics, the governance of the digital and the General Data Protection Regulation. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180081.

5. Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330–347. https://doi.org/10.1145/230538.230561

6. Guittar, S., Grauerholz, L., Kidder, E., Daye, S., & McLaughlin, M. (2021). Beyond the pink tax: Gender-based pricing and differentiation of personal care products. Gender Issues, 39(1), 1–23. https://doi.org/10.1007/s12147-021-09280-9

7. Marda, V. (2018). Artificial intelligence policy in India: A framework for engaging the limits of data-driven decision-making. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3240384

8. Veale, M., Binns, R., & Edwards, L. (2018). Algorithms that remember: Model inversion attacks and data protection law. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180083. https://doi.org/10.1098/rsta.2018.0083

9. Wei, L. (2019). Legal risk and criminal imputation of weak artificial intelligence. IOP Conference Series: Materials Science and Engineering, 490, 062085. https://doi.org/10.1088/1757-899x/490/6/062085