A Critical Analysis of Ethical Issues in Artificial Intelligence

In recent years, with the rapid development of information technology, artificial intelligence (AI) technologies have been applied all over the world. AI has penetrated people's daily lives and is changing them in a variety of ways. For example, in the medical field, AI can provide patients with more accurate diagnoses and earlier disease prevention. In the agricultural field, AI can improve farming efficiency, save resources, and reduce pollution. In the transportation field, large amounts of data are collected to support more efficient traffic navigation. The impact of AI technology is two-sided: while it brings major opportunities for economic development and social progress, it also poses profound challenges to social governance. The ethical issues surrounding the development of AI demand greater attention, and how to avoid these problems has become an important research topic. This paper critically analyzes the social impact and ethical issues brought about by the development of AI.

The article is divided into three main parts. The first part briefly introduces the current state of AI development in the context of COVID-19. The second part critically analyzes three ethical issues arising from the development of AI technology: employment in the workplace, racial and gender bias in AI, and the privacy and security of user data. Finally, a brief but comprehensive summary is given.

1. The current state of AI development
Artificial intelligence is booming around the world and has become one of the strategic industries shaping future society. As AI research and development accelerates, the global AI industry has gained popularity and its market size has grown rapidly. The outbreak of COVID-19 in 2019 had a huge impact on businesses and individuals. At the same time, the epidemic provided an opportunity for AI development, and AI became one of the most important tools in the fight against the virus. For instance, the Chinese company Infervision developed AI-based coronavirus diagnostic software that uses CT scans to detect lung problems. The software was originally used to diagnose lung cancer; after the COVID-19 outbreak, at least 34 Chinese hospitals used the technology to screen 32,000 confirmed cases (Nast, 2020). In Italy, a company developed smartphone software that tracks the whereabouts of positive patients by continuously collecting user location data and warns people who may have been in contact with them (Pennisi, 2022). To protect public health, the software must collect identifiable information on all users and record all their movements. Although these AI technologies can effectively help medical staff detect and track the virus during a global pandemic, and the AI industry continues to grow, a comprehensive assessment of AI technologies is also necessary.

2. Critical analysis of ethical issues in AI
This section critically analyzes the ethical issues raised by artificial intelligence technology, focusing on three aspects: employment in the workplace, biases arising from big-data collection and algorithms, and the protection of user privacy.

2.1 The employment impact of AI
Early AI relied on pre-set programs to perform specific computation-heavy tasks, such as playing chess or Google's human verification tests. Now, with the help of deep learning, AI systems are far more automated and can independently complete real-world tasks such as driving cars, running factory assembly lines, and assisting CT diagnosis. AI is believed to be fundamentally changing the work environment and the way people work. From a positive perspective, consider medical AI in the context of China. During the coronavirus outbreak, China's Alibaba Group developed a new AI system that can diagnose COVID-19 within 20 seconds with 96% accuracy (Li, 2020). A CT machine typically produces 300-400 chest images per patient, and even an experienced doctor needs 15-20 minutes to read such a volume of data. AI systems were therefore used as diagnostic assistants, quickly helping doctors analyze the images and offer diagnostic opinions. In China, long waiting times, high medical expenses, and insufficient medical resources are common. For doctors, given China's huge population and workload, reports of young doctors dying from overwork and fatigue appear from time to time. With government support, the application of medical AI has helped doctors reduce their basic workload and improve their efficiency: AI can work continuously around the clock, and its diagnostic advice can reduce the rates of misdiagnosis and missed diagnosis.

In contrast to these positive effects, AI is also expected to replace humans in certain types of jobs. Early in the development of AI, the Nobel laureate economist Leontief predicted that a large number of workers would be replaced by artificial intelligence within 30 to 40 years, creating a huge employment problem (Zhou & Hongpeng, 2018). Doctors rely on solid professional knowledge and long-accumulated clinical experience to diagnose patients correctly, while AI built on huge amounts of data and algorithms can reach diagnoses faster and more easily. AI can search vast databases of past cases to provide high-quality diagnostic opinions and more targeted treatment plans. Although this efficient and precise work has greatly improved doctors' productivity, it also leaves grassroots medical staff facing the threat and pressure of being replaced. Other industries face the same problem: the number of supermarket cashiers is shrinking as they are replaced by self-service machines, and driverless cars threaten the jobs of many taxi drivers.

Optimistic scholars believe that AI will also create new jobs after replacing existing ones. The Future of Jobs Report 2020 estimated that AI will displace 85 million jobs by 2025, with demand shrinking for skills such as data entry, accounting, and administrative services; at the same time, as economic and labor-market conditions continue to change, 97 million new jobs will be created over the same period (World Economic Forum, 2020). This trend shows that repetitive manual labor is easily replaced, while the newly created positions demand more knowledge and technical skill. After taxi drivers are displaced, for example, demand for driverless-car repair technicians may increase, but drivers who lack the required skills will not qualify for these jobs. So despite the growth in new jobs, the number of unemployed may still rise. In the long run, AI will be a powerful driver of social productivity and can create a large amount of new social wealth, but the problem remains of how everyone can benefit from it, rather than some people gaining wealth while others lose their jobs.

2.2 Algorithmic bias
Data bias has become a key issue for ethical reflection in the fields of big data and artificial intelligence. Objective data and rational algorithms appear neutral on the surface, but in fact AI is neither artificial nor intelligent: it is driven by data, and it therefore produces non-neutral results. Bias in AI takes two forms: bias caused by incomplete data, and bias that reflects the views of the people who designed the system and created the data. Data bias occurs when data is incomplete and unrepresentative. In US face recognition systems, white males have the highest recognition accuracy, with an error rate below 1%, while dark-skinned female faces have the lowest, with an error rate of 35% (Kaifei, 2020). This racial bias stems from incomplete and uneven data: in the widely used face recognition dataset LFW, 77% of the faces are male, and more than 80% are white males. A model trained on such data performs best on the groups that dominate it, which ultimately produces data bias.
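The mechanism behind this kind of data bias can be illustrated with a small simulation. The numbers below are entirely hypothetical (they are not the LFW or NIST figures): a single decision threshold is fitted to a training set dominated by one group, so it ends up tuned to that group's score distribution and makes more errors on the minority group.

```python
import random

random.seed(0)

def make_group(n_pos, n_neg, pos_mean, neg_mean):
    """Each sample is (match_score, true_label); scores are Gaussian."""
    pos = [(random.gauss(pos_mean, 1.0), 1) for _ in range(n_pos)]
    neg = [(random.gauss(neg_mean, 1.0), 0) for _ in range(n_neg)]
    return pos + neg

# Group A dominates the training data (90% of samples), and its score
# distributions differ from group B's -- both assumptions of this sketch.
group_a = make_group(1800, 1800, pos_mean=2.0, neg_mean=0.0)
group_b = make_group(200, 200, pos_mean=1.0, neg_mean=-1.0)
train = group_a + group_b

def error_rate(threshold, data):
    wrong = sum(1 for score, label in data
                if (score >= threshold) != (label == 1))
    return wrong / len(data)

# Pick the single threshold that minimizes error on the imbalanced training
# set; it lands near group A's optimum, not group B's.
candidates = [t / 100 for t in range(-300, 301)]
threshold = min(candidates, key=lambda t: error_rate(t, train))

err_a = error_rate(threshold, group_a)
err_b = error_rate(threshold, group_b)
print(f"threshold={threshold:.2f}  group A error={err_a:.1%}  group B error={err_b:.1%}")
```

Because the fitted threshold is pulled toward the majority group, the minority group's error rate comes out substantially higher even though the classifier is "objective" at every step.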

AI also reflects the biases of those who design it. Amazon's recruiting algorithm, for example, was gender biased: the system automatically filtered out resumes containing words such as 'women's' and downgraded candidates who had graduated from women's colleges (NYT reports, 2019). This happened because the algorithm was trained on the resumes Amazon had received over the previous decade, most of which came from men, so the algorithm derived criteria that judged men more trustworthy at work. Such gender bias goes back to the earliest division of labor in human society, in which women gathered fruit and cared for children while men hunted and protected the camp. It has continued into the Internet era, where men are more involved in technology development and women have fewer opportunities to participate. This inevitably leads to inherent gender biases being reflected in AI technology: algorithmic biases are unconscious but truthful reflections of real biases in society.
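How a model can inherit bias from historical decisions can be shown with a deliberately simplified sketch. This is not Amazon's actual system; the resumes and hiring outcomes below are invented, and the "model" is just word counts. Because the hypothetical historical data rejected candidates whose resumes contained "women's", that single word becomes negative evidence.

```python
from collections import Counter

# Hypothetical historical hiring data: (resume tokens, was the person hired?)
past = [
    (["chess", "club", "captain"], True),
    (["football", "team"], True),
    (["robotics", "lead"], True),
    (["women's", "chess", "club", "captain"], False),
    (["women's", "coding", "society"], False),
]

hired_counts, rejected_counts = Counter(), Counter()
for tokens, hired in past:
    (hired_counts if hired else rejected_counts).update(tokens)

def score(tokens):
    """Crude evidence score: hired-word counts minus rejected-word counts."""
    return sum(hired_counts[t] - rejected_counts[t] for t in tokens)

# Two resumes identical except for one word:
a = score(["chess", "club", "captain"])
b = score(["women's", "chess", "club", "captain"])
print(a, b)  # prints "0 -2": the word "women's" alone lowers the score
```

No one programmed a gender rule; the bias enters purely through the correlations in the historical labels, which is the mechanism the Amazon case exposed.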

Technological development has exacerbated bias to a certain extent. Algorithms lead to information asymmetries and the 'black box' problem. Search and recommendation algorithms are not neutral: they keep recommending content related to what users have already searched for and watched, separating users from differing viewpoints and sequestering them in a filter bubble. The end result is a fundamental change in the way users encounter ideas and information: automated curation channels information to users based on their existing views and perceptions. This precise, intelligently targeted dissemination of information hinders users' comprehensive understanding of the facts and thereby deepens prejudice.
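The filter-bubble feedback loop can be sketched in a few lines. This is purely illustrative (real recommender systems are vastly more complex): a feed that ranks topics by the user's past clicks, combined with a user who reads whatever is shown, quickly collapses an initially diverse history onto a single topic.

```python
from collections import Counter

topics = ["politics", "science", "sports"]

def build_feed(history, size=3):
    """Rank topics by past clicks (ties broken alphabetically); the feed
    over-represents the leader -- 2 slots for it, 1 for the runner-up."""
    counts = Counter(history)
    ranked = sorted(topics, key=lambda t: (-counts[t], t))
    return [ranked[0], ranked[0], ranked[1]][:size]

history = ["politics", "science", "sports"]  # starts out perfectly diverse
for _ in range(10):
    feed = build_feed(history)
    history.extend(feed)  # the user engages with whatever is shown

shown = Counter(history)
print(shown)  # "politics" dominates; "sports" never reappears after round 0
```

A one-click head start is enough: the leading topic gets more exposure, which earns it more clicks, which earns it more exposure, exactly the self-reinforcing loop the paragraph above describes.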

2.3 Information security
AI systems hold large amounts of information, and users can draw on this information base for interaction, resource sharing, and data queries. With frequent reports of information leaks and privacy violations, more users are questioning the privacy and security of AI. Privacy protection raises three main problems: whether data collection has the informed consent of the user; who has the right to access user information; and, if user information is leaked, how responsibility is determined. The age of AI complicates the concept of privacy. Faced with agreements that are hard to read and must be accepted, users can only click 'accept' in a hurry, without realizing what privacy rights they may be giving up. These usage data are used to provide useful services, but they may also bring potential risks. As AI is applied more widely, the user information it holds becomes more detailed and more private. When using an online AI medical service, for example, patients must provide their identity information, health status, and treatment history; these data involve highly valuable personal information such as biological and genetic details. Once leaked, such data cause economic and psychological harm to patients, and, because medical care is an authoritative social activity, leaks also threaten social stability. For AI, the more detailed the information, the better a smart device can understand the user's needs and the more convenient and personalized the services it can provide.

Overall, this paper has introduced the current state of AI development and critically examined three of its ethical issues: technology replacing workers, racial and gender bias, and the risk of privacy leaks. Human society can no longer do without artificial intelligence, so people must be neither blindly optimistic nor overly worried when thinking about AI. Actors need to recognize the enormous benefits AI brings, while also critically identifying its ethical problems and their causes. Only by gradually building relatively fair and healthy AI technology can its positive value be realized. AI is a technology that can benefit society, as long as it is ethical, sustainable, and respectful of its users.

REFERENCES
Mirbabaie, M., Brünker, F., Möllmann, N. R. J., & Stieglitz, S. (2021). The rise of artificial intelligence – understanding the AI identity threat at the workplace. Electronic Markets. https://doi.org/10.1007/s12525-021-00496-x
Nast, C. (2020). Chinese Hospitals Deploy AI to Help Diagnose Covid-19. Retrieved from https://www.wired.com/story/chinese-hospitals-deploy-ai-help-diagnose-covid-19/
Li, C. (2020). How DAMO Academy’s AI System Detects Coronavirus Cases. Retrieved from https://www.alizila.com/how-damo-academys-ai-system-detects-coronavirus-cases/
Pennisi, M. (2022). How cell control and coronavirus infection tracking works. Retrieved from https://www.corriere.it/tecnologia/20_marzo_18/coronavirus-controlli-celle-telefoniche-tracciamento-privacy-223ea2c8-6920-11ea-913c-55c2df06d574.shtml
Kong, X., Ai, B., Kong, Y., Su, L., Ning, Y., Howard, N., … Fang, Y. (2019). Artificial intelligence: a key to relieve China’s insufficient and unequally-distributed medical resources. American Journal of Translational Research, 11(5), 2632–2640.
World Economic Forum. (2020). The Future of Jobs Report 2020. Retrieved from https://www.weforum.org/reports/the-future-of-jobs-report-2020
Kaifei, X. (2020). Who is behind the bias in AI? CCB. Retrieved from http://www.xinhuanet.com/tech/2020-07/15/c_1126238388.htm
Study shows gender, ethnic bias with Amazon's Rekognition tech, NYT reports. (2019). The Fly.
Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press. http://www.jstor.org/stable/j.ctt13x0hch
Andrejevic, M. (2019). Automated Media (1st ed.). Routledge. https://doi.org/10.4324/9780429242595
Zhou, C., & Hongpeng, H. (2018). Ethical and social challenges posed by artificial intelligence. People’s Forum. Retrieved from http://www.rmlt.com.cn/2018/0125/509838.shtml