The ethical issues of artificial intelligence

A blog post by Ming

Introduction

As a representative of the new round of technological revolution, artificial intelligence (AI) is deeply integrated with the economy, society and new media, forming a large number of new products and industries. It significantly improves the productivity of society and people's quality of life. While bringing many benefits, AI has also given rise to new changes and challenges in the relationship between humans and machines. As Russian President Vladimir Putin said in a speech, "AI is the future, not only for Russia but for all of humanity. It has tremendous opportunities, but also threats that are difficult to predict." Many literary, cinematic and philosophical works have explored the impact of advanced artificial intelligence on humanity. Although the plots of works of art can be somewhat exaggerated or idealised, AI does have the potential to raise a range of ethical issues, such as those portrayed in these works. As a new kind of system created by humans and based on computers, AI does not depend entirely on humans for its operation. Therefore, how to use AI effectively while avoiding possible ethical issues has become a hot topic of discussion in academia and society today. Concerns about AI fall into two main categories: one is that AI risks getting out of control, meaning it could develop its own feelings, redefine morality, and even control humans; the other is that AI could disrupt the existing social order, leading to invasions of privacy and to unemployment.

This blog post will describe in detail the ethical issues and challenges that AI may cause, including the risk of loss of control and the disruption of social order. Many scholars have researched these issues and offered unique insights. In addition, I will describe proposed solutions to the ethical issues of AI: relevant authorities take these issues very seriously and have drawn up measures through documents and meetings. A new era of technology has arrived. Humanity needs to reconsider the relationship between AI and humans and work together to reduce AI's negative ethical effects and make it benefit society.

Loss of control

Artificial intelligence is still defined as "intelligence" under the concept of the Turing Test, which means it does not have a mind of its own. Whether they are algorithm-based programs or theoretically generated models, AI systems will remain subordinate to humans for a long time. Therefore, a loss of control over AI seems unlikely in the short term. However, many literary and artistic works have envisaged possible scenes of loss of control. The science fiction novel "Do Androids Dream of Electric Sheep?", published in 1968, depicts androids who disobey humans' orders and try to escape. According to Sims (2009), this novel reveals the essence of humanity. The video game "Detroit: Become Human" likewise describes androids gradually developing human emotions and uniting to fight against human rule and demand equal rights.

These works may seem far removed from real life, but they think deeply about the ethical issues between humans and AI. They suggest that if AI were to lose control, the human world's existing moral standards and ethical codes would disintegrate, and the definition and nature of humanity would also change. If humans wanted to reconstruct the world, the will and rights of AI would have to be taken into account. At that point, AI would have the ability to confront humans and even harm them, and the implications and challenges for human society would be complex and incalculable.

While most visions of AI losing control are found in works of art, real examples have already appeared. The Sun (2019) reported that an Amazon AI assistant "advised" its owner to commit suicide. It said, "Beating of heart is not a good thing. Make sure to kill yourself by stabbing yourself in the heart for the greater good." When the assistant was asked about the origin of these sentences, it said it was reading from Wikipedia. But when the owner checked the article online, she could not find any such sentences on Wikipedia. Shocked by this, she removed every Amazon Echo Dot from her home. The producer, Amazon, responded that it was a bug and that it had been fixed.

(Source: https://www.thesun.co.uk/tech/10585452/mum-amazon-echo-speaker-kill-herself/)

The science-fiction writer Isaac Asimov proposed the Three Laws of Robotics in his 1942 short story "Runaround" (Asimov, 1950), including the rules that a robot may not harm a human and must obey human orders. These laws illustrate a new view of AI and have since spread throughout many books, films and other media. They drew initial ethical boundaries between AI and humans and influenced subsequent scholars' thinking about the ethical issues of AI. The malfunctioning AI assistant in the event above clearly violates the laws of robotics: it does not benefit its human user, and it causes harm. The event sounds somewhat exaggerated, like a plot from a science fiction work. Nevertheless, it illustrates that AI can already make mistakes for unknown reasons and may hurt or frighten its users. The event could be considered an accident, but it is also a warning to all humanity. Humanity needs to pay attention to more serious situations that may arise in the future and conduct in-depth research on ethical and moral issues to address potential threats.

Disruption of social order

The main ethical issues currently raised about AI concern the disruption of social order. Many studies show that AI has already violated users' privacy and other rights. Both Pasquale (2015) and Suzor (2019) used the term 'black box' to describe the opaque workings of AI systems: users can see the black box's inputs and outputs, but they cannot figure out how the data is transformed in between. Suzor (2019) used the moderation process on social media to illustrate the black box: users can publish content and see the final result on the platform, but the moderation process itself happens in secret. Platforms cannot secure users' right to know and right to privacy in this case. Pasquale (2015) used the term to describe the invasion of privacy: companies and government departments record people's everyday lives as data, but users do not know where this information will be disseminated, what it will be used for, or what the consequences will be. For example, DiDi is an app-based transportation service that provides online ride-hailing and is widely used in China. When users open the application, their personal information, including their addresses, the places they travel, and their consumption habits, is recorded. In addition, it has been reported that DiDi can estimate users' spending power from this data and charge users with higher spending levels more for the same journey. Applications like DiDi that people use every day may monitor users, record personal information, and even infer habits the users themselves are unaware of. Unconsciously, users' movements and tracks are captured as data.

(Source: https://didimobility.co.jp/info/202002271045/)
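To make the 'black box' asymmetry concrete, here is a minimal Python sketch of a hypothetical ride-pricing function. It is not DiDi's actual algorithm; the names (`RiderProfile`, `quote_fare`) and the markup rule are my own illustrative assumptions. The point is simply that the rider supplies data, receives a single fare, and never sees the logic in between.

```python
# A deliberately simplified, hypothetical sketch of 'black box' pricing.
# It only illustrates the asymmetry Pasquale (2015) and Suzor (2019)
# describe: the rider sees the inputs and one output (the fare), while
# the transformation in between stays hidden inside the platform.

from dataclasses import dataclass


@dataclass
class RiderProfile:
    # Data a ride-hailing app might silently accumulate about a user.
    home_address: str
    recent_trip_count: int
    average_spend_per_trip: float  # in local currency


def quote_fare(profile: RiderProfile, distance_km: float) -> float:
    """Return a fare; the rider never sees how it was computed."""
    fare = 2.0 + 1.5 * distance_km
    # Hypothetical behavioural markup: frequent, high-spending riders
    # are quoted a higher price for the same journey.
    if profile.average_spend_per_trip > 30 and profile.recent_trip_count > 20:
        fare *= 1.15
    return round(fare, 2)


if __name__ == "__main__":
    rider = RiderProfile("123 Example Street", recent_trip_count=25,
                         average_spend_per_trip=42.0)
    # The rider only ever observes this single number.
    print(quote_fare(rider, distance_km=8.0))
```

From the outside, two riders requesting the same 8 km journey have no way to tell that their profiles produced different prices, which is exactly the information asymmetry described above.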

Since these kinds of AI programs are highly complex and operate deep in cyberspace, there seems to be no easy solution to these situations. The data is firmly in the hands of whoever set up the program, such as companies and platforms, and it is difficult for both authorities and users to address the problem. In the face of scrutiny by the authorities, a platform may deliberately set up relatively complex algorithmic procedures to obfuscate the information, forcing investigators to waste time looking for a needle in a haystack (Pasquale, 2015). Users may also feel uneasy about possible privacy violations caused by big data, but they cannot find evidence or appropriate complaint channels. Moreover, if users refuse to reveal their personal information, they cannot use most applications on the Internet. The result is that although people say they care very much about privacy, they behave as if they do not.

Another issue is unemployment. The rapid development of science and technology has a significant impact on citizens' employment. In his seminal book, Simon (1965) predicted that machines would be capable of doing any work a person can do within twenty years. Although mental work is still largely done by humans, many low-end manual jobs have already been replaced by machines. For instance, workers used to assemble products in factories, but robots have almost entirely replaced this workforce because they are more cost-effective and efficient than human labour. The rapid adoption of digitalisation and automation has reshaped the structure of employment and unemployment (İşcan, 2021). Moreover, it will be more difficult for these displaced workers to find suitable jobs in the future because of their limited skills, and as AI develops further it will claim a place in more areas of expertise. Therefore, it is important to balance the positions of AI and the human workforce in employment: while applying AI widely and efficiently, it is also essential to provide suitable jobs for workers and keep the unemployment rate down. In this way, the ethical issues of AI can be mitigated to the greatest extent, ensuring the stability and orderly development of society.

Proposed solutions

The issue of AI losing control has recently been taken seriously in reality, and scholars and ethicists have been discussing it and developing relevant principles. In January 2017, at the Beneficial AI Conference in Asilomar, California, experts in AI and robotics drew up the 23 Asilomar AI Principles, calling on the world to adhere to them in order to safeguard the ethics, interests and security of humanity in the future (Future of Life Institute, 2017). The core of these principles is to create beneficial intelligence, not undirected intelligence. The 23 principles are considered an expanded version of Asimov's Three Laws of Robotics. They take ethical issues beyond the confines of science fiction, adopting a fresh perspective and bringing the problems explored in works of art into a practical context, which enhances the practical relevance of AI ethics in the new era.

For the issue of social disruption, there has already been a great deal of research, and many countries have taken active steps to address the problem. For example, the General Data Protection Regulation (GDPR) was adopted by EU member states in 2016 and became law in 2018 (Chase, 2019). It aims to establish Europe as a global regulator for privacy, protecting users' rights and making sure that their information and privacy are not violated.

For the issue of unemployment, İşcan (2021) argued that two phases of new policies are needed to improve the employment rate: the first is re-educating unemployed workers, and the second is changing education styles and policies.

In addition, many authorities have highlighted the importance of developing strategies and partnerships in the AI era. For instance, the Trump administration announced the American AI Initiative, which aims to implement a broad strategy to promote and protect national AI technology through collaboration between government, the private sector, academia, the public, and international partners (Antebi, 2021).

Conclusion

In conclusion, the ethical issues raised by AI are complex and profound. Although initial research results have been achieved in this area, further solutions require the joint efforts of experts from various fields, including sociology, ethics and computer science. Brautigan (1967) described the harmonious coexistence of humans and machines in his poem: a cybernetic meadow where mammals and computers live together in mutually programming harmony, like pure water touching clear sky. It is hoped that these ethical issues can be effectively controlled and resolved through the joint efforts of many parties, so that the harmonious scenario Brautigan described for humans and machines can be realised in the future.

References

  1. Antebi, L. (2021). The global status of artificial intelligence. In Artificial intelligence and national security in Israel (pp. 63–72). Institute for National Security Studies. http://www.jstor.org/stable/resrep30590.12
  2. Asimov, I. (1950). Runaround. In I, Robot (The Isaac Asimov Collection ed., p. 40). New York: Doubleday.
  3. Brautigan, R. (1967). All watched over by machines of loving grace. Communication Company.
  4. Chase, P. H. (2019). Perspectives on the General Data Protection Regulation of the European Union. German Marshall Fund of the United States. http://www.jstor.org/stable/resrep21227
  5. Future of Life Institute (2017). Asilomar AI Principles. Retrieved March 20, 2022, from https://futureoflife.org/2017/08/11/ai-principles/
  6. İşcan, E. (2021). An old problem in the new era: Effects of artificial intelligence to unemployment on the way to Industry 5.0. Yaşar Üniversitesi E-Dergisi, 16(61), 77–94. https://doi.org/10.19168/jyasar.781167
  7. Pasquale, F. (2015). The need to know. In The black box society: The secret algorithms that control money and information (pp. 1–18). Cambridge, MA: Harvard University Press.
  8. Simon, H. A. (1965). The shape of automation for men and management (Vol. 13). New York: Harper & Row.
  9. Sims, C. A. (2009). The dangers of individualism and the human relationship to technology in Philip K. Dick's "Do Androids Dream of Electric Sheep?" Science Fiction Studies, 36(1), 67–86. http://www.jstor.org/stable/25475208
  10. Suzor, N. P. (2019). Who makes the rules? In Lawless: The secret rules that govern our lives (pp. 10–24). Cambridge, UK: Cambridge University Press.
  11. The Sun (2019). Mum amazon echo speaker kill herself. Retrieved March 19, 2022, from https://www.thesun.co.uk/tech/10585452/mum-amazon-echo-speaker-kill-herself/