Ethical issues with artificial intelligence in different industries

Introduction

With the development of technology, people’s expectations of the Internet are gradually increasing, and are no longer limited to keeping in touch with family or friends who live far away. What users expect from the Internet is gradually moving beyond the phone screen and spreading into the real world, as the physical world becomes more interactive and takes on some of the characteristics of online spaces. Traces of this interactive overlay appear around people in various forms designed by various corporate actors (Andrejevic, 2019), for instance in the smart home products developed by companies such as Google and Xiaomi. From the perspective of the enterprise, this not only helps the company find new possibilities for growth; it also meets the demands of users. Artificial Intelligence (AI) is one of the greatest inventions of the 21st century. One of scientists’ visions for AI products in the human world is to help people improve their work efficiency while supporting human-computer interaction. The use of AI in products like the smart home will increasingly redefine every aspect of each user’s life and work. However, it is worth pointing out that while AI can enhance the efficiency of human life, it also raises some ethical issues. This blog uses users’ experiences with smart home products and the use of AI in the healthcare industry as examples to explore the ethical issues AI can bring: in trying to make their products smart enough, smart home manufacturers can put customer information security at risk, and the use of AI in the health industry may increase unemployment. In addition, I will use Apple HomeKit as an example to illustrate how existing companies in the smart home market are actively trying to protect users’ information security.

 

Picture 2. Credit: CANTERBURY AI

 

AI in the smart home industry

The issue of user information security in smart home products is a good example of the ethical issues raised by AI. The smart home industry involves the sale of networked devices and related services that allow private end-users to automate their homes (business to customer). It covers devices that are connected to the Internet either directly or indirectly via a so-called gateway. Their primary purpose is the control, monitoring, and management of functions in a private household. The principle behind remote control of the smart home is that intelligent home automation requires remote control and monitoring of individual devices, as well as their direct communication with one another (the Internet of Things). In 2003, the UK Department of Trade and Industry (DTI) defined the smart home as a house with a communications network that connects all of the major electrical appliances and services and allows them to be controlled, monitored, and accessed remotely (“What is a Smart Home? – Smart Home Energy”, 2021). Essentially, the smart home emerges as a set of connected Internet of Things (IoT) devices that enhance the user’s life experience. In general, smart home users can remotely control the smart devices in their home via apps on their mobile phones, as the sketch below illustrates.
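To make that architecture concrete, here is a minimal Python sketch of how a phone app might send a command to a device through a vendor’s cloud and home gateway. The endpoint URL, device identifier, and payload format are assumptions invented for illustration, not any particular vendor’s real API.

```python
# Minimal sketch of remote control in a smart home: the phone app talks to a
# cloud endpoint, which relays the command to the gateway in the user's house.
# The URL, device ID and payload format are hypothetical, for illustration only.

import json
import urllib.request

def send_command(device_id: str, action: str,
                 cloud_url: str = "https://cloud.example.com/v1/devices") -> int:
    """Send a command for one device through the vendor's cloud relay."""
    payload = json.dumps({"device": device_id, "action": action}).encode("utf-8")
    request = urllib.request.Request(
        f"{cloud_url}/{device_id}/commands",   # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # The cloud forwards the command to the gateway in the user's house,
    # which switches the actual appliance on or off.
    with urllib.request.urlopen(request) as response:
        return response.status  # e.g. 200 if the command was accepted

# Example: start the washing machine remotely from the office.
# send_command("washing-machine-01", "start")
```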

Let’s imagine a scenario. One morning you wake up late and rush to get ready so you will not be late for work. After arriving at the office, you suddenly remember that you did not turn on the washing machine. It does not matter, because you can start the washing machine from the app. What connects the smart home is this combination of smartness – data processing and connectivity – with the local aspect of the use case – devices in the user’s house (Aivaliotis, Xagoraris, Kantzavelou, Hall & Maglaras, 2020). In practice, this implies that connectivity is always available in the devices, whether it is confined to the Home Area Network or includes Internet access. In order to make their products smart enough, some smart home brands stipulate that users must use servers provided by the company in order to use its products, which limits users’ freedom over their own network. Because Wi-Fi-based sensing requires no explicit user input and no dedicated sensors, users’ information security cannot be guaranteed: their living habits are converted into data and transmitted to the manufacturer over the network. For authentication, such systems use invisible radio signals to retrieve highly personal data such as gait patterns (Jiang, Cai, Ma, Yang & Liu, 2018). This can make smart home products ‘smarter’ at serving users, because what time users usually get home, when they cook dinner, and so on are all recorded by the machine. It is precisely this kind of data collection that leads to some of the ethical issues caused by AI.
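As an illustration of why this always-on data flow worries users, the sketch below shows the kind of usage record a smart device could upload to its manufacturer’s server. The field names and values are invented for illustration; real vendors’ telemetry formats differ.

```python
# Illustration of why always-on connectivity raises privacy concerns: the kind
# of usage record a smart device could upload to its manufacturer. The schema
# is invented for illustration; real vendors' telemetry formats vary.

from dataclasses import dataclass, asdict
from datetime import datetime
import json

@dataclass
class UsageEvent:
    device_id: str        # which appliance generated the event
    event: str            # e.g. "door_opened", "oven_on", "lights_off"
    timestamp: str        # when it happened, revealing daily routines
    occupants_home: bool  # inferred from Wi-Fi sensing or motion data

event = UsageEvent(
    device_id="front-door-sensor",
    event="door_opened",
    timestamp=datetime(2022, 4, 6, 18, 42).isoformat(),
    occupants_home=True,
)

# Sent to the manufacturer's server, events like this can reconstruct when the
# user leaves home, returns, and cooks dinner.
print(json.dumps(asdict(event)))
```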

According to a report from the Smart Energy Consumer Collaborative (SECC), current owners of smart home devices, such as smart thermostats and appliances, are overwhelmingly satisfied with these technologies; however, upfront costs and data privacy concerns remain major barriers for those who do not yet own any smart home devices (Smart Energy Consumer Collaborative, 2021). Owners and non-owners alike are concerned about data security and privacy. Some smart home users report that robot vacuums are designed so that they can only clean without missing spots by mapping the layout of the user’s home. Yet when these users browse search engines or short-video platforms, furniture advertisements pop up for exactly the products they need, which reminds them that their smart home may be leaking the private data behind those advertisements. As mentioned above, the development and interconnection of the various devices in a smart home generates large amounts of personal data which, if exposed, poses a threat to individuals and other interested parties. Once collected, this information is processed by the service provider or by other third parties.

Data leaks are a concern for nearly two-thirds of all consumers (63%), with non-adopters of smart home devices being far more concerned about leaks originating from a third party than from their present power provider. Owners of smart home devices appear to place similar trust in third parties and in their electricity providers; 44% are concerned about data leakage from both (“Consumers Satisfied with Smart Home Devices, But Data Privacy Concerns Linger”, 2021). In addition, a number of news agencies and websites have reported on ethical issues arising from smart homes. This year, Fast Company released an article, “How smart devices can spy on you—and what to do about it”, which explains how smart home products can spy on their users. According to manufacturers, only automated decision-making systems will have access to your data. This is not always the case, though. For example, Amazon staff listen to Alexa conversations, transcribe them, and annotate them before putting them into automated decision-making systems. News like this makes smart home users worry about their information security and puts off potential customers, which could slow the development of the industry. The global smart home market was worth $48.7 billion in 2018 and is predicted to expand at a rate of 25.8% every year until it reaches $122 billion in 2022. These figures indicate that an increasing number of people are using smart homes, and smart home manufacturers should make protecting users’ information privacy a top priority while developing products.
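As a quick sanity check, compounding the 2018 figure at the stated annual growth rate over the four years to 2022 does reproduce the projected total:

$48.7 \times (1 + 0.258)^{4} \approx 48.7 \times 2.50 \approx 122$ (billion USD).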

 

Picture 3. Credit: Medicaldevice-network

 

AI in other industries

Whenever the word ethics is mentioned, the first thing that comes to mind is moral wrongdoing in the broadest sense. However, it should not be ignored that problems such as the rise in unemployment caused by the promotion of AI are also ethical issues. Rising unemployment can cause a country’s economy to decline and lead to more social security problems. Many people speculate that the service industry will be the first to be hit by AI, as jobs such as drivers, announcers, and couriers are highly replaceable. Studies have also shown that healthcare practitioners are among the groups affected by AI. With the number of patients growing day by day, it is becoming increasingly important for healthcare systems to streamline their operations in order to reduce costs. Medical professionals spend a significant amount of time on repetitive administrative chores; with too many responsibilities to handle, they can easily become overwhelmed, lowering the quality of the patient experience. Intelligent automation, AI, machine learning (ML), and other exponential technologies are causing major changes in common healthcare settings in order to improve the patient experience. However, social, ethical, and legal barriers have prevented widespread adoption of the technology, especially electronic health records (EHR), due to the complexity of the electronic model and healthcare providers’ lack of access to investment capital.

Another fiercely disputed topic is how AI will affect the judicial system. The use of AI in predictive policing or criminal probation services can amplify existing biases and further marginalize some groups of people (Richardson, Schultz & Crawford, 2019). While the use of AI in the criminal justice system is the most hotly disputed case, AI is also expected to affect access to other services, thereby further excluding previously marginalized populations. As a result, AI has the potential to exacerbate another well-known ethical dilemma of ICT, the so-called digital divide (McSorley, 2003). Well-known digital disparities, such as those between countries, genders, and age groups, and between rural and urban areas, can all be worsened by AI and the benefits it can provide. These advantages mean that a lack of access to the underlying technology results in missed opportunities, which might itself be unethical.

Returning to the smart home, Apple HomeKit is a family of smart home products developed by Apple that is operated mainly through Wi-Fi, Bluetooth, and Siri. Apple HomeKit is designed not to collect users’ private data, and by using an authenticated HomeKit router it can cut off the data chain of collected data, restricting outgoing data flows in order to protect user privacy. A simplified sketch of this idea is given below.
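The following is a minimal, hypothetical Python sketch of that general mechanism: a router enforcing a per-accessory policy so that a device can only talk to the hosts its policy allows. The class names, policy modes, and hosts are assumptions made up for illustration, not Apple’s actual HomeKit implementation or API.

```python
# Hypothetical sketch of the idea behind a HomeKit-style secure router:
# each accessory only gets the network access its policy allows, so a
# compromised or over-curious device cannot send data to arbitrary servers.
# Names and policies here are illustrative, not Apple's actual API.

from dataclasses import dataclass, field

@dataclass
class AccessoryPolicy:
    name: str
    # "restricted" = local (home hub) traffic only,
    # "automatic"  = only manufacturer-approved hosts,
    # "full"       = unrestricted internet access.
    mode: str = "restricted"
    approved_hosts: set = field(default_factory=set)

def allow_connection(policy: AccessoryPolicy, destination_host: str) -> bool:
    """Decide whether an accessory may open a connection to destination_host."""
    if policy.mode == "full":
        return True
    if policy.mode == "automatic":
        return destination_host in policy.approved_hosts
    # "restricted": no outbound internet traffic at all.
    return False

# Example: a robot vacuum confined to manufacturer-approved hosts.
vacuum = AccessoryPolicy("robot-vacuum", mode="automatic",
                         approved_hosts={"updates.vendor.example"})
print(allow_connection(vacuum, "updates.vendor.example"))  # True
print(allow_connection(vacuum, "ads.tracker.example"))     # False
```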

 

Conclusion

From the examples above, it is not difficult to see that AI is a means of promoting human prosperity, enhancing the common good, and bringing about progress and innovation. However, AI also brings challenges to people’s information security and employment, and its use in fields such as the law requires careful consideration. Humanity should not forget that information security is fundamentally not a scientific problem but a social one: it is not what we use, but how we use it, that creates the difficulty. It also means that people are now generating more and more data, often without being fully aware of their actions and their consequences. Overall, the use of AI requires both experimentation and reflection, and Apple HomeKit’s attempt to protect the security of user information is a good example of that.

 

 

REFERENCES

Andrejevic, M. (2019). Automated Media [Ebook] (1st ed., p. 115). New York: Routledge. Retrieved from https://www-taylorfrancis-com.ezproxy.library.sydney.edu.au/books/mono/10.4324/9780429242595/automated-media-mark-andrejevicv.

 

Aivaliotis, T., Xagoraris, L., Kantzavelou, I., Hall, F., & Maglaras, L. (2020). Smart Homes: Security Challenges and Privacy Concerns [Ebook] (p. 3). Cornell University. Retrieved from https://arxiv.org/pdf/2010.15394.pdf.

 

Article illustrating picture. (2021). [Image]. Retrieved from https://www.leiphone.com/category/chips/WgrMysTuGgBP7w3p.html.

 

CANTERBURY.AI. (2021). AI for Smart Homes [Image]. Retrieved from https://canterbury.ai/using-ai-for-smart-homes/.

 

Consumers Satisfied with Smart Home Devices, But Data Privacy Concerns Linger. (2021). Retrieved 6 April 2022, from https://www.prnewswire.com/news-releases/consumers-satisfied-with-smart-home-devices-but-data-privacy-concerns-linger-301321327.html.

 

Jiang, H., Cai, C., Ma, X., Yang, Y., & Liu, J. (2018). Smart Home Based on WiFi Sensing: A Survey [Ebook] (p. 13322). Retrieved from https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8307394.

 

McSorley, K. (2003). The secular salvation story of the digital divide. Ethics and Information Technology, 5(2), 75-87. Retrieved from http://ezproxy.library.usyd.edu.au/login?url=https://www.proquest.com/scholarly-journals/secular-salvation-story-digital-divide/docview/222253232/se-2?accountid=14757.

 

Richardson, R., Schultz, J., & Crawford, K. (2019). Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice [Ebook] (pp. 1-49). New York: NYU. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3333423.

 

Shutterstock. (2021). [Image]. Retrieved from https://www.medicaldevice-network.com/features/ai-in-healthcare-2021-2/.

 

Smart Energy Consumer Collaborative. (2021). Smart Home and Energy Data: What Do Consumers Want?. Smart Energy Consumer Collaborative. Retrieved from https://smartenergycc.org/smart-home-and-energy-data-what-do-consumers-want/.

 

What is a Smart Home? – Smart Home Energy. (2021). Retrieved 6 April 2022, from https://smarthomeenergy.co.uk/what-smart-home/.