Ethical Issues with Artificial Intelligence

Image from Adi Gaskell. https://www.forbes.com/sites/adigaskell/2020/10/29/can-ai-tell-us-when-to-use-ai-and-when-not-to/?sh=22d06ce15260

Introduction

Artificial intelligence has become increasingly prevalent in our daily lives, accompanied by the introduction of machine capabilities that rival or exceed those of humans. These technologies have been programmed to make a wide range of morally significant decisions, which has generated extensive ethical debate, especially around how these decision-making processes can be made sufficiently transparent and who ought to be held accountable for failures. This makes the cultivation of trust and acceptance of AI technologies a crucial component in developing ethical systems. Of particular interest are technologies based on machine learning and big data analytics, which have raised distinct ethical concerns. Not only can these technologies beat human capabilities in simple computer game tasks; they are increasingly acquiring abilities that surpass human-level achievements, from facial recognition and the diagnosis of illnesses to the optimization of organizational and societal processes. This has led to a range of activities geared toward better understanding the technology and how to respond to its ethical issues. This blog aims to examine emerging issues in the technology and its governance. I will therefore analyze ethical issues associated with the growth and adoption of artificial intelligence, with reference to the application of AuditMap AI technology in Canada.

AuditMap Technologies AI

Image from Linkedin. https://www.linkedin.com/company/auditmapai/

Artificial intelligence leverages algorithms to identify and understand patterns and anomalies within data sets, which can help auditors work more efficiently to identify risk and execute other tasks at high speed (Kroll, 2021). This has pushed auditors to adopt more artificial intelligence capabilities in the workplace in a bid to boost value. AuditMap is an AI-enabled auditing technology that was launched with development banks in Canada to help evaluate the country's financial risk. In the banking industry, it has been used to recombine country information dynamically, tally results automatically, generate insights, and dig deeper into the risks it identifies. The company has also partnered with Deloitte to help clients deploy AI in internal audits, given the volume of operational and financial information companies are dealing with. Organizations have used the AuditMap technology to recognize emerging risks and threats that human analysts have not yet considered. AuditMap's CEO says the technology can identify the smallest of trends and instruct employees to make specific changes to reduce risk. However, even with more AI technologies similar to AuditMap on the market, many organizations remain reluctant to adopt the automation. Among the reasons cited is the possibility of the technology obliterating the need for internal audit, with conflicts expected between human reasoning and AI, especially where algorithms decide to act on their own. With the AuditMap AI technology deployed in Canada, multiple ethical questions arise from the many unanswered questions around the technology.
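The anomaly detection described above can be illustrated very simply. The following is a minimal sketch, not AuditMap's actual algorithm (whose internals are not public): a z-score outlier test that flags payments deviating sharply from the historical norm.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Flag amounts that deviate from the mean by more than
    `threshold` standard deviations -- a classic outlier test."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # identical values: nothing stands out
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Hypothetical payment history with one suspicious entry
payments = [120, 95, 130, 110, 105, 98, 125, 50_000]
print(flag_anomalies(payments, threshold=2.0))  # the 50,000 payment stands out
```

Real audit tools combine many such signals and learned models, but the principle is the same: define "normal" from history, then surface whatever departs from it.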

 

Ethical Issues

Unpredictable behavior

There are deep legal problems associated with systems that rely on artificial intelligence. AI capabilities allow machines to learn from the information they receive from the outside world, leading them to act in ways their creators cannot, or did not, predict (Karliuk, 2018). Predictability is important in modern technology and legal approaches because it allows policies and processes to anticipate change so that the right regulatory responses can be adopted. With machine learning capabilities, AI systems can operate independently of their creators and operators, which complicates the task of assigning responsibility for the systems and raises ethical issues. Ouchchy et al. (2020) argued that this makes it challenging to render these technologies transparent or to hold them accountable. The unpredictability of these systems and their ability to act independently raise ethical concern because they undermine the possibility of holding anyone legally responsible. AuditMap may therefore evaluate data beyond the specifications of the auditing firms that deploy it, pushing engagements beyond their agreed scope and raising ethical issues. Moreover, while legal processes focus on making the owners of machines and systems responsible in cases of ethical misconduct, the ability of machines to learn on their own and act beyond their creators' control makes it practically impossible to assign responsibility for future actions (Kuklinska, 2021). Existing regulatory and legal norms are thus hardly applicable to the technology, creating an enforcement gap, and an ethical concern, until better approaches are developed. For instance, technologies have traditionally been regulated as objects subject to copyright or as the property of their owners.
However, complications arise when systems act autonomously, against the will of their creators or owners. Kroll (2021) notes that while applications such as AuditMap are created to help, there is a lack of standards for AI development, which may produce regulatory ambiguity both in implementing the technology and in holding such companies responsible when mishaps occur in data handling. Alternatives such as applying animal law to artificially intelligent machines have been proposed, since animals also act autonomously, but these face the same ethical dilemma of connecting the machines to owners for accountability (Karliuk, 2018). Such analogies also sit uneasily within the criminal law framework and will therefore raise ethical concerns in application until better systems are developed to govern the technology. As we use more artificial intelligence, we continuously ask machines to make more critical decisions. With growing autonomy and the ability of machines to escape human control, the concern that machines may act in their own interest is a live one. Auditors may, for instance, wish to restrict an engagement to the current financial period, only for the AI to reach into past years and surface historical misconduct, creating an ethical problem for companies whose auditing assignments have been exceeded. It suffices to note that machine learning systems can inadvertently or intentionally reproduce human biases, since they learn from existing trends and models in the real world. For instance, such technologies can replicate gender bias in recruitment through machine learning. The emerging question is therefore whether audit technologies will replicate such biases against the individuals they audit, whether intentionally or not.
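The bias-replication worry is easy to demonstrate. In this hypothetical sketch (the records and group labels are invented for illustration), the simplest possible "model" merely memorizes historical hire rates per group, and whatever skew the records contain survives training intact:

```python
from collections import Counter

# Invented historical hiring records: (group, hired)
history = ([("m", True)] * 80 + [("m", False)] * 20 +
           [("f", True)] * 30 + [("f", False)] * 70)

def learn_hire_rate(records):
    """'Train' by memorizing the historical hire rate per group --
    the simplest model that reproduces whatever bias the data holds."""
    hires, totals = Counter(), Counter()
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired  # True counts as 1, False as 0
    return {g: hires[g] / totals[g] for g in totals}

model = learn_hire_rate(history)
print(model)  # the historical skew between groups survives training
```

Real recruitment or audit models are far more complex, but the mechanism is the same: a system optimized to fit historical decisions will faithfully reproduce the prejudices embedded in them unless those are explicitly corrected for.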

 

Unemployment Concerns

The data analytics and machine learning attributes of artificially intelligent machines translate into increasing automation. The hierarchy of labor is primarily concerned with the extent of that automation and the possibility of replacing human labour, raising the question of what becomes of workers when a job ends (Sarangi and Sharma, 2018). As the pursuit of artificial intelligence for job automation continues, the long-run effect could be to free human labor for more complex roles, moving it away from the physical roles that have been the norm in the past. However, with continued machine learning, the technologies could take over jobs previously held by humans, especially in labor-intensive industries (Su, 2018). For AuditMap, the worry is that continued automation of the audit process will increasingly replace human labor in auditing roles, especially in internal processes, rendering employees jobless. The technology also threatens to demand new skills that most auditors do not currently possess, raising the prospect of displacing seasoned but less tech-savvy auditors. This is where the question of how displaced people will spend their time and earn a living emerges, setting the benefits of the technology against its potential to create mass unemployment. Across the globe, people depend on selling their time to generate income to sustain themselves and their families, which continued automation would affect significantly (Ouchchy et al., 2020). Hoping that all humans will transition to non-labor activity and learn new ways to engage with the community may be too much to ask. The gaps left for semi-skilled and non-skilled employees would continue to generate significant ethical concerns in the labor market.
Technologies such as the AuditMap auditing AI are therefore likely to render a large share of employees unemployed and their current skills obsolete, raising the ethical question of how such a group will make a living.

 

Data Protection and Privacy Concerns

A critical, frequent, and primary concern with artificially intelligent machines lies in privacy and data protection. According to Buttarelli (2018), for the purposes of ethics the critical privacy concerns boil down to informational privacy and data protection, the latter serving as a gateway to the protection of informational privacy. Machine learning in AI thus raises several data-protection risks. The question on my mind is whether audit technologies, which routinely interact with huge volumes of data on clients, suppliers, creditors, debtors, and employees, can restrict their pattern-mining capabilities to the parameters set for them. Wachter and Mittelstadt (2019) indicated that machine learning often requires large data sets, frequently about humans, for training purposes, and that access to these data sets raises ethical concerns about data protection. Moreover, artificial intelligence's ability to learn and remember patterns creates privacy risks even where no direct access to personal information is involved. For instance, artificially intelligent machines can learn and remember passwords that act as a primary safeguard of information integrity, resulting in violations of information privacy. As a study by Jernigan and Mistree (2009) showed, machines can learn to generate insights, such as inferring sexual orientation from social media activity, that raise serious ethical privacy concerns. This extends to using artificial intelligence to re-identify anonymized personal data, raising further privacy and data protection concerns. Stahl (2021) has linked this to the possible generation of unique data types that are less widely used or still unknown today, such as personal emotional data, which further complicates the ethics of AI use. Moreover, these developments expose gaps in data-protection legislation that are currently unaccounted for, a further ethical concern.
Data protection is also linked to data security, as AI systems are subject to new types of security vulnerabilities that, according to Jagielski et al. (2018), include model poisoning attacks. AuditMap, as AI software, can therefore be attacked through alterations to its underlying data or models, causing it to synthesize wrong results for auditors and organizations. This is likely to generate further regulatory questions around deploying such technologies. Privacy and data protection concerns therefore also raise questions about the reliability of audit technology, which is ethically concerning given that the opacity and unpredictability of machine learning make traditional deterministic testing largely inapplicable. According to Kazim and Koshiyama (2021), machine learning outputs depend on the quality of the training data, which can be difficult to ascertain, while data breaches can also compromise their integrity. Moreover, these systems are being exploited by actors who use them to generate data through surveillance. Governments are major users of AI-enabled technologies for spying on citizens, which is both a violation of privacy and a threat to data protection (Feldstein, 2019). A government targeting a business or its owners could gain entry into a technology such as the AuditMap AI algorithm to spy on business transactions and use them to victimize organizations or individual company representatives. In my opinion, this raises the question of how such technologies can be constrained to focus only on those elements of the auditing process assigned to them, without reaching into, or analyzing, other data sets unauthorized by their rights holders.
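The mechanics of a poisoning attack can be illustrated with a simple outlier detector. In this hypothetical sketch (invented numbers, not a real attack on any product), a basic z-score check flags unusual payments; an attacker then plants a few inflated but plausible records in the training history, stretching the model's notion of "normal" until a genuinely fraudulent payment passes unflagged:

```python
from statistics import mean, stdev

def is_flagged(history, value, threshold=2.0):
    """Return True if `value` is more than `threshold` standard
    deviations from the mean of the training history."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) / sigma > threshold

clean = [100.0, 105.0, 98.0, 110.0, 95.0, 102.0]
fraud = 400.0
print(is_flagged(clean, fraud))  # stands out against the clean history

# Poisoning: a few inflated-but-plausible records widen the
# detector's sense of "normal" so the fraud slips through.
poisoned = clean + [380.0, 390.0, 370.0]
print(is_flagged(poisoned, fraud))  # the fraud now passes unnoticed
```

The attacks Jagielski et al. (2018) study target far richer regression models, but the underlying idea is the same: corrupt the training data and the model quietly learns a distorted baseline.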

 

Conclusion

Artificial intelligence is a welcome technology that is helping humans advance in leaps and bounds, especially in generating solutions for previously complex processes such as auditing. Coupled with machine learning and data analytics, however, the technology allows machines to act independently of humans, and the reliability and predictability of these systems have thus become ethical concerns. Audit technology in particular raises concerns that include complications in assigning responsibility and enforcing accountability, alongside unemployment, data privacy and protection concerns, the perpetuation of bias, and inequalities in wealth distribution.

 

References

Buttarelli, G. (2018, October). Choose humanity: putting dignity back into digital. In Opening Speech of Debating Ethics Public Session of the 40th Edition of the International Conference of Data Protection Commissioners (Vol. 24, pp. 2018-10).

Feldstein, S. (2019). The global expansion of AI surveillance (Vol. 17). Washington, DC: Carnegie Endowment for International Peace.

Jagielski, M., Oprea, A., Biggio, B., Liu, C., Nita-Rotaru, C., & Li, B. (2018, May). Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. In 2018 IEEE Symposium on Security and Privacy (SP) (pp. 19-35). IEEE.

Jernigan, C., & Mistree, B. F. (2009). Gaydar: Facebook friendships expose sexual orientation. First Monday.

Karliuk, M. (2018). Ethical and Legal Issues in Artificial Intelligence. International and Social Impacts of Artificial Intelligence Technologies, Working Paper, (44).

Kazim, E., & Koshiyama, A. (2021). The interrelation between data and AI ethics in the context of impact assessments. AI and Ethics, 1(3), 219-225.

Kroll, K., (2021). Using Artificial Intelligence in Internal Audit: The Future is Now.

Kuklińska, L. A. (2021). Establishing liability for the actions of Artificial Intelligence. Recommendation for the European Commission.

Ouchchy, L., Coin, A., & Dubljević, V. (2020). AI in the headlines: the portrayal of the ethical issues of artificial intelligence in the media. AI & Society, 35(4), 927-936.

Sarangi, S., & Sharma, P. (2018). Artificial intelligence: evolution, ethics and public policy. Routledge India.

Stahl, B. C. (2021). Ethical issues of AI. In Artificial Intelligence for a Better Future (pp. 35-53). Springer, Cham.

Su, G. (2018). Unemployment in the AI Age. AI Matters, 3(4), 35-43.

Wachter, S., & Mittelstadt, B. (2019). A right to reasonable inferences: re-thinking data protection law in the age of big data and AI. Colum. Bus. L. Rev., 494.