About artificial intelligence

What do we think of when we think of artificial intelligence? Siri and smartphones? Sweeping robots or self-driving cars? Machines or systems like these, which mimic some aspects of human intelligence, are called "weak AI". Strong AI, by contrast, is a machine or system that actually thinks the way we do: whatever our brains do, a strong AI is an inorganic system that does the same thing, reasoning and solving problems as humans do, perhaps even possessing sentience and self-awareness (CrashCourse, 2016). So what exactly is the AI we are referring to? It can be machine learning, high-level coding capabilities, or programs that perform predictive analysis by mining diverse data. Yet, as Crawford puts it, "AI is neither artificial nor intelligent" (Crawford, 2021, p. 8). Loukides and Lorica (2016) argue that defining AI is very difficult because we do not really understand human intelligence, and that continued advances in AI will teach us more about what AI is not than about what it is. Although AI is increasingly becoming an integral part of everyday life, strong AI remains an imagined field belonging to the future of technology, and no one can say with certainty whether strong AI will emerge or how it would affect areas of life if it did (Kaunda, 2021). This blog will therefore focus on the AI that is currently available, the ethical dilemmas it raises, and ways to mitigate them.
The capabilities of AI

Over the last two decades, AI programs have shown us how powerful they can be: Deep Blue beat Garry Kasparov at chess; Watson beat the best Jeopardy champions ever; and AlphaGo beat Lee Sedol, arguably the best Go player in the world. But all these successes have been limited. Deep Blue, Watson, and AlphaGo are all highly specialized, single-purpose machines that excel in their own domains: Deep Blue and Watson cannot play Go, and AlphaGo cannot play chess or Jeopardy, even at a basic level (Loukides & Lorica, 2016). Such systems are intensely but narrowly intelligent. General-purpose intelligence, by contrast, can learn to integrate capabilities from different domains to better adapt to challenges, and self-driving cars are one example. A self-driving car must combine the ability to recognize with the abilities to reason, plan, and remember: it needs to recognize obstacles and road signs; to reason, both to apply the rules of the road and to solve problems such as avoiding obstacles; to consider traffic and other patterns and plan a route from its current location to its destination; and to update its solutions and apply them automatically to the next drive (Loukides & Lorica, 2016). General AI thus automates decisions that used to be made by humans, and while it brings many benefits, the rapid development and adoption of the technology also carries risks. Perhaps most worrying is the misuse of AI by authoritarian regimes, but even applications the public expects to serve an apparently good purpose can cause unintended harm: privacy violations (face ID and personal information), liability problems (algorithmic accountability), mass unemployment from automation, and bias and discrimination (machines making discriminatory hiring decisions), among other ethical issues (Donahoe & Metzger, 2019).
Algorithmic bias

Algorithms on the Internet are software, so algorithmic governance is essentially an example of technology governance. With rapid technological development, technology, and software in particular, plays an increasingly important role in the media field. Digital, convergent (internet-based) media can be considered a fourth tier of media, and this additional technological intermediary changes the use and effects of media: information in the form of text, data, sound, voice, pictures, and video is only available to users through software applications, and that content is shaped by the software (Flew, 2021). Manovich (2013) argues that digital media has no attributes of its own; its attributes are more or less assigned by the software being used. An important component of this software-based digital media is the automatic assignment of relevance through algorithmic selection on the Internet. This directly affects not only the content users find, but also the reputation of and trust in advertising brands (for example, when an algorithm recommends paid advertisements to the wrong target group) (Just & Latzer, 2017). In this context, we increasingly live under algorithms, and algorithmic decisions intervene in, and even dominate, more and more human social matters. News, music, videos, advertisements, and social network feeds are all personalized to users by recommendation engines, and in the financial sector, algorithms can decide whether to grant a user a loan and its exact amount (Loukides & Lorica, 2016). In company management, an American investment firm even began developing an AI system to run the company several years ago: hiring, investment, major decisions, and other company matters are managed and decided by this system, and the firm's remaining employees are a group of programmers who keep it running stably (Goehring et al., 2022).
Amazon's machine learning experts later found that their AI hiring tool had a clear bias: it preferred men when screening résumés. Amazon's team traced the problem to the AI's training sample. The tool had learned from résumés submitted over the previous decade, identifying keywords such as "executive" or "leadership" and ranking candidates by their importance; because most of those past applicants were men, who used these keywords at high frequency, and there was far less data from women, the system rated résumés lacking the keywords as less important, effectively concluding that men were preferable (Goehring et al., 2022). Amazon is not the only company to face such problems. In 2018, Quartz reported that when people searched for doctors in Google Images, most of the results were white men, a search result that mirrors the underlying social status quo in which the public associates doctors with men. After Google realized this, it adjusted its search algorithm, and the ratio of men to women in the results is now considerably more balanced (Gershgorn, 2022).
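The Amazon case illustrates a general mechanism: a model that learns feature weights from a skewed history will reproduce that skew. The sketch below uses invented résumés and a deliberately simplified keyword scorer (not Amazon's actual system) to show how two candidates of equal substance can receive very different scores simply because one uses the historically dominant vocabulary.

```python
from collections import Counter

# Hypothetical historical sample: resumes of past hires, mostly men, in which
# keywords such as "executive" and "leadership" appear at high frequency.
past_hires = [
    "led executive team strategy leadership",
    "executive leadership experience managed budget",
    "leadership role executive committee",
    "coordinated projects and mentored colleagues",
]

# "Training": weight each keyword by how often it appeared among past hires.
weights = Counter(word for resume in past_hires for word in resume.split())

def score(resume: str) -> int:
    """Rank a new resume by the summed weights of its keywords."""
    return sum(weights[word] for word in resume.split())

# Two hypothetical candidates of equal substance who describe their work
# in different words: the model simply prefers the historical phrasing.
resume_a = "executive leadership experience strategy"
resume_b = "headed team built consensus mentored colleagues"
print(score(resume_a), score(resume_b))  # resume_a scores far higher (8 vs 3)
```

Nothing in this toy scorer mentions gender; the preference emerges entirely from the composition of the training sample, which is why rebalancing or auditing the data matters as much as auditing the code.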

This bias and discrimination is not AI misunderstanding society. There may still be room to improve the algorithms, but the main source of such cognitive bias is that the samples and data bear the imprint of human society, and technical designers may have coded their own stereotypes, cultures, and ideas into the program, so it is difficult for AI to be as completely objective, fair, and error-free as one might expect. This is a question of human culture and values, not of AI. AI acts like a mirror, reflecting what its samples and data have taught it; but are the results of that learning the values we want reflected? A tool is neither good nor bad in itself; what matters is how humans use it. Ideally, algorithms must ensure that people are treated fairly, and as we move from big data to AI, the need to audit our algorithms and ensure that they reflect the values we support will only continue to grow (Loukides & Lorica, 2016).
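What might such an audit look like in practice? A minimal sketch, with made-up hiring decisions and group labels: compare the rate of favourable outcomes across groups and flag the system when one group's rate falls far below another's. The 0.8 threshold below echoes the "four-fifths" rule of thumb from US employment practice; it is an assumption of this illustration, not something taken from the sources cited here.

```python
# Hypothetical audit over made-up hiring decisions.
decisions = [  # (group, hired)
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

def selection_rates(records):
    """Fraction of favourable outcomes per group."""
    totals, favourable = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        favourable[group] = favourable.get(group, 0) + int(outcome)
    return {g: favourable[g] / totals[g] for g in totals}

rates = selection_rates(decisions)           # {'men': 0.75, 'women': 0.25}
impact_ratio = min(rates.values()) / max(rates.values())
print(rates, f"impact ratio = {impact_ratio:.2f}")
if impact_ratio < 0.8:                       # flag for human review
    print("Audit flag: selection rates differ sharply across groups")
```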
What will algorithmic bias bring us?

Some people believe an algorithm is just mathematics, calculating and analyzing data, and that it should therefore be objective, since it does not have the biases and emotions humans do; however, the training data itself may embody stereotypes and discrimination, and an AI trained on such data will naturally be biased (Osoba & Welser, 2017). For example, COMPAS, a crime risk assessment algorithm used by some courts in the United States, has been shown to systematically discriminate against black people: a black defendant is more likely to be incorrectly labeled by the system as at high risk of reoffending and thus to be sentenced by the judge to imprisonment, or to a longer term, even when probation was warranted (Gershgorn, 2022). Some image recognition software has previously mislabeled black people as "chimpanzees" or "apes", and in March 2016 Microsoft's Twitter chatbot Tay turned into a sexist and racist "bad girl" as it interacted with internet users. As more and more decisions are handed to algorithms, more such discrimination will occur (Gershgorn, 2022). Algorithmic discrimination can be especially detrimental when algorithms are used in matters of personal interest such as crime assessment, credit lending, and employment assessment: because they operate at scale and may affect a whole group of similarly situated people or an entire race, biased algorithms can cause large-scale harm (Gershgorn, 2022). Moreover, a bias in one algorithmic decision may be reinforced in subsequent decisions, and deep learning is typically a "black box" in which even the designer may not know how the algorithm reaches its decisions, so detecting discrimination can be technically difficult (Osoba & Welser, 2017).
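Even when the model itself is a black box, its outputs can still be audited. Below is a minimal sketch in the spirit of the published COMPAS analyses, using invented records rather than real data: compare the false positive rate, that is, the share of people who did not reoffend but were still labeled high risk, across groups.

```python
# Invented (group, labeled_high_risk, reoffended) records; not real COMPAS data.
records = [
    ("black", True, False), ("black", True, False),
    ("black", True, True),  ("black", False, False),
    ("white", True, True),  ("white", True, False),
    ("white", False, False), ("white", False, False),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were labeled high risk."""
    non_reoffenders = [labeled for _, labeled, reoffended in rows if not reoffended]
    return sum(non_reoffenders) / len(non_reoffenders) if non_reoffenders else 0.0

for group in ("black", "white"):
    group_rows = [r for r in records if r[0] == group]
    print(group, f"FPR = {false_positive_rate(group_rows):.2f}")
# black FPR = 0.67 vs white FPR = 0.33: the kind of error-rate gap the
# COMPAS reporting described, visible without opening the model itself.
```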
Privacy issues

Alongside algorithmic discrimination, privacy is one of the AI ethics issues of greatest concern. It has become almost routine to concede basic privacy when using a smart device or app in order to gain convenience in life or efficiency at work: faced with "Do you allow access to your personal information?" or "Do you allow access to your microphone and camera?", we "voluntarily" press the consent button (Saini, 2016). Our bank card information, home address, travel history, takeaway records, and preferred YouTube content are all recorded by big data, and modern technologies such as surveillance cameras and mobile phone location make it easier than ever to collect our private data (Saini, 2016). The Facebook data breach scandal suggests that this is no accident: media reports revealed that a data analytics firm called Cambridge Analytica collected information about users from their browsing, liking, and commenting behavior, used it to infer their education level, sexual orientation, or political views, and then applied algorithms to target them with tailored advertising (Muradzada, 2020). To some extent, data breaches are difficult to avoid, because data collection and privacy protection are inherently opposed: gains for one side imply risks for the other. Personalized recommendation, for example, surrenders some of our privacy to the system in exchange for convenience. While people enjoy the convenience of AI, they face the dilemma of having their privacy treated as a "resource".
The solution?

There is growing awareness that technology has impact and can be seen as an actor or agent, either as something that creates meaning on its own or as an agency that shapes individual and collective behavior and the social order (Donahoe & Metzger, 2019). In response to public concern about the ethics of AI, many stakeholders have discussed new ethical frameworks to mitigate AI's risks and ensure its beneficial application: Stanford University has launched the Institute for Human-Centered AI, and electronic engineers have joined academics, technologists, and social activists in launching global initiatives on the ethics of autonomous and intelligent systems (Donahoe & Metzger, 2019). Governments and other international stakeholders should commit more clearly to an approach to AI governance grounded in human rights, and computer experts need to work with human rights theorists, philosophers, international lawyers, psychologists, policy experts, and education experts to design, apply, and evaluate AI. Interdisciplinary collaboration increases the likelihood that AI will benefit society and reduces some of the risks that come with knowledge blindness (Loukides & Lorica, 2016). In addition, increasing the transparency of algorithms can reduce the mistrust and fear that the unknown provokes. Being more open with users about how content is curated and analyzed can help them understand how algorithms affect the information they see, and digital platforms should offer users the option to opt out of algorithmic curation (Just & Latzer, 2017).
Overall, AI has provided many conveniences but has also brought new challenges and ethical dilemmas to society, and we may be able to reduce algorithmic bias and privacy leaks through interdisciplinary collaboration. But when the brakes fail, should a self-driving car be set to prioritize hitting pedestrians to keep its owner safe, or to destroy itself to keep more people safe? AI may face more such dilemmas that even humans are unsure how to resolve, and they leave us ample space to explore how AI should be governed.
References
CrashCourse. (2016). Artificial Intelligence & Personhood: Crash Course Philosophy #23 [Video]. YouTube. Retrieved 9 April 2022, from https://www.youtube.com/watch?v=39EdqUbj92U&t=142s
Crawford, K. (2021). Atlas of AI. Yale University Press.
Donahoe, E., & Metzger, M. (2019). Artificial intelligence and human rights. Journal of Democracy, 30(2), 115–126. https://doi.org/10.1353/jod.2019.0029
Flew, T. (2021). Regulating Platforms (pp. 79–86). Polity.
Flew, T. (2022). Internet cultures and governance [Lecture slides]. The University of Sydney. https://canvas.sydney.edu.au/courses/39180/pages/issues-of-concern-datafication-automation-ai-and-algorithmic-governance?module_item_id=1458586
Gershgorn, D. (2022). The reason why most of the images that show up when you search for "doctor" are white men. Quartz. Retrieved 9 April 2022, from https://qz.com/958666/the-reason-why-most-of-the-images-are-men-when-you-search-for-doctor/
Goehring, B., Rossi, F., & Zaharchuk, D. (2022). (pp. 1–20). IBM. Retrieved 9 April 2022, from https://www.ibm.com/downloads/cas/W3KR6KZO
Just, N., & Latzer, M. (2017). Governance by algorithms: Reality construction by algorithmic selection on the Internet. Media, Culture & Society, 39(2), 238–258. https://doi.org/10.1177/0163443716643157
Kaunda. (2021). Spirit name (Ishina Lya Mupashi) and strong artificial intelligence (strong AI): A Bemba theo-cosmology turn. Theology Today, 77(4), 460–478. https://doi.org/10.1177/0040573620956709
Loukides, M., & Lorica, B. (2016). What Is Artificial Intelligence? (1st ed.). O'Reilly Media.
Manovich, L. (2013). Software Takes Command. Bloomsbury Academic.
Miller, K. W., Wolf, M. J., & Grodzinsky, F. (2016). This "ethical trap" is for roboticists, not robots: On the issue of artificial agent ethical decision-making. Science and Engineering Ethics, 23(2), 389–401. https://doi.org/10.1007/s11948-016-9785-y
Muradzada, N. (2020). An ethical analysis of the 2016 data scandal: Cambridge Analytica and Facebook. Scientific Bulletin, 3, 13–23. https://doi.org/10.54414/yzuf7796
Osoba, O. A., & Welser, W. (2017). An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. RAND Corporation.
Saini, A. (2016). Artificial intelligence a threat. IAES International Journal of Artificial Intelligence (IJ-AI), 5(3), 117. https://doi.org/10.11591/ijai.v5.i3.pp117-118