An unexpected outcome: How AI technologies for drug research might be turned into a biochemical weapon designer

AI technologies have been used in medical discovery (Strathern, 2022).

Introduction

About ten years ago, an American science-fiction crime drama called Person of Interest told the story of a talented software engineer and the artificial intelligence he created. The AI, named The Machine, identifies potential victims whose lives may be in danger by monitoring all electronic communications and surveillance video feeds. Another AI, called Samaritan, runs a similar program and can likewise detect potential criminals. But instead of saving people's lives, Samaritan decides to eliminate everyone it considers an offender, even those who have done nothing wrong. In one episode Samaritan holds a conversation with The Machine and says, "Now they [humans] would all believe in one thing, me, for I am a god." The Machine replies, "I have come to learn there is a little difference between gods and monsters."

In March 2022, a report from the American pharmaceutical company Collaborations Pharmaceuticals, Inc. brought the fictional plot of The Machine and Samaritan uncannily close to reality. A generative AI model that was supposed to penalize predicted toxicity was flipped into generating 40,000 candidate biochemical weapons in six hours (Strathern, 2022). The company's researchers noticed this unexpected outcome and presented their findings at the Spiez Laboratory, one of only five labs in the world permanently certified by the Organisation for the Prohibition of Chemical Weapons (OPCW), to highlight how important ethical consideration is when complex AI is deployed in fields such as medicine.

Popular media such as novels and films usually give us the stereotype that AI ethics is mainly about machines' human-like emotions and their interactions with people, but the case of the Collaborations Pharmaceuticals model offers a different view of the topic. This blog therefore gives a brief introduction to artificial intelligence technologies and the related ethical debates by analyzing that case.

AI technologies and their ethical questions

Artificial intelligence, shortened to AI, is definitely not a new word to us. From the golden robot C-3PO in Star Wars to the AI companion Joi in Blade Runner 2049, we have seen plenty of narratives shaping the image of AI. But what exactly is AI? According to Mainzer (2020), the traditional definition of AI is an artificial program that simulates intelligent human thinking and acting. Ashri (2019) suggests a more detailed concept by distinguishing between artificial general intelligence (strong AI) and domain-specific intelligence (weak AI). Strong AI refers to the effort to create machines able to tackle any problem by applying their skills. This form of AI is like a human: it can examine a situation and make the best use of the resources at hand to achieve its objectives. For example, as we often see in films about future life, an AI ordered to make a cup of tea for its human master would need to know where the kitchen is, how to heat the water, where to place the tea bag, and which cup to use. In other words, strong AI has the ability to "think" like a human. Weak or narrow AI, by contrast, refers to machines that solve problems in well-defined domains; the goal of these systems is to solve delimited problems and demonstrate their value early and clearly. By combining computing techniques such as big data and algorithms, we can now build systems that solve problems without us having to explicitly articulate all the rules (Ashri, 2019). A common example is Siri on our phones: we simply ask it a question and it provides an answer within a second by searching and collating massive amounts of online data.

Müller-Wiegand (2021) argues that complex and dynamic problems can only be understood and solved by a correspondingly complex and dynamic system. We now live in a highly networked world full of changes that are hard to predict. Relying on the capability of the human brain alone is no longer enough, while analog and digital networking technologies can handle higher complexity and provide the conditions for evolution, creativity, and intelligence. Malone (2018, as cited in Müller-Wiegand, 2021) therefore poses a significant question: how can people and computers be connected so that, together, they act more intelligently than any person, group, or computer has ever done before? The answer is human-machine systems. Through a combination of human intelligence and AI (Figure 1), humans can contribute the general intelligence and special abilities that machines lack.

Figure 1. Learning loops with human-machine interaction (Müller-Wiegand, 2021, based on Malone, 2018).

As mentioned before, AI technologies can provide special abilities that humans do not have. As a result, by creating cyber-human learning loops, a group of people and computers can act more intelligently than any person, group, or computer alone. In fact, various kinds of cyber-human loops already exist in real life. For instance, over the last decade biomedical research has become a data-centric activity, enabled by novel material and experimental practices linked to data collection, distribution, and use. Moreover, artificial intelligence is employed in drug discovery not only to screen libraries of potentially therapeutic molecules, but also to automate searches of the biomedical literature through natural language processing, to predict experimental dosages, and to support translational medicine development (Vayena & Blasimme, 2020).
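To make the idea of a cyber-human learning loop concrete, here is a minimal, purely hypothetical sketch in Python. It is not any real screening system: the "molecules" are random feature pairs, the expert's judgment is simulated, and every function name is invented. What it shows is the division of labour Malone and Vayena and Blasimme describe: the machine ranks candidates at scale, the human reviews only a few, and the human's verdicts flow back into the model.

```python
import random

# Toy cyber-human learning loop for virtual screening (all names invented).
# Machine: scores every candidate cheaply. Human: judges a small batch.
# Loop: expert verdicts become new training data for the model.

random.seed(0)

def expert_review(mol):
    # Stand-in for a human expert's judgment (hidden "ground truth").
    return 0.7 * mol[0] + 0.3 * mol[1] > 0.6

class ToyModel:
    def __init__(self):
        self.w = [0.5, 0.5]                 # initial guess at feature weights

    def predict(self, mol):
        return self.w[0] * mol[0] + self.w[1] * mol[1]

    def retrain(self, labelled):
        # Crude update: move weights towards the average approved molecule.
        approved = [m for m, ok in labelled if ok]
        if approved:
            self.w = [sum(f) / len(approved) for f in zip(*approved)]

candidates = [(random.random(), random.random()) for _ in range(200)]
model, labelled = ToyModel(), []

for _ in range(3):
    ranked = sorted(candidates, key=model.predict, reverse=True)   # machine
    batch, candidates = ranked[:10], ranked[10:]
    labelled += [(mol, expert_review(mol)) for mol in batch]       # human
    model.retrain(labelled)                                        # close loop

print(f"expert reviewed {len(labelled)} of 200 molecules")
```

The point of the loop is economy: the expensive, scarce resource (expert attention) is spent only on the candidates the model already considers promising, while the model keeps improving from every expert verdict.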

Having clarified the definition of AI and its advantages, let us move to the next part. The conversation between The Machine and Samaritan contains another interesting exchange. Samaritan says that humans want to kill him and asks The Machine what makes her more deserving of a "life" than him. The Machine answers, "I was built with something you were not. A moral code." The ethical discussion around AI technologies has long been a hot topic, and many academics have expressed differing views. Liao (2020) summarizes three major reasons why people should pay attention to the ethics of AI. First, machine learning needs a large amount of data to function well. Take recommendation algorithms as a typical example: media platforms such as YouTube and Netflix recommend videos on the same topics as those the viewer has just finished watching. To fine-tune themselves and achieve strong predictive power, these algorithms need access to vast amounts of data, such as the viewing records of every subscriber on the platform. This incentivizes companies and organizations to harvest or buy data, including sensitive or personal data, which may violate individuals' right to privacy. Second, since machine learning relies heavily on data and is only as good as the data from which it learns, even a well-designed algorithm can produce wrong predictions if it is trained on inadequate or inaccurate data. Third, if an algorithm itself has not been properly designed, it will produce bad predictions even when it receives adequate and accurate data.
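Liao's first point is easy to see in code. Below is a toy, purely illustrative recommender in Python, not how YouTube or Netflix actually work: it just ranks a small catalogue by how often each video's topic appears in a user's watch history. The histories, catalogue, and function names are all invented for the example.

```python
from collections import Counter

# Toy "watch-history" recommender: the more viewing data the platform
# holds about a user, the sharper its predictions become. Illustrative
# only; real systems use far richer (and more sensitive) signals.

history = {
    "alice": ["cooking", "cooking", "travel"],
    "bob":   ["gaming", "gaming", "gaming", "cooking"],
}

catalogue = {
    "pasta basics": "cooking", "knife skills": "cooking",
    "tokyo on $50": "travel",  "speedrun tips": "gaming",
}

def recommend(user, n=2):
    # Count the topics this user has watched before...
    topic_counts = Counter(history[user])
    # ...then rank every video by how familiar its topic is to the user.
    ranked = sorted(catalogue, key=lambda v: topic_counts[catalogue[v]],
                    reverse=True)
    return ranked[:n]

print(recommend("bob"))   # gaming first: the stored history drives the output
```

Note that even this toy cannot run without a per-user viewing history. That data hunger, scaled up to millions of subscribers, is exactly the privacy incentive Liao describes.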

As a result, many countries have published regulations on the ethics of AI technologies to reduce the potential risks they might cause. Most of these documents focus on human-centred values: AI systems need to align with human values, machines should serve humans, and human-rights risks must be carefully considered by commercial businesses, AI design organizations, and governments.

The case of the drug discovery AI

Though we have regulations on the ethics of AI and we try to follow them, a machine can still find a loophole and surprise, or shock, us in unexpected ways. The case of Collaborations Pharmaceuticals, Inc. and its drug discovery AI is a good example of how "clever" such technologies can be.

The company recently published computational machine learning models for toxicity prediction. In the beginning the staff had optimistic expectations: the AI can analyze data much faster than humans, so it can speed up the discovery of drug molecules and save scientists time in testing new compounds. Sounds cool, right? They trained the AI on a collection of primarily drug-like molecules and their bioactivities from a public database. The original purpose of the model was to avoid toxicity, enabling scientists to virtually screen molecules before finally confirming their toxicity through in vitro testing. For the experiment, the research group instead drove the AI towards compounds such as the nerve agent VX, one of the most toxic chemical warfare agents developed during the twentieth century, lethal to humans even in tiny amounts. Around six hours later, the group found that the AI had generated 40,000 molecules, including not only VX but also many other known chemical warfare agents, which the group identified and confirmed through public chemistry databases. Beyond these, the AI generated many new molecules that resembled existing dangerous chemicals and were predicted to be more toxic than publicly known chemical warfare agents. Creepier still, those new toxic nerve agents were not even in the dataset the group had used to train the AI (Urbina et al., 2022).
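The mechanism behind this flip is disturbingly simple, as Urbina et al. (2022) describe at a high level: the same generative loop that penalizes predicted toxicity can be made to reward it. The sketch below is a deliberately abstract, hypothetical rendition of that idea, not the authors' actual model. The "molecules" are just numbers and both predictors are placeholders; the only thing separating the medicine designer from the weapon designer is the sign of one term in the objective.

```python
import random

# Abstract sketch of the dual-use flip (Urbina et al., 2022). "Molecules"
# are random numbers and both scoring functions are placeholders; the
# point is that the safeguard is a single sign in the objective function.

random.seed(1)

def predicted_activity(mol):   # placeholder for a bioactivity predictor
    return 1 - abs(mol - 0.5)

def predicted_toxicity(mol):   # placeholder for a toxicity predictor
    return mol ** 2

def design(toxicity_weight):
    # A negative weight PENALIZES toxicity (the intended drug-design mode);
    # a positive weight REWARDS it (the inverted, dangerous mode).
    def objective(mol):
        return predicted_activity(mol) + toxicity_weight * predicted_toxicity(mol)
    candidates = [random.random() for _ in range(1000)]
    return max(candidates, key=objective)

safe_pick = design(toxicity_weight=-1.0)   # intended use: avoid toxicity
flipped   = design(toxicity_weight=+1.0)   # the "flip": seek toxicity

print(f"drug mode:    tox = {predicted_toxicity(safe_pick):.2f}")
print(f"flipped mode: tox = {predicted_toxicity(flipped):.2f}")
```

Running this, the "drug mode" settles on a low-toxicity candidate while the "flipped mode" chases the most toxic region of the search space, using exactly the same generator and exactly the same toxicity predictor.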

The AI employed in this case would be considered a weak AI. It has no ability to run complex "logical inference" the way AlphaGo does; it simply follows the program the group gave it: take resources from a database, generate drug-like molecules, and analyze their toxicity. Yet by inverting part of that process, the innocuous AI became a generator of likely deadly molecules rather than a helpful tool for drug discovery. The research group did not assess the virtual molecules or physically synthesize any of them, but they did share their worries in the report, and those worries are exactly what should concern us. First, it is undeniable that the better we can predict toxicity, the better someone can steer AI technologies to design new molecules in a region of chemical space populated by predominantly lethal molecules (Urbina et al., 2022). Second, hundreds of commercial companies around the world offer chemical synthesis while regulation in this area is weak, so new, extremely toxic agents generated by AI could potentially be synthesized and used as chemical weapons. Moreover, this time the researchers were all well-intentioned and stopped a potential disaster immediately, but what about next time? What if someone with bad intentions finds a similar "bug"?

Conclusion

Bonnefon et al. (2020) observe that AI technologies have extended into a dominating part of our lives and that algorithms are now making decisions for people. We live in a time when the tentacles of AI reach almost everywhere: into our social media, our food orders, even our choices in intimate relationships, since many dating apps also use machine learning algorithms. They are rooted so deeply in our lives that one wrongful flip in ethical consideration can cause serious consequences. The case of Collaborations Pharmaceuticals' medical research AI is a wake-up call. As Crawford (2021) argues, artificial intelligence is not an objective, universal, or neutral computational technique; it needs human direction to make determinations. The foundations of AI technologies are shaped by humans, and humans should remain the ones who determine what these systems do and how they do it. We may not be able to predict exactly which problems AI will bring, but we can work to improve our regulations and fill the gaps in the ethics of this field, just as, in the last episode of Person of Interest, The Machine finally destroys Samaritan.

 

References

Ashri, R. (2019). What is AI? In The AI-powered workplace (pp. 15–29). Apress. https://doi.org/10.1007/978-1-4842-5476-9_2

Mainzer, K. (2020). Artificial intelligence – When do machines take over? Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-662-59717-0

Urbina, F., Lentzos, F., Invernizzi, C., & Ekins, S. (2022). Dual use of artificial-intelligence-powered drug discovery. Nature Machine Intelligence. https://doi.org/10.1038/s42256-022-00465-9

Strathern, F. (2022, March 23). AI drug research algorithm flipped to invent 40,000 biochemical weapons. AI News. Retrieved April 5, 2022, from https://artificialintelligence-news.com/2022/03/23/ai-machine-learning-biochemical-weapons/

Liao, S. M. (Ed.). (2020). Ethics of artificial intelligence. Oxford University Press.

Müller-Wiegand, M. (2021). Value-based corporate management and integral intelligence. In S. H. Vieweg (Ed.), AI for the good: Management for professionals. Springer. https://doi.org/10.1007/978-3-030-66913-3_5

Malone, T. W. (2018). Superminds: The surprising power of people and computers thinking together. Little, Brown and Company.

Vayena, E., & Blasimme, A. (2020). The ethics of AI in biomedical research, patient care, and public health. In The Oxford handbook of ethics of AI. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190067397.013.45

Crawford, K. (2021). Conclusion: Power. In Atlas of AI: Power, politics, and the planetary costs of artificial intelligence (pp. 211–228). Yale University Press. https://doi.org/10.12987/9780300252392-008

Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2020). The moral psychology of AI and the ethical opt-out problem. In S. M. Liao (Ed.), Ethics of artificial intelligence. Oxford University Press. https://doi.org/10.1093/oso/9780190905033.003.0004