Artificial Intelligence for Human Life: A Critical Opinion from Medical Bioethics Perspective – Part I

Since the end of the second artificial intelligence (AI) winter in 1993, the popularity of AI has been skyrocketing. Catalyzed by rapid advancements in supporting technologies such as computational power and storage capacity, AI has been widely developed for various sectors of human life. Although there is a broad global consensus that the development of AI should follow ethical principles, at the time this paper was written, debates were still ongoing on how an AI can be defined as ‘ethical’, how AI ethical guidelines should be formulated, who has the right to determine the ethical aspects of AI, and how the guidelines can be enforced on AI developers and operators. In this article, we summarize the research and studies on AI development from an ethical perspective conducted within the last ten years. We discuss several cases of AI development misconduct and ‘unethical’ AI, such as biased algorithms and privacy breaches during data gathering.


Introduction and Background
Since the end of the second artificial intelligence (AI) winter in 1993, the popularity of AI has been skyrocketing [1]. Catalyzed by rapid advancements in supporting technologies such as computational power and storage capacity, AI has been widely developed for various sectors of human life. Nowadays, we can see AI implementations in virtually every aspect of our lives, from computational devices, cameras, smart vehicles, and smart homes to healthcare systems [2].
Although there is a broad global consensus that the development of AI should follow ethical principles, at the time this paper was written, debates were still ongoing on how an AI can be defined as 'ethical', how AI ethical guidelines should be formulated, who has the right to determine the ethical aspects of AI, and how the guidelines can be enforced on AI developers and operators. Unlike human beings, who combine intellect and emotion in their decision-making, an AI makes its decisions only by following lines of code and algorithms. Therefore, the ethical aspect of AI decision-making relies purely on how the programmers designed the code that makes up the AI itself.
Furthermore, the ethical principles of AI have sparked a long debate across academia, industry, and global institutions [3]. This debate mainly focuses on how the values and principles of human life can be properly reflected in AI development and implementation, such that the objective of AI itself (i.e., to ease human lives) can be achieved without sacrificing other aspects [4,5]. Previous studies have raised concerns about how AI might make people lose their jobs [6], reduce opportunities for jobseekers [7], and increase economic and social inequality [8,9]. In addition, AI can be exploited by irresponsible actors, so that it causes harm instead of providing benefits [10,11]. Bias and fairness are other issues with the potential to cause problems in AI development. Plenty of former studies have discussed and analyzed the importance of ethical aspects in AI development [3,12,13,14,15], and studies [10,11,16,17] have discussed how poor AI design can lead to unintended adverse consequences such as discrimination and algorithmic bias.
Numerous international institutions have responded to these concerns by forming AI ethics committees of experts to formulate ethical guidelines for AI development and application. For example, the European Commission, through the High-Level Expert Group on Artificial Intelligence (AI HLEG), presented the first draft of the Ethics Guidelines for Trustworthy AI in December 2018 [18]. These guidelines listed seven key requirements for building trustworthy and ethical AI, derived from fundamental rights and ethical perspectives: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental wellbeing; and accountability. Similar principles have been proposed by the Organisation for Economic Co-operation and Development (OECD): inclusive growth, sustainable development, and well-being; human-centered values and fairness; transparency and explainability; robustness, security, and safety; and accountability [19].
Furthermore, major technology companies such as Google [20] and Microsoft [21,22] have also published their own perspectives on AI principles. At the United Nations Educational, Scientific and Cultural Organization (UNESCO) General Conference in November 2021, UNESCO's 193 member countries adopted the "Recommendation on the Ethics of Artificial Intelligence" [23]. This recommendation is the world's first global standard-setting instrument on AI ethics. It was formulated to protect and promote human rights and dignity [23], and it is expected to serve as an ethical guiding compass in the digital world and to build strong respect for the rule of law. In 2022, UNESCO officially published this 43-page recommendation, which can be publicly accessed through its website [23]. Variations among these ethical guidelines will be discussed further in the main body of this manuscript.
At this point, the ethical problems of AI might seem to be solved by those guidelines. However, ethics guidelines (including AI ethics guidelines) typically lack enforcement mechanisms, and it is rather challenging to make actors strictly follow and obey the normative claims they contain. Hence, these ethical guidelines appear to have had little impact on human decision-making, let alone in the field of artificial intelligence. So far, the principles and rules for enforcing ethics remain vague and carry no serious consequences. Although violations of ethical principles can result in social sanctions (for individual actors) and reputational losses (for organizational actors), these consequences tend to be relatively mild. In addition, ethical principles are usually subjective and normative, so the perception of norms in one society may differ from that in another. This leaves ethical principles with many loopholes that can be exploited by irresponsible parties.
As mentioned above, many global technology (AI) companies have formulated their own AI ethics guidelines. If not monitored closely and transparently by independent parties, it is entirely possible for these companies to exploit loopholes by drafting AI ethics guidelines in a way that is profitable and poses no imminent threat to their business. An independent party is therefore needed to monitor, enforce, and investigate the implementation of AI ethics guidelines. The current AI industry's ethics guidelines serve to suggest to legislative bodies that internal self-governance in both academia and industry is adequate, implying that no specific laws are required to mitigate potential technological threats and eliminate abuse scenarios [24]. If monitoring and rule enforcement are carried out by an internal team, an abuse-of-law scenario becomes possible. In summary, legally binding guidelines and strict law enforcement are needed in the realm of AI ethics so that the AI guidelines framework does not remain normative, superficial, and vague.
In this article, we summarize the research and studies on AI development from an ethical perspective that have been conducted within the last ten years. We discuss several cases of AI development misconduct and 'unethical' AI, such as biased algorithms and privacy breaches during data gathering. While AI has brought many ethical problems, it also promises a more objective and less biased decision-maker; this argument is also presented in this manuscript. Owing to page limitations, we split the discussion of the topic "Artificial Intelligence for Human Life" into two parts, of which this article is the first. In the second article, we will discuss the applications of AI in the medical and healthcare sectors and the possible ethical issues arising there. We will also present the basic principles of AI ethics and the existing AI guidelines, especially for the fields of medicine and healthcare, followed by a summary and comparison of the ethical guidelines proposed by various bodies around the world, particularly in those two sectors.

Material And Methods
The main objective of this study is to analyze the development and application of AI from an ethical perspective. Additionally, we explored the basic principles of AI ethics and the existing AI guidelines, especially for the fields of medicine and healthcare. For this critical narrative review, we conducted a thorough search of Scopus, PubMed, and Google Scholar for literature addressing precision medicine. We also selected publications that discussed common ethical issues in the application of AI. "Artificial intelligence", "machine learning", "medicine", and "ethics" were the terms employed in the search strategy. The search was limited to English and Indonesian, with no limit on the year of publication. More than 50 published works were gathered as a result. All types of research studies were taken into account. Unpublished data, articles that had not yet been accepted, and technical notes were excluded.

Cases of Unethical AI
Although AI offers tremendous features that ease our daily activities, it is now well known that a sloppy AI implementation without careful ethical consideration can lead to harmful outcomes. We define such situations as 'unethical AI' implementations. In this section, we summarize several cases of unethical AI applications.

A. Unethical Algorithms
Artificial intelligence is built from sets of decision-making algorithms. While we have seen rapid advancement of AI algorithms in recent decades, there are known cases of unethical algorithms. In 2020, an independent investigation revealed that an algorithm on the social media platform Instagram prioritized photos of people with more exposed skin [25]. Such an algorithm can directly pressure content creators to expose more skin in order to reach a wider audience. The investigation analyzed more than 2,000 photos and found that the algorithm flagged 21% of them as containing bare-chested men or women in bikinis/underwear, and that posting images on Instagram without showing such body parts significantly reduced organic reach. This algorithm may negatively affect not only content creators but also younger generations. Consider another example: using machine learning, researchers in [26] created an algorithm that generates images of faces from speech recordings. The faces are supposed to match the speaker's sex, age, and ethnicity, as a casual listener might guess them. This initiative could have useful applications (e.g., helping the police identify a suspect's face). Nevertheless, the work sparked huge controversy. Some argue that there is no way artificial intelligence could reconstruct someone's face from voice alone, while others fear that this technology (if it works) might lead to serious privacy infringements. Despite that, many people still believe there is no problem with this work.
There are other AI applications with unintentionally biased algorithms. Although unintended, these cases show that, depending on the training data and algorithm design, an AI can be biased. In 2015, a user revealed that Google's Photos app labeled images of black people as "gorillas" [27]. The company apologized and provided a "quick fix", which later turned out to be simply censoring image searches and tags for the word "gorilla" [28]. In September 2020, a Twitter user noticed that when he posted two images (of himself and a colleague), Twitter's preview consistently showed the white man over the black man, regardless of which image was added to the tweet first [29]. In May 2021, Twitter acknowledged this bias and revealed that, against a 50-50 baseline of demographic parity, its tests found a 7% difference favoring white women over black women, a 2% difference favoring white men over black men, a 4% difference favoring white people over black people of both sexes, and an 8% difference favoring women over men [30]. In December 2016, a New Zealand passport robot registered an Asian man's eyes as closed even though they were open [31]. Many believe that these kinds of biases arise from a lack of diversity in training and testing data. As a simple illustration, suppose an AI-assisted image detection system is trained on numerous human images (with both open and closed eyes), but only a small portion (or none at all) of those images depict Asian people. Since Asian people tend to have smaller eye apertures than Caucasians, and the AI was not trained on such images, the system will naturally struggle to distinguish whether an Asian person's eyes are open or closed. Because of such biases, the tech giant IBM decided to abandon its 'biased' facial recognition technology in June 2020 [32].
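The lack-of-diversity failure mode illustrated above can be sketched in a few lines of code. This is a toy example with entirely hypothetical numbers and a deliberately simple threshold rule, not any real system: a classifier that learns an "eye-openness" threshold only from one demographic group then misjudges a group whose open-eye apertures are systematically smaller.

```python
# Toy sketch (hypothetical numbers): a threshold classifier for "eyes open vs.
# closed" trained only on one demographic group fails on another group whose
# open-eye apertures are smaller -- the mechanism behind the passport-robot case.

def train_threshold(samples):
    """Learn a decision threshold as the midpoint between the class means."""
    open_vals = [x for x, label in samples if label == "open"]
    closed_vals = [x for x, label in samples if label == "closed"]
    return (sum(open_vals) / len(open_vals) +
            sum(closed_vals) / len(closed_vals)) / 2

def predict(threshold, aperture):
    return "open" if aperture >= threshold else "closed"

# Training data drawn ONLY from the majority group (large apertures when open).
train = [(10.0, "open"), (11.0, "open"), (9.5, "open"),
         (2.0, "closed"), (1.5, "closed"), (2.5, "closed")]
t = train_threshold(train)  # midpoint of ~10.2 and ~2.0 -> threshold ~6.1

# Minority group: eyes genuinely open, but apertures around 5-6.
minority_open = [5.0, 5.5, 6.0]
preds = [predict(t, a) for a in minority_open]
print(preds)  # every open eye is wrongly classified as "closed"
```

The fix is not a cleverer threshold but representative training data: had the training set included the smaller-aperture group, the learned boundary would have moved below its open-eye range.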
Although no casualties have been reported as a result of unethical or biased algorithms, these issues need to be discussed seriously, especially for AI-assisted medical and healthcare systems, where a small mistake may lead to somebody's death.

B. Unethical Data Gathering
The most well-known concern in the world of technology and artificial intelligence right now is perhaps unethical data collection. Let us present a few examples. Concerns about how major tech firms exploit data to monitor users for profit were raised this year when Google was accused of failing to disclose that it continues to collect location data through services like Search, Maps, and applications that rely on Wi-Fi connections and mobile phone towers [33].
According to a report in November 2022, Google agreed to pay $391.5 million to resolve claims that it illegally acquired user location data after tricking consumers into believing their information was no longer collected once they disabled location-tracking services [34]. Texas sued Google in October 2022 over the alleged unauthorized use of users' voices and faces; the complaint stated that Google obtained biometric information "from countless Texans" and used voices and faces for commercial purposes [35]. In the 2010s, the British consulting company Cambridge Analytica illegally obtained the personal information of millions of Facebook users, mostly for the purpose of political advertising [36]. The information was gathered via the "This Is Your Digital Life" app, created in 2013 by data scientist Aleksandr Kogan and his business Global Science Research [37]. The app presented a series of questionnaires to create psychological profiles of users and used Facebook's Open Graph network to acquire the personal information of users' Facebook friends. Data from up to 87 million Facebook accounts were collected by the app [38].
In addition, the US government allegedly employed face recognition to identify protesters during the demonstrations brought on by George Floyd's death [39]. Recently, a Black man in Detroit was wrongfully detained for a crime he did not commit as a result of the usage of face recognition software [40]. The usage of AI, especially face recognition, may be crucial in China's social credit score system [41], which many believe to be unethical; however, it is difficult to predict exactly how this system may be employed.
Further, Clearview AI's facial recognition technology takes images from internet videos and compiles publicly accessible photos from social media and other websites. Clearview AI markets access to its image database, which includes a search engine where a subject can be looked up using a photo. In November 2021, the French Data Protection Authority (CNIL) instructed Clearview AI to stop gathering and using data from French residents without a legitimate legal basis, to facilitate the exercise of data subjects' rights, and to comply with such requests [42].
Smart speakers are another example of contentious data collection in AI technologies. According to a study by academics at Northeastern University and Imperial College London, speakers controlled by sophisticated voice assistants such as Alexa, Google Assistant, Siri, and Cortana can accidentally wake up as many as 19 times per day [43]. These activations were not consistent, however. In both the emerging academic literature and the popular conversation regarding smart speakers, privacy remains a major concern [44].
Nowadays, given the fierce competition in the e-commerce business, any data relating to customer behavior is valuable. Digital marketing professionals observe user activity to learn about their clients' preferences and purchasing patterns [45]. If a company keeps tabs on its consumers' online activities after they leave its website, this may be unethical: the company has no right to track what the consumer searches for on Google or which sites they visit next.

C. Unethical Applications
Let us discuss the case of deepfake development, which might be applied unethically by a negligent user. Techniques for manipulating audio, video, and images have advanced rapidly as a result of recent breakthroughs in artificial intelligence. The availability of inexpensive cloud computing, open-source AI techniques, and an abundance of data has created ideal conditions for the mass production of deepfakes for dissemination on social media. Deepfakes are harmful and can negatively affect society in general, both intentionally and unintentionally, by fabricating information [46]. Deepfakes, which are not just fake but also incredibly lifelike, might exacerbate the post-truth dilemma, since they deceive our most basic auditory and visual senses. It is immoral to create and refine fictitious digital identities for fraud, sabotage, or infiltration purposes [47].
Digital technologies and AI advancements have generated numerous innovative methods for talent discovery and assessment. Many of these technologies promise to make it easier and more affordable for businesses to locate the best candidates for open positions and to filter out those who are unqualified. But because datasets may contain a variety of biases, there is a higher chance that AI relying on historical data will fall short of its objectives; AI may cause discrimination, particularly when selecting employment prospects [48]. Furthermore, if the data were gathered in violation of ethical codes, the input used by an AI solution can violate individuals' right to privacy, because AI may access applicants' personal information through tools like face recognition software.
Similarly, these issues can also happen in other decision-making systems involving candidate selection, such as internship students and university admissions.
Another example of possibly unethical AI application is in bank loan selection. Recently, Consumer Financial Protection Bureau chief Rohit Chopra warned that the use of artificial intelligence in loan approvals might result in discrimination that is against the law [49]. In September 2021, Forbes revealed a shocking fact about AI bias in mortgage applications [50]: artificial intelligence and its embedded bias appear to be a persistent contributing element in the slow approval of loans for racial minorities. According to The Markup's analysis, lenders were more likely to reject home loans for applicants of color than for white borrowers with the same financial circumstances; in particular, Black applicants were 80% more likely, Native American applicants 70% more likely, and Latino applicants 40% more likely to be turned down [50].
In the academic field, AI can also be used by irresponsible actors for unethical applications. One infamous form of misconduct is the so-called paper mill: an AI-assisted, or even fully AI-automated, paper generator that can be used to fabricate random, illogical, low-quality academic papers. At present, these AI-generated papers are easy to detect and distinguish; since current paper mills are still far from perfect, we can readily find anomalies in such papers, ranging from low-quality sentences, grammatically incorrect sentences, and illogical claims to completely meaningless random text. Unfortunately, many of these papers still pass through publisher systems and end up published on journal websites. Indeed, most of these publishers were already considered 'predatory' publishers. With the rapid advancement of AI, it is not impossible that, in the future, paper mill-generated articles will become harder and harder to detect and eventually can no longer be identified.

Discussion and Conclusion
Despite the fact that AI has brought many advantages to our lives, as presented in the previous sections, it has also brought numerous ethical challenges. It is now well recognized that things can go seriously wrong if AI is implemented without due regard for its potentially harmful impacts on individuals, specific communities, and society as a whole (including, for example, bias and discrimination, injustice, privacy infringements, increased surveillance, loss of autonomy, and overdependence on technology).
To avoid these issues, strong and strict guidelines for AI development need to be formulated. In addition, commitments from companies as well as users are required to tackle misconduct in AI applications.
Nevertheless, AI also offers a potential way to diminish ethical and racial issues. However fair it tries to be, a human decision often carries inevitable subjectivity. AI, by contrast, has no subjectivity other than what is built into it. In other words, if an AI is built appropriately, without embedded biases, subjectivities, and fairness issues, then we can expect a fair and objective AI-based decision-maker. For instance, SHRM's Dave Zielinski [57] has described how AI may assist HR managers in preventing harassment, discrimination, and other ethical problems. He asserts that the development of artificial intelligence, machine learning, and natural language processing has given digital reporting platforms a fresh perspective. These technologies enable HR staff and other executives to be more proactive in spotting areas where misbehavior or ethical violations may be on the rise and to address them before they snowball into litigation, scandals, or unwelcome press headlines. Another illustration: AI systems have the potential to remove discriminatory and racial bias from bank loan systems. The decision of whether or not to grant a bank loan has historically been tainted by stereotypes against protected attributes, including race, gender, and sexual orientation, and such biases are evident in institutions' decisions on who receives credit and under what conditions [58]. The use of AI in this context could improve the fairness of the bank's decision-making, provided that the removal of bias from data before a model is developed, the selection of better goals for discriminating models, and the introduction of an AI-driven adversary are all properly considered [58].
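One concrete form of "removing bias from data before a model is developed" is the classic reweighing idea from the algorithmic-fairness literature. The sketch below is purely illustrative (the loan data are hypothetical, and this is not necessarily the specific method discussed in [58]): each training record receives a weight so that group membership and the approval label become statistically independent before any model is fit.

```python
# Illustrative sketch of the "reweighing" pre-processing idea: compute a weight
# for each (group, label) cell so that group and label look independent in the
# weighted training data. All figures below are hypothetical.
from collections import Counter

def reweigh(records):
    """records: list of (group, label) pairs.
    Returns per-cell weight w = P(group) * P(label) / P(group, label)."""
    n = len(records)
    group_counts = Counter(g for g, _ in records)
    label_counts = Counter(y for _, y in records)
    cell_counts = Counter(records)
    return {cell: (group_counts[cell[0]] / n) * (label_counts[cell[1]] / n)
                  / (cell_counts[cell] / n)
            for cell in cell_counts}

# Deliberately skewed loan history: group "A" approved 3 out of 4 times,
# group "B" approved only 1 out of 4 times (label 1 = approved).
data = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
        ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
w = reweigh(data)
print(w)  # w[("B", 1)] == 2.0: the under-approved cell is up-weighted
```

With these weights, the weighted approval rate becomes 0.5 for both groups, so a downstream model trained with sample weights no longer sees the historical group-label correlation; whether this suffices in practice depends on the data and the model, as the cited discussion notes.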
Lastly, we believe that, in its current state, AI is not ready to be deployed as an independent decision-maker. There are many cases in which such deployments have led to unwanted situations. For instance, in June 2022, it was reported that Tesla vehicles running Autopilot had been involved in 273 crashes reported since the previous year [59]. There is still a long way to go in both AI development and the formulation of AI ethical guidelines before we are able to fully exploit AI as an autonomous decision-maker.