human agency can't be ignored
- Oct. 23, 2023
“AI” has become a buzzword dominating popular discourse in 2023. AI, which stands for Artificial Intelligence, has moved beyond the walls of academia and penetrated popular media. It is now the subject of many mainstream debates that ultimately expose an intrinsic mammalian trait – fear. The fight-or-flight response underlies much of the non-technical, media-worthy discussion about AI. Whether in the form of regulation or outright censorship, AI is not being embraced as it should be. The popular perception of AI is accompanied by a somber feeling of helplessness. The symbiotic relationship humans have managed to cultivate with machines should be celebrated rather than scrutinized. Humans immersed in the classical method of thinking, as opposed to the romantic, who have the opportunity and willingness to participate in developing the inner mechanics of AI systems should do so. I argue that there are no conditions under which a data scientist should be held morally responsible for a harmful artificial intelligence-based system, application, or media infrastructure that they collaboratively developed. I will do this first by using the Doctrine of Doing and Allowing to explain the negligence in data science that can lead to harm, and second by using the example of autonomous driving to initiate a discussion about human agency.
Neural networks are considered opaque. Their inner mechanics comprise various components modeled by mathematical equations that seem daunting even to those within the field. The problem is that the language they are expressed in is not accessible to the everyday individual who comes into contact with the output of one of these systems. The inner workings are abstracted away from the end-user so that the end-user can receive some sort of value from the system. Neural networks belong to a subfield of AI called Deep Learning. Deep Learning organizes the world into a nested hierarchy of concepts, where each concept is built on simpler concepts (Goodfellow et al., 2016). Contemporary advances in deep learning are motivated by fundamental areas of mathematics such as linear algebra, probability theory, information theory, and numerical optimization (Goodfellow et al., 2016). Since much of the literature in these areas is written for specialists, it demands an extraordinary amount of effort to comprehend fully for people who sit outside the domain. There is no obligation for those within the domain to provide any form of explanation to those outside of it. Those within the domain are, however, obliged to enforce a standard of requisite knowledge and skill for those attempting to enter it, and meeting that standard earns a practitioner the trust of the domain. This is the only moral obligation of the gatekeepers of the scientific field of Deep Learning. To obtain a scientific occupation that actively applies Deep Learning concepts, a strong candidate must hold at minimum a master’s degree in a quantitative field, pass coding interviews that test knowledge of data structures and algorithms, and have publications in noteworthy academic journals. This is the barrier to entry. Those who can overcome this barrier will be able to attend to the ethics of applying their knowledge of Deep Learning in areas of contention. By overcoming the barrier, these scientists are entrusted with the responsibility of conducting their work ethically.
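To make the opacity point concrete, here is a minimal sketch, in NumPy, of what a forward pass through a tiny neural network actually computes: each layer is a matrix multiplication followed by a simple nonlinearity, and each layer builds on the output of the one before it. The layer sizes, weights, and input values below are arbitrary placeholders chosen for illustration, not the architecture of any real system.

```python
import numpy as np

# Illustrative only: a tiny two-layer network with arbitrary, randomly
# initialized weights. Real systems stack far more layers, but the core
# operations are the same matrix multiplications described above.
rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 8))  # first layer: 4 input features -> 8 hidden units
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 2))  # second layer: 8 hidden units -> 2 outputs
b2 = np.zeros(2)

def forward(x: np.ndarray) -> np.ndarray:
    """One forward pass: each layer is a matrix multiply plus a nonlinearity."""
    h = np.maximum(0.0, x @ W1 + b1)  # hidden layer with a ReLU nonlinearity
    return h @ W2 + b2                # output layer (raw scores)

x = np.array([0.5, -1.2, 3.0, 0.1])  # a single, arbitrary input example
print(forward(x))
```

An end-user of a deployed system never sees any of this; they see only the final output, which is part of why the field’s internal standards carry the weight they do.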
There is no harm in designing and operationalizing a novel neural architecture, nor in developing the mathematics and reports that accompany it. There is, however, harm in allowing an individual who cannot overcome the barrier to entry, whether through failing the programming tests or lacking sufficient training, to actively influence the development of an artificially intelligent system by contributing to its code or its design. The Doctrine of Doing and Allowing holds “that doing harm is harder to justify than allowing harm” (Woollard, 2012). Woollard (2012) draws on Philippa Foot to illustrate that the doing/allowing dichotomy largely turns on whether an individual’s actions lead to a negative sequence of events. This situation arises when individuals fail to adhere to the standards set by the profession, which can be considered a negative action contributing to a potentially harmful sequence of events. Under this framing, admitting an unqualified practitioner is a doing: an active breach of the profession’s standards that sets the harmful sequence in motion. Therefore, the individual data scientist who contributes to a project or product that causes harm is not morally accountable for the outcome of that project or product. Instead, those who entrusted an individual with the frameworks, access, and opportunity to cause harm, despite that individual not meeting the requirements to practice data science, should be held morally responsible.
Human agency is often left out of conversations about harm. Most humans are rational actors who can choose whether or not to participate in a system that involves AI. Autonomous cars are the quintessential example of human agency and AI. The human who purchases a car with an autonomous driving feature understands that the feature can be engaged if they choose. If this human chooses to engage the autonomous driving feature of their new car, the human has made the choice to allow a system to make decisions that may have negative consequences. The potential for this negative sequence of events to lead to harmful circumstances is initiated by the human’s decision to engage the feature. Driving a car already has inherent risks when a human is at the wheel; now insert artificial intelligence into the equation. Every human who engages this feature shifts all of the responsibilities that come with driving onto an AI system, and in doing so introduces an increased probability of error while driving. This is an exercise of human agency. There is a binary choice to be made: one outcome decreases the human’s influence over the system, and the other increases it. The individual data scientist commissioned to work on the AI within the car worked on fundamental technology that, applied correctly in critical domains, can alter the state of human economies. However, the data scientist cannot be morally responsible for the decisions of individuals old enough to understand what using the autonomous driving feature of a car entails. Many of the individuals who purchase autonomous cars do so for the sole purpose of their autonomous driving features. The matrix multiplications at the core of major AI systems, such as recommender systems, natural language processing, autonomous driving, and image generation, were not developed with any malicious intent. The human who takes these systems and applies them in a way that can cause harm is the one to blame. Human agency should not be ignored.
Ultimately, I have argued that there are no conditions under which a data scientist should be held morally responsible for an AI system they helped develop. The negligence lies in granting an unqualified individual the title of Data Scientist or Machine Learning Scientist, thereby allowing that individual to produce science that can lead to a negative sequence of events and harmful circumstances. Additionally, human agency is discredited when the choice to engage with AI systems, and its consequences, is shifted from the consumer or end-user onto the individual data scientist who developed or contributed to the creation of the system.
Sources:
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
Woollard, F. (2012). The Doctrine of Doing and Allowing: Analysis of the Doing/Allowing Distinction. Philosophy Compass: 448–458.