The Ethics of Autonomous Vehicles
M.I.T. Moral Machine Exercise
An ethical dilemma is a scenario in which a choice must be made between two options, neither of which resolves the situation in a fully acceptable way. In such a scenario the decision-maker must choose the “lesser of two evils.” Autonomous, or self-driving, vehicles have the potential to significantly reduce the overall number of traffic fatalities by removing human error from the equation. However, considerable questions have emerged about how autonomous vehicles should be programmed and regulated to navigate various real-world ethical dilemmas.
Imagine the following scenario involving an autonomous vehicle. A single passenger is riding in an autonomous vehicle that is obeying all vehicular traffic rules. The passenger has no control over the vehicle’s movement. In the path in front of the vehicle, two pedestrians are crossing the street in a crosswalk. The pedestrians are obeying all safety rules and have a green light indicating that they have the right of way. Suddenly, the autonomous vehicle experiences a malfunction and has only two options: (1) swerve off the road and kill the passenger, thus saving the pedestrians from harm, or (2) continue straight through the crosswalk and kill the two pedestrians, thus saving the passenger from harm.
When a human is involved as a driver in a traffic accident resulting in injury or death, a driver’s split-second reaction is considered random, instinctual, and non-discriminatory. The driver’s reaction is understood as being made with no forethought or malevolent intent. In contrast, autonomous vehicles are required to be programmed beforehand to determine what course of action to take. For example, a vehicle could be programmed to prioritize driver safety, or to minimize danger to others. Thus, the outcome of accidents involving autonomous vehicles would potentially be decided by programmers or policymakers long before the accident occurs.
Let’s now consider two opposing paradigms that can be applied to autonomous vehicle programming and policy. According to the ethical paradigm of utilitarianism, the most ethical course of action is the one that produces the greatest good for the greatest number of people. In this way, utilitarian ethics seeks to minimize harm to all parties involved; the ends (here, the greatest good for the greatest number of people) justify the means. If an autonomous vehicle were programmed to reflect utilitarian ethics, it would seek to achieve the greatest good for the greatest number of people. In the scenario described above, the vehicle could be programmed to swerve off the road, killing the passenger in order to avoid crashing into the two pedestrians.
An alternative ethical paradigm is duty-based ethics, which holds that the most ethical course of action is to do the right thing in the moment, regardless of the good or bad consequences that may result. In this way, duty-based ethics prioritizes principles over consequences. As an example, the philosopher Immanuel Kant proposed that it is wrong to tell even a little white lie in order to save a friend from being murdered. Applied to autonomous vehicles, if a vehicle were programmed to adhere to the maxim of preserving the vehicle’s passenger(s) at all costs, it could potentially kill multiple pedestrians in order to save a single passenger.
Consider briefly which of the two approaches (duty-based or utilitarian) you would choose if you were in charge of programming autonomous vehicles. Which type of vehicle would you prefer to ride in as a passenger? Would it make a difference in your decision if, for example, the passenger were a close family member or someone you have never met? Would it make a difference if the pedestrian were a child or an elderly person? Would it make a difference if the pedestrian were a close friend or a felon bank robber?
Directions: To provide context for this exercise, we will first watch the following two brief video clips:
After watching the videos, we will individually complete the online M.I.T. Moral Machine interactive exercise following the steps below and then answer the questions.