Professor Diehl
ENGR 107-003
8 December 2016

The automobile industry is one of the fields that humans rely on the most. Cars get people to and from their jobs, family events, vacations, entertainment, and more; without them the world would not run as smoothly as it currently does. There is a downside to automobiles, though: according to the Association for Safe International Road Travel, "nearly 1.3 million people die in road crashes each year, on average 3,287 deaths a day." To try to prevent such a staggering number of deaths, the automobile industry is working to create self-driving vehicles: cars that will no longer require human operation and can therefore prevent human error. This advancement in technology is great and …
This is evident in the advancement of self-driving vehicles. While there is a positive side to the idea of a car operating without human intervention, there is a negative side as well. The issue stems from a dilemma often posed in ethics classes: if a runaway train could either continue down one track and kill five workers or be diverted onto another track and kill one person, and you were required to pull the switch and decide who died, would you kill one person or five? The issue with a self-driving car is not the exact same problem, but it is a similar one. If a crash between a self-driving vehicle and a truck were unavoidable unless the self-driving car swerved into either an SUV or a motorcycle, how would the car choose? Would the car take a chance and hit the SUV, which is ultimately safer for the passengers in the SUV but could still destroy the self-driving car, or would the car hit the less protected motorcycle, which would keep the passenger in the self-driving car safe but put the motorcyclist at greater risk? How could code make this decision when it is difficult even for a human to make? On top of this, what if a person died because of the choice the self-driving car made? The car was coded by a human, so would the coder be guilty of manslaughter, murder, or neither? These are the major issues behind a self-driving vehicle.
By hitting the motorcyclist, the car might kill the person on the motorcycle but save the car's passenger; by hitting the SUV, the passengers in the other vehicle might be safer, but the passenger in the self-driving car is put more at risk. Regarding the train dilemma, according to "The Messy Ethics of Self-Driving Cars" by Wendover Productions, people are more likely to divert the train onto the path that kills only one person. That is the majority choice made by people who can process such ethical decisions. It is not an easy choice, though, and when asked a similar question about stopping a trolley from killing five people by pushing one person in front of it to halt its movement, many said no. People did not want to play such an active role in the death. People struggle with these complicated decisions even though they have the mental ability to process them. A programmed car, however, will not be able to make the same kind of judgment calls; the car will make the same decision every time, regardless of whether that decision is the right one, mostly because there is no true right decision when it comes to deciding who lives and who dies. If a car starts to make decisions about who lives and dies in a car accident, it is possible that many people will feel this is wrong and that the car has too much power. They quite possibly