For example, suppose a man and a child step onto the road while the car is approaching at a speed at which it cannot stop. The car can swerve to avoid one of them, but only by hitting the other at full speed. The question, then, is which one does it hit? The article “Why Self-Driving Cars Must Be Programmed to Kill” explains that “before they can become widespread, carmakers must solve an impossible ethical dilemma of algorithmic morality.” It is therefore important to ask how autonomous cars should be programmed to react in an unavoidable crisis. Should the car minimize the death toll, even if that means sacrificing its own passengers, or should it protect the passengers at all costs? Should it choose between these extremes at random? The answers to these moral questions are essential, since they could profoundly affect how autonomous cars behave in critical situations. Can we really trust a car with human-like