The philosophers discussed how "morality cannot be boiled down to a list of instructions" and "that no matter how complicated a machine becomes, it will never be able to act for the right reasons." They fear that artificial intelligence cannot place a reason behind an action and lacks accountability for what it does, which could create serious problems with fully autonomous weapons, akin to deploying psychopathic but well-mannered soldiers onto the battlefield. They close with a provocative question: if everything works out and artificial intelligence becomes better than humans at decision making, what stops humanity from outsourcing all of its decision making to artificial intelligence?