Some robotic technology is already being used in the military, such as drones, and new applications are being tested all the time. As Patrick Lin states in the article “The Big Robot Questions” for Slate.com, “As any fallible human would be, roboticists and computer scientists are challenged in creating a perfect piece of very complex software. Somewhere in the millions of lines of code, typically written by teams of programmers, errors and vulnerabilities are likely lurking. While this usually does not result in significant harm with, say, office applications, even a tiny software flaw in machinery, such as a car or a robot, could potentially result in fatalities. In October 2007, a semi-autonomous robotic cannon deployed by the South African army malfunctioned, killing nine “friendly” soldiers and wounding 14 others. Experts continue to worry about whether it is humanly possible to create software sophisticated enough for armed military robots to discriminate combatants from noncombatants, as well as threatening behavior from nonthreatening” (Lin). Flaws in new products or machines are always expected; however, if machines are this dangerous in the proper hands, it is unsettling to imagine what such technology could do in an enemy’s hands. On the positive side, if we could take some of our troops off the front line and replace them with automated machines, this would most likely cut down the number of American casualties on the battlefield. However, there is also a downside to replacing a front-line soldier with a robot. Some decisions, such as whether to pull the trigger or wait and weigh possible civilian casualties, would be extremely difficult if not impossible to program into a machine. Such decisions require conscious thought, not just a program telling a machine whether a confirmed target should be engaged. This can all be summed