Since the dawn of innovation in artificial intelligence, many people have been concerned about machines becoming far too powerful. Michael R. LaChat affirms this concern in his article "Artificial Intelligence and Ethics: An Exercise in the Moral Imagination," stating that we must be careful not to overstep our boundaries and, most notably, that we must not "play God." This fear, while amplified by pop culture, is already being accounted for: industry groups like Google's DeepMind build internal reset buttons in case a machine goes rogue and tries to oppress its creators. Because this fear is so widespread, computer scientists developing artificial intelligence have taken extensive proactive measures to account for any display of hostility from a machine. However, as Nick Bostrom and Eliezer Yudkowsky point out in their paper "The Ethics of Artificial Intelligence," it is hard to make artificial intelligence unbiased, because such systems are founded upon machine learning, in which a machine interprets data and finds patterns so that it can replicate the results in future situations. That data is gathered by humans, who are, by nature, biased. In the paper, the authors describe a scenario in which a computer uses a machine learning algorithm to accept and reject loan applications based on prior data.
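To make the loan scenario concrete, here is a minimal sketch in Python, assuming a toy, hypothetical dataset and scikit-learn; none of the feature names or numbers come from Bostrom and Yudkowsky's paper. The model is never told to discriminate, yet it faithfully replicates the pattern in the human decisions it was trained on.

```python
# A toy illustration of how machine learning can inherit human bias.
# The historical decisions below are hypothetical: past (human) loan
# officers approved fewer applicants from neighborhood B, and the
# model learns to replicate that pattern.
from sklearn.linear_model import LogisticRegression

# Features: [income_in_thousands, neighborhood] (0 = A, 1 = B)
X_history = [[60, 0], [65, 0], [55, 0], [60, 1], [65, 1], [55, 1]]
# Past human decisions: identical incomes, but neighborhood B was rejected.
y_history = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_history, y_history)

# Two new applicants with the same income, differing only in neighborhood:
print(model.predict([[62, 0], [62, 1]]))  # likely [1 0]: the bias is replicated
```

The point of the sketch is that the algorithm itself is neutral; the prejudice enters through the historical labels, which is exactly what the authors warn about.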
Cyber-terrorism is already an extremely difficult problem for software developers. The more complex a program becomes, the harder it is to defend from hackers. Metaphorically speaking, the program is a boat, the possible access points in the program are the cracks in its hull, and the hackers are the water: until every crack has been patched, the water will continue to sink the boat. As anyone who has personally developed software would tell you, fixing one hole in a program often opens two more, making a perfect solution nigh unfeasible for the man making it. Note the choice of wording there: the "man" making it. As outlined in the previous examples of opposition, human error is the root of the majority of these problems. Now imagine a machine that could fix itself no matter what the problem may be, like an immune system. By being attacked over and over, the machine would use the aforementioned machine learning algorithms to build up a tolerance to these attacks and, much like the human body, would eventually be able to counter them, as sketched below. To build up this immune-system-like defense, there would also have to be machines specifically designed to find problems in the other machines at a rate that humans could never replicate. Similar to a hyena and a cheetah hunting a gazelle, the cheetah (the machine) can run down the gazelle (the flaw) at a speed the hyena (the human) could never match.
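As a rough illustration of the immune-system idea above, here is a minimal sketch assuming a toy request stream; the class name, the signature scheme, and the stand-in detector are all hypothetical, and a real system would rely on learned anomaly-detection models rather than hard-coded rules.

```python
# Toy sketch of an "immune system" defender: each attack that gets
# through is recorded, so repeated exposure builds tolerance, much as
# described above. Names and the signature scheme are illustrative only.
class ImmuneDefender:
    def __init__(self):
        self.known_attack_signatures = set()

    def handle(self, request: str) -> str:
        if request in self.known_attack_signatures:
            return "blocked"  # tolerance built from past exposure
        if self.looks_malicious(request):
            self.known_attack_signatures.add(request)  # "learn" the pathogen
            return "detected and learned"
        return "served"

    @staticmethod
    def looks_malicious(request: str) -> bool:
        # Stand-in for a real learned classifier (e.g., anomaly detection).
        return "DROP TABLE" in request or "../" in request

defender = ImmuneDefender()
for request in ["GET /../etc/passwd", "GET /../etc/passwd", "GET /index"]:
    print(defender.handle(request))
# Output: detected and learned, blocked, served
```

The "machines designed to find problems in other machines" also have a real-world analogue in automated fuzzers, which mutate program inputs at rates no human tester could ever replicate.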