Indeed, Bill Gates, Stephen Hawking, and Elon Musk have all predicted the potential end of our existence with the development of full artificial intelligence (Dredge). These fears seem to originate in the idea that artificial intelligence will evolve to the point that its intellectual capacity supersedes our own. Specifically, their fears relate to the creation of artificial general intelligence, as opposed to artificial narrow intelligence.
As Bostrom said, “Suppose we have an artificial intelligence whose only goal is to make as many paper clips as possible, that machine might decide that wiping out humanity will help it achieve that goal - because humans are the only ones who could switch the machine off, thereby jeopardizing its paper clip-making mission” (Rise of the Machines). The common but unarticulated thread in all of these concerns does not lie with the danger of the artificial systems themselves. Rather, these dangers could exist only if their programming were inadequate or ineffective. It is for this reason that many scientists, entrepreneurs, and technical leaders have expressed concerns over the growth of artificial intelligence. Not unlike the dangers of nuclear weapons, the power of artificial intelligence, if not used with caution and respect, could expose us to the doomsday scenarios that opponents describe. But as with other technologies that carry both danger and benefit, developers have the capacity to control and limit risk. Proper programming will enable these intelligences to understand human behavior and act within a set of rules and guidelines.