Dangers Of Artificial Intelligence

60 Minutes ran a feature segment on A.I. (Artificial Intelligence). My knowledge of computer technology is below average, consisting mainly of checking email and surfing the internet. Like most people, I knew computers were taking over many jobs once done by humans, and I had seen footage of auto-industry robots assembling automobiles. I also remember seeing video of a warehouse where floor robots delivered boxes from one location to the next.
A.I. is different. While watching this segment with family members, I commented, “With little prior knowledge of this technology, I essentially wrote of this in my book. We just witnessed the platform for The Mark of The Beast.”
Until recently, computers have been used to do specific jobs and calculations.
Appearing on the TED Talks stage, Nick Bostrom said, “Artificial Intelligence used to be about putting commands in a box...since then, a paradigm shift has taken place in the field of artificial intelligence. Today the action is really around machine learning...We create algorithms that learn...The result is A.I. that is not limited to one domain. The same system can learn to translate between any pairs of languages, or learn to play any computer game.”

Bostrom surveyed the world’s leading A.I. experts, asking, “By what year do you think we will achieve human level intelligence?” (human-level intelligence being the ability to perform any job at least as well as an adult human). The median answer was the years 2030-2040. Bostrom then states that in this century we are going to see an intelligence explosion. He states, “we should not be confident that we can keep a super-intelligent genie locked up in a bottle forever. Eventually it’s going to find a way out. I believe that the answer here is to figure out how to create super-intelligent A.I. that, even if/when it escapes, is still safe because it is fundamentally on our side, because it shares our values. I see no other way around this difficult problem...We would create an A.I. that uses its intelligence to learn what we value.”

Of course, what Bostrom does not consider, and perhaps does not even recognize, is that man is fallen. Exactly whose values do we use as a starting point to develop this A.I.? How do we prevent other countries with a differing value system from instilling a different set of values, ones that may design machines that eventually see humans, especially Christians, as a
