Thomas focuses deeply on humans’ fear and their intellectual journey through the creation of computers and technology. He states, “I used to worry that computers would become so powerful and sophisticated as to take the place of human minds. The notion of the Artificial Intelligence used to scare me half to death. Already, a large enough machine can do all sorts of intelligent things beyond our capacities:...Computers can make errors, of course, and do so all the time in small, irritating ways, but the mistakes can be fixed and nearly always are. In this they are fundamentally inhuman, and here is the relaxing thought: Computers will not take over the world; they cannot replace us, because they are not designed, as we are, for ambiguity” (Thomas 427-78). Through this, Thomas explains that because of our imperfections and ambiguities, it is not possible for artificial intelligence, like computers, to take over the world. These characteristics do bring unwanted fear into our minds, but those fears can be nullified through our intellectual development. With the help of Gavett’s article, this topic can be broadened even further, which brings a clearer understanding of it. In her article, Gavett discusses the Challenger explosion experiment, stating, “researchers came to these conclusions after putting volunteers through several experiments. In the one, subjects had to decide whether or not a car should be cleared for an upcoming race – a situation modeled directly after the Challenger explosion. One piece of crucial information – the likelihood of a gasket failure (99.99%) – was omitted, but available via a link. Later, the same group was given a similar test in which they had to identify a potential terrorist, with additional information available via email. Those who had taken responsibility for their failure to prevent a car