Yudkowsky outlines his concerns with the statement that “Artificial Intelligence might increase in intelligence extremely fast” (2008, 17). He goes on to say that Artificially Intelligent systems become “smarter at the task of writing the internal cognitive functions . . . including smarter at the task of rewriting existing cognitive functions to work even better” (Yudkowsky 2008, 18). Yudkowsky’s tone conveys caution not about the fact that the system’s intelligence would increase, but about the speed at which that intelligence would increase. The algorithms set forth by AI programmers establish how an Artificially Intelligent system will learn and acquire intelligence. Yudkowsky would agree that designing a system for rapid intelligence acquisition might seem like a beneficial strategy, but the system’s intentions would have to be strictly harmless to non-AI systems. Those non-AI systems predominantly include humans, but extend to all natural systems on Earth. Yudkowsky takes a cautionary approach to the rapid development of an AI system’s intelligence out of concern that the system’s intentions could change with a deliberate revision of its own cognitive functions.