The second, and perhaps the most popular, concern is that even if an AI system does not deviate from its intended goal and behavior, the designers themselves may give AI systems goals that conflict with the goals of some of us. The pinnacle of this concern is AI systems possessing self-interested goals, that is, systems whose purpose is not to serve humans but to take care of themselves. Here I will address only this particular AI-safety concern.
The fear of entities with self-interested and potentially conflicting goals is not new. Animals naturally fear other organisms, especially their peers, with different goals.
What I oppose is the idea of AI systems with general super-human capabilities that are nonetheless servile to humans. Not only does this appear to be an incoherent idea; the thought of seeking the servitude of an AI system that can think in an open-ended manner, even with super-human capacity, seems dystopian. Here, however, I would like to dwell on my objection that this line of thought is incoherent. We cannot have an autonomous system that both possesses general super-human capabilities and remains under the substantial human control required to obtain its full and committed servitude. Any entity over which we have substantial control is, by definition, of sub-human capability.