Other concerns regard the possibility that the designer may himself provide AI systems with such goals.
But how can we give up on our hope of having superhuman AI? And how can we guarantee both superhuman AI and a safe future for humans? We cannot. The future is precarious, and any promise of certain safety is a scam. What we can do is deliberate on the different possibilities the future holds. Superhuman AI, over which we will have no significant control, will likely not be servile to us. More ominously, such systems may have self-interested goals. However, a violent demise of humans is by no means the only outcome of having superhuman AI. In fact, self-interested AI is not only feasible; it may also be beneficial to us. Let us consider this possibility.
This is due more to concerns of practicality and scalability. If we make a million units of a particular kind of robot, maintaining them and providing for their livelihood will become a tall order. Who should be responsible for all that? The manufacturers, the governments, or the owners? In each of these cases, the reliance is ultimately either on humans or on a centrally organized system, and the latter is not known for scaling well. If we seek scalability beyond reliance on humans, handing these responsibilities over to the AI unit itself seems appropriate. Being autonomous, such AI systems can naturally be expected to take care of these essential tasks concerning their own lives. This is also a natural solution that allows large-scale deployment.