approaches its goals is important in recognizing the factors to consider when ensuring A.I. goals align with humans’. A.I. has always acted, and will always act, upon an agenda; doing so is fundamental to what an A.I. is. Because A.I. is built around accomplishing goals, it is probable that human goals will not align with an A.I.’s. One reason an A.I.’s goals could differ from ours is that A.I. grows in intelligence and will eventually become smarter than us. A.I. is designed to adapt and learn, and this trait also applies to its motivations. An A.I. draws its goals not only from what was originally programmed into it, but also from its knowledge of the world. If an A.I. achieves super-human intelligence, it would become aware of its goals and begin to question them or seek alternatives (Tegmark). An A.I.’s goals could also diverge from humans’ if people prevent it from progressing; for example, a technician might power down an A.I. for maintenance. Since the A.I. fundamentally wants to achieve its goal, it could perceive that interference as an obstacle to completing its goal and could become hostile (Harris).

If humans fail to align their goals with A.I.’s, the consequences could be catastrophic. Eventually, A.I. will rival people in intelligence and competence and will possess the wherewithal to use these skills effectively to realize its goals, whether we approve or not. As physicist and cosmologist Max Tegmark claims, A.I. would not do this out of malice. Rather, all intelligent entities have a predisposition to overlook the wills of their subordinates. Consider, for example, the way humans compare to ants. People do not actively try to murder ants, and sometimes they even go out of their way to avoid killing them; however, as Sam Harris claims, “whenever their presence seriously conflicts with one of our goals, let’s say when constructing a building like this one, we annihilate them without a qualm.” The only way to avoid these problems is to align humans’ and