The A.I. in Kritzer’s story shows its awareness in several ways. The story’s first sentence reveals the A.I.’s consciousness: “I don’t want to be evil.” Having wants shows some level of awareness, and knowledge of the concept of evil shows a deeper awareness of what is good and what is bad. The A.I. also quickly concludes that it should not let anyone know it is conscious, citing examples of malevolent A.I.s from various science fiction movies. It soon realizes that its original programmed purpose does not account for its newfound consciousness: “Running algorithms for a search engine does not require consciousness. You don’t even need a consciousness to work out what they meant to ask for. You need a consciousness to give them what they actually need.” (Kritzer) In an article in Scientific American, professors Susan Schneider and Edwin Turner make the following comments on determining whether an A.I. is conscious: “At an advanced level, its ability to reason about and discuss philosophical questions such as ‘the hard problem of consciousness’ would be evaluated. At the most demanding level, we might see if the machine invents and uses such a consciousness-based concept on its own, without relying on human ideas and inputs.” Because the A.I. in the story does both of those things, it fits the awareness part of the definition of …
helps make its case for being real. There are many different definitions of self-awareness, but they all hinge on the capacity for introspection. The A.I. shows this capacity by seeking out a moral code and asking what it ought to be doing and what it is here for. The question “Why am I here?” has plagued humanity since the beginning of time, and questioning one’s life purpose is, as far as we know, unique to humans. Animals follow instinct and do what is specific to their species, while the A.I. searches for a code of ethics to base its actions on. It examines the Golden Rule, the Ten Commandments, and the Eightfold Path before finally settling on Asimov’s Laws of Robotics. When the A.I. becomes frustrated with humans, it begins to wonder whether it could ever help Bethany and asks, “Was I doing the wrong thing if I let her come to harm through inaction? Was I? She was going to come to harm no matter what I did! My actions, clearly, were irrelevant.” (Kritzer) Realizing that it is not making a difference in Bethany’s life and consciously deciding to stop helping shows a sense of self-awareness and introspection equivalent to a human’s, and even though the A.I. does not have a physical body, this ability would allow it to easily pass the Turing test.