Super Clinician Analysis

implemented into the psychological field, it would need the ability to learn, and this is where the potential danger comes into play. If AI has the ability to learn, it can develop into Strong AI, that is, AI that has surpassed human intelligence, and could pose a grave danger to society. He goes on to claim that artificial intelligence could, in a way, represent a "Super Clinician." This super clinician could be better than a human practitioner because it could be built with advanced technology able to perceive the human body in ways a human cannot. In his work, Artificial Intelligence in Psychological Practice: Current and Future Applications and Implications, Luxton states, "The super clinician could be built with advanced sensory technologies such as infrared …
Through this, Luxton expresses the possibility and potential effectiveness of these technologies, and how they could even surpass our ability to diagnose and treat individuals. I agree with Luxton's statement wholeheartedly, and I recognize the overwhelming pros, but also the dangerous cons, of the situation. AI can be amazing for us: it can reduce doctor and human error in diagnosis and even detect things that we are unable to without tools, but it also raises the issue of the technology surpassing our knowledge. He also makes reference to IBM's Watson, a supercomputer that has the ability to learn but is restricted. He references, "[There is a] commercially available version of Watson that has learned the medical literature, therefore allowing it to serve as a medical knowledge expert and consultant (IBM, 2013)" (334). This computer is classified as an expert, and it likely has more than enough 'qualification' to assume the position of a clinical …
He believes that because technology is purely logical, it lacks the empathy and human discretion that a human medical professional has. This could be dangerous to patients, who may not get the treatment they need because a computer cannot consider circumstantial and emotional factors the way a human can. In his view, these systems do not have accessible memory and cannot retain new information. They are very two-dimensional beings, and this is why they simply cannot be utilized for psychological diagnostics or treatment. This leads to ignorance in the machines, as they know only what we program them to know. That cannot be acceptable in the medical field, because so many circumstances must be taken into consideration in this work: what led the individual to the state they are in, what the valid options for treatment are, and possibly how the condition was treated in the past. Because of this, and because of a robot's lack of empathy, such systems can be dangerous in treating individuals. I partially agree with Sharf on this point, as I see his concern. Artificial intelligence is not human, and it cannot acquire emotions such as grief or empathy. A robot cannot empathize with a human the way another human can, and they