The development of artificial intelligence (AI) raises a range of ethical questions. Quach explores the negative side of recent developments in AI in her article, ‘How machine-learning code turns a mirror on its sexist, racist masters’ (Quach, 2018). She expresses the view that the data set, or information, learned and recycled by an AI may inevitably reflect the biased, sexist and discriminatory mindsets of its coders and users.
With this in mind, it is proposed that AI could be considered a dangerous blank slate: an artificial life waiting to be created. A system, if left unchecked, will …
To illustrate this, research examining the level of racial content televised in Chicago found that broadcasts would “commonly portray accused Black criminals in scowling mug shots or in video clips being led in handcuffs by White police officers” (Welch, 2016), thereby creating and promoting a negative stereotype among the primarily white, middle-class viewers. While the use of the media to create racial profiles and shape social attitudes has been explored in some depth, gender bias and sexist attitudes have only been examined with any seriousness through feminist scholarly investigation in the last fifty years.
Brooks and Hébert assert that the “media are crucial in the construction and dissemination of gender ideologies and, thus, in gender socialization” (Brooks & Hébert, 2006, pp. 297–317).
Brooks and Hébert also explain how there has been a general “neglect which women of colour, specifically black women, have experienced through their selective [exclusion] in the writings of feminist cultural analysis” (Brooks & Hébert, 2006). This reference brings together the two strands of media bias and discrimination: the one concerning race, the other concerning …
“Tay’s goal was to learn and mimic the personality of a 19-year-old woman, and it would appear [as hoped] that popular social networks among millennials were a great place for Tay to learn from” (Pierson, 2016). Unfortunately for Microsoft, Tay was manipulated by internet trolls into learning and repeating a range of racist and defamatory concepts and beliefs. Pierson explains this by saying that “Tay had the ability to incorporate new ideas into her own, but no guiding mechanism in place to help her to identify useful and useless information” (Pierson, 2016). Another example of the limitations of an AI system in comprehending and working within a very human world was seen with Telstra’s development and use of Codi as a service-providing chatbot. Telstra developed the chatbot to offer capabilities that a human agent cannot: knowledge of multiple languages, the ability to handle multiple conversations at once, faster response times and cost effectiveness (Live Chat Agents vs Chat Bots: The Future of Customer Service, 2017). Yet, as with any multi-level platform, there will always be the possibility of unforeseen
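The failure mode Pierson describes can be made concrete with a toy sketch. The code below is purely illustrative and does not reflect Microsoft's actual implementation: it contrasts a bot that, like Tay, absorbs every phrase it hears with no guiding mechanism, against a hypothetical variant that screens input against a (here, invented) blocklist before learning from it.

```python
class MimicBot:
    """Toy bot that learns by absorbing input verbatim, with no filter."""

    def __init__(self):
        self.learned = []

    def hear(self, phrase):
        # No guiding mechanism: every phrase is incorporated without question.
        self.learned.append(phrase)

    def speak(self):
        # Repeats the most recently learned phrase.
        return self.learned[-1] if self.learned else ""


class FilteredMimicBot(MimicBot):
    """Variant with a guiding mechanism: a hypothetical blocklist filter."""

    def __init__(self, blocklist):
        super().__init__()
        self.blocklist = blocklist  # assumed list of banned terms

    def hear(self, phrase):
        # Reject input containing any blocked term before learning it.
        if not any(term in phrase.lower() for term in self.blocklist):
            self.learned.append(phrase)


naive = MimicBot()
naive.hear("hello there")
naive.hear("offensive troll phrase")   # absorbed without question
print(naive.speak())                   # the troll's phrase comes straight back

guarded = FilteredMimicBot(blocklist=["troll"])
guarded.hear("hello there")
guarded.hear("offensive troll phrase")  # filtered out before learning
print(guarded.speak())                  # still the benign phrase
```

The point of the contrast is that the difference between the two bots is a single screening step at learning time; without it, the quality of the bot's output is entirely at the mercy of whoever supplies the input.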