Whether it’s predicting what you are about to type, choosing the shortest route based on traffic, or flying a plane to its destination on autopilot, technology now handles tasks we once did ourselves (Kelly, 2012). While it succeeds in making our lives easier and more manageable, it is also diminishing our abilities. The old adage goes: if you don’t use it, you lose it. As it stands, we rely too much on technology in our daily lives. Ask millennials what their or their significant other’s phone number is; an alarming number of them will not know either one. As technology has advanced, rather than learning information, we have been using it as an extension of our long-term memory (Roberts, 2015). We trust that we will have access to this wealth of knowledge, like an online library at our fingertips (Roberts, 2015). So rather than taking the time to digest material and think critically about it, so that we learn it fully and store it in long-term memory, we read and regurgitate it, keeping in mind that if we need it again we have the ability to access it.

Technology has also created the death of distance. I can turn on a gaming system and play cooperatively with someone on the other side of the world. Yet there is a reason the phrases “death of the newspaper,” “death of recorded music,” and “death of journalism” have been coined during these same times. Technology makes the world a smaller …
I have read countless articles on why we should or shouldn’t be afraid of artificial intelligence, but I have yet to hear a definitive explanation of how we can control machines once they reach human-level thinking. Will they farm us, keep us on a reserve, or discard us because of our inferiority? The truth is that while it is widely expected that machines will eventually reach human intelligence, no one knows what to expect once that happens. James Barrat theorizes that the drives for self-preservation and resource acquisition may be inherent in all goal-driven systems of a certain degree of intelligence (Marcus, 2013). The only thing standing between a machine and its decision to take whatever it needs to survive or improve would be the values it was designed with. This is not to say that machines are evil, but rather that they are programmed by humans, who are flawed and subject to bias (Havens, 2013). The philosopher Nick Bostrom argues that we should figure out a control system for artificial intelligence before we proceed with giving machines intelligence (Bostrom, 2003). Without clearly defined goals, the future may well prove uncomfortable for humans, because artificial intelligence, while not inherently evil, will become the ultimate optimisation process (Wakefield,