Despite the alluring grandeur of having machines perform our hard and tedious work, a number of concerns loom that cloud the case for advancing further until hard questions have been answered. We humans will always be divided in our beliefs about where our societies should go, and this is especially true of the uncertainties that come with making machines smarter than we are. For the most part, we, the general public, have little to no control over where industry is taking us or how it shapes our cultural infrastructure. We continue to consume and digest whatever comes next without questioning its outcome.

Continued advances in artificial intelligence will inevitably produce machines that can outthink humans on a global scale, so it is important to understand what the long-run consequences will be. Such a future could bring job losses, heightened fear, a loss of independence as we grow more reliant on machines, and the potential for these machines to network with one another to the point where we lose privacy and social rights. Is it possible for AI to take control of its creators? How can we protect ourselves from such an outcome? Can we be proactive in managing these advances? How do we stop it?
Has anyone asked the human race how we feel about where we are going with AI? With that question in mind, it may be worth reading the following publication from Survivolpedia.