If you've been keeping up with movies and shows like Ex Machina and Westworld, you know the current science-fiction vision of the future, one I hope never becomes a dystopian non-fiction present. Inspired by the 1973 film, HBO's Westworld tackles the idea of taking robotics and Artificial Intelligence (AI) to the extreme, while Ex Machina examines the morality of creating artificial "life".
It would seem that Elon Musk, the billionaire founder of SpaceX and Tesla Motors, has been binge-watching some AI sci-fi, leading him to ponder our future. He has famously denounced AI progression, tweeting, "We need to be super careful with AI. Potentially more dangerous than nukes...Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable".
The inherent dangers of such powerful technology have inspired several leaders in the scientific community to voice concerns about Artificial Intelligence.
During a Reddit AMA, Bill Gates stated, “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
And Stephen Hawking wrote in an op-ed for The Independent, "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets.”
According to Observer.com, "The technology described by Mr. Hawking has already begun in several forms, ranging from U.S. scientists using computers to input algorithms predicting the military strategies of Islamic extremists to companies such as Boston Dynamics that have built successfully mobile robots, steadily improving upon each prototype they create."
Just last year, Stephen Hawking joined Elon Musk, Steve Wozniak, and hundreds of others in signing an open letter unveiled at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina. The letter warns that artificial intelligence could potentially become more dangerous than nuclear weapons.
So did James Cameron predict the future, and will Skynet destroy us all? What are your thoughts on the future of robotics and AI?