Could artificial intelligence get out of our hands? Will we end up in a world dominated by robots? Are we dumber than computers? Is this the end?
The debate has been unusually heated this year, so we decided to find out what the most brilliant contemporary minds in technology think, and this is what we found.
One of the most pessimistic views comes from Tesla co-founder Elon Musk, who runs an electric-car empire and is on a mission to colonize Mars. He says he has been exposed to the most cutting-edge AI technology that exists and thinks people should be worried about it: "I keep warning people about the problem, but until we see robots on the streets killing people, they won't know what to do, because it seems so ethereal to them."
What to do, then? Everything suggests that Musk's proposal boils down to creating a set of regulations and restrictions for technology developers. However, he still has no concrete idea of how to move forward in that direction; for now, the important thing is to stay as informed as possible about the issue and its possible repercussions.
In the other corner of the ring is Mark Zuckerberg, the founder of Facebook, with a much more relaxed view. And the boxing analogy is not random: for the last few months, Mark and Elon have faced off through social networks and the press over their very different visions of the future of AI.
While one fears an apocalyptic future dominated by machines, the other considers that fear alarmist. "I'm optimistic. I believe we can build things and the world will get better with them, and with Artificial Intelligence systems in particular I am really optimistic ... I think people who try to drum up these apocalyptic scenarios are very negative, and in some ways I think that's pretty irresponsible," the Facebook founder said in a live interview.
In that sense, he believes the underlying problem lies in how humans develop and apply technology, which in his view does not justify creating bodies to restrict or contain the advance of AI. That stance did not sit at all well with Tesla's co-founder. His answer? "I've talked to Mark about it, but his understanding of the subject is limited."
Artificial Intelligence has undoubtedly been the issue of the year. Not for nothing did Stephen Hawking surprise the world by announcing his idea for preventing this technology from getting out of hand. It stems from his fear that technological development will advance to the point where we end up with overdeveloped machines. And why would they turn aggressive toward us? According to Hawking, because aggression is part of our nature, so it would not be strange for us to pass it on to the robots.
That is why his proposal is based on redefining what we understand as government and taking it to a global level. The idea is that an organization with global reach would be able to control the progress of AI (and keep it from ending up killing us all). Although, of course, that could give way to a planetary tyranny if things got out of hand, and Hawking knows it.
It may be a very revolutionary idea, but the point comes across clearly: the physicist believes AI could become a danger to humanity, and therefore we should do something about it. Very much in line with what Elon Musk proposes, but entirely different from the optimistic vision of the future held by Mark Zuckerberg.
Many experts have reflected on how threatened we should feel by AI.
Bill Gates had already pointed out in 2015: "At first the machines will do a lot of jobs for us and will not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence will be strong enough to be a concern."
Fabio Gandour, head of research at IBM, says: "I'm not afraid of machines; I'm afraid of people. With a machine or a program, if you don't like it, you turn it off. Not with people. I know how to make machines learn, and I know how to make them work so that that intelligence favors me. I'm more afraid of people who use that capability unpredictably."
For Jorge Baier, an academic in Computer Science at the UC School of Engineering, the risk has to do with the misuse the technology could be put to, such as offensive ends. He does not see a threat in the idea that "one day we write a program, the program gets out of control, starts doing things we don't want it to do, and starts destroying things."
Francisco Watkins, of the Department of Electrical Engineering at Usach, shares that view and adds that AI "can emulate creativity, but it has a limit. Man adds an emotional component that makes certain unexpected things happen."
That brings us, finally, to the vision of Kevin Kelly, founder of Wired and an expert in digital culture, who says that AI should be embraced rather than feared, and that it will lead us to a second industrial revolution. He explains that AIs are very specific, each fulfilling one function to facilitate a process, so it would not make sense to create machines with human-like intelligence and emotions. He also dismantles what he considers myths that lead us to think machines could surpass or even enslave us.
Which side are you on? Do you think Artificial Intelligence could get out of control?