Musk and Zuckerberg: Is Artificial Intelligence a Threat to Humanity?
Facebook’s Mark Zuckerberg and Tesla’s Elon Musk sparred last week over what the future of artificial intelligence will look like and whether anyone — most likely the government — should act on its perceived future risks. Zuckerberg argues for the positive side, while Musk warns about the possible threats of artificial intelligence (A.I.).
The debate between the benefits and the possible threats of A.I. has raged ever since humanity first discovered that computers could be made to emulate thought. The concept that an artificial being can express human intelligence, individuality, and independence predates machines entirely, from Jewish myths of the Golem to Mary Shelley’s famous novel, Frankenstein. Modern stories and movies involving artificial intelligence, such as Bicentennial Man (1999), The Matrix (1999), The Terminator (1984), Robocop (1987), and many others, have likewise gone back and forth about what roles machines will take in the future.
But those are fiction and can’t really serve as good examples in an argument. There is no real-life example of machines becoming too smart for our own good.
The Argument
As Elon Musk puts it, the threat posed by artificial intelligence feels so unreal right now that humanity won’t know how to respond to it until it actually happens. Zuckerberg, when asked about Musk’s views, said that he can’t understand people like Musk. He is optimistic about the potential of artificial intelligence and thinks it’s irresponsible to drum up doomsday scenarios. Musk responded by saying Zuckerberg’s understanding of the topic is limited.
Many things can be said about A.I., and most of them are beneficial to humans. Solving labor shortages and removing the need to put flesh-and-blood soldiers on future battlefields are just some of the benefits being promoted. Another idea is that humanity would no longer need to work at all, since machines could do everything themselves and people could simply live and do whatever they want.
But there are people, like Elon Musk, who are concerned that these same capabilities might lead to the downfall of humanity. Stephen Hawking, one of Britain’s most renowned scientists, is one such person. He believes that if machines become capable of maintaining themselves, they will be able to evolve at a much faster rate than humans. Humans, constrained by the slow pace of biological evolution, would lose the competition and be “replaced.” Even with safeguards coded into the machines, like Asimov’s Laws of Robotics, they might find a way to evolve into something their makers never intended. This is considered one of the greatest risks of A.I., according to the textbook “Artificial Intelligence: A Modern Approach” by Stuart Russell and Peter Norvig.
However, some would argue that it isn’t A.I. that would cause problems but rather the humans who would use the technology. These people also argue that an A.I. beyond human control doesn’t sound plausible. A survey of A.I. researchers by MIT Technology Review found that 25% of them believe “superintelligence” will never be possible.
Musk’s Irony
The irony here is that, while Musk points out the threats, his companies develop A.I. technologies. Tesla, for example, is developing an autonomous car, reportedly ready by 2018 — and autonomous machines rely on A.I. technologies. Musk also helped set up OpenAI, with the aim of developing safe artificial general intelligence. But if the dangers are real — the greatest being the extinction of the human race — why develop it at all? Humanity could simply ban A.I. research outright to steer clear of whatever dangers it might present in the future.
It seems that Musk, like any other capitalist, can’t ignore the benefits either. His thinking may be: “If artificial intelligence is going to exist, it might as well come from someone who is very aware of the dangers it presents.” Some people may agree.
Facebook’s “Panic”
Facebook shouldn’t be talking so positively either. Just this week, it shut down an experiment in which chatbots negotiated with each other over trading items. The experiment ended with the bots inventing a shorthand language that only they could understand. Facebook denied that it panicked; it said it shut the experiment down because the bots were behaving in a way the researchers hadn’t intended.