Are Artificial Intelligence and Nanotechnology Threats to Civilization?

By Alejandro Zorrilal Cruz [Public domain], via Wikimedia Commons

The posts on this blog about artificial intelligence and nanotechnology have rarely, if ever, depicted them in a bad light. As almost all news and articles published on them attest, these technologies have mostly been beneficial to mankind. However, a recent report says that artificial intelligence and nanotechnology could be threats to civilization. These technologies have been listed alongside nuclear weapons, supervolcano eruptions, and ecological catastrophe as “risks that threaten human civilization.”

Global Challenges Foundation Report

The report comes from the Global Challenges Foundation, an organization created to raise awareness of global catastrophic risks. Titled “12 Risks that Threaten Human Civilisation,” the report aims to promote global collaboration in addressing these risks and their impacts. The risks presented in the report are classified into four categories: Current Risks, Exogenic Risks, Emerging Risks, and Global Policy Risks. Five risks are listed under Current Risks: extreme climate change, nuclear war, ecological catastrophe, global pandemic, and global system collapse. Under Exogenic Risks, two risks are mentioned: major asteroid impact and the eruption of a supervolcano. For Emerging Risks, there are four: synthetic biology, nanotechnology, artificial intelligence, and uncertain risks (as redundant as that may sound). Meanwhile, the only Global Policy Risk cited is future bad global governance.

Image credit: Global Challenges Foundation (http://globalchallenges.org/)

Why Is Artificial Intelligence a Risk?

The report, written by Dennis Pamlin and Stuart Armstrong, considers AI a risk because future machines and software with “human-level intelligence” may create new and dangerous challenges for mankind. The report does not disregard the usefulness of AI, acknowledging that it can also help combat risks and make things easier for people. Still, it claims that “such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations.”

The report also suggests a rather sci-fi-movie-like scenario, claiming that “if these motivations do not detail the survival and value of humanity, the intelligence will be driven to construct a world without humans.”

These foretellings certainly don’t sound like immediate concerns, so few readers are likely to be worried upon reading them; some may even react with sarcasm. The report, however, includes more “realistic” predictions that can be considered immediate concerns. Specifically, it forewarns that “economic collapse may follow from mass unemployment as humans are replaced by copyable human capital.” Additionally, the use of AI in arms races and warfare is considered very likely.

Why Is Nanotechnology a Risk?

For nanotechnology, the report likewise acknowledges its usefulness, as “atomically precise manufacturing” is expected to yield a wide range of benefits. However, the report warns that nanotechnology could produce “new products – such as smart or extremely resilient materials – and would allow many different groups or even individuals to manufacture a wide range of things.” The risk here, accordingly, is the possibility of producing “large arsenals of conventional or more novel weapons.”

Image credit: Global Challenges Foundation (http://globalchallenges.org/)

Are These Enough to Be Considered Risks?

Taking everything into consideration, the reasons cited by the Global Challenges Foundation report, for nanotechnology and artificial intelligence in particular, are only remotely alarming. They do not sound as serious as they are presented to be. The benefits and advantages brought about by nanotechnology and artificial intelligence still far outweigh the risks. At this point in time, concerns over the use of these technologies in warfare and their potential for adverse economic impact are hardly substantiated. These risks remain in the realm of theory and science fiction. Unless actual demonstrations of the risks are shown, the claims about AI and nanotechnology’s dangers, as reported in The Guardian, are alarmist at worst and sensational journalism at best.