Preparing for an AI Apocalypse Is As Preposterous As Preparing for an Alien Invasion

Several AI industry leaders and researchers signed an outlandish statement this week claiming AI systems pose an existential risk to humanity and urging policymakers to treat that risk with the same urgency they give nuclear war and pandemics. Their doomy predictions are nothing more than catastrophizing, but unfortunately, such hyperbolic claims fan the flames of AI fears and detract from productive discussions about how to ensure AI is developed and deployed in ways that serve society.

The statement is one sentence long and reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” The signatories believe companies are on the brink of developing out-of-control “superintelligent” AI systems with intellect that “greatly exceeds the cognitive performance of humans in virtually all domains of interest,” as Oxford philosopher Nick Bostrom puts it. Most advocates of this view acknowledge that AI systems are not superintelligent yet, but they point to the rapid advancements in the field and the race among a few companies to create “God-like AI” as evidence that superintelligent systems might be created in the near future. They also argue there is a possibility these advanced systems will be hostile and unaligned with human goals, leading them either to actively destroy human civilization in pursuit of their own goals or to passively cause human extinction by outcompeting humans for resources such as land and energy.















They provide no evidence—or even explanation—as to how these dystopian scenarios would come to pass, and they conveniently ignore that they are recycling a familiar playbook of claims about out-of-control AI, claims trotted out cyclically whenever advances in the field make continued progress seem inevitable. Yet decisively refuting hypothetical claims about the future is tricky because something may be possible, even if not probable. Therefore, instead of trying to prove that their doomsday predictions are wrong, it is worth considering whether their reasoning makes sense when applied to other domains.

By their logic, policymakers should immediately elevate the threat of an alien invasion to a global priority too. After all, many expert astronomers and astrophysicists have concluded intelligent extraterrestrial life-forms might exist because the laws of math and physics indicate as much—there are billions of galaxies, each potentially containing billions of stars, meaning conditions suitable for life (including intelligent life) likely exist elsewhere. Data from NASA’s exploratory missions, the discovery of thousands of exoplanets, and continued government funding for exploration further point to the likelihood of intelligent alien life. Moreover, some of the brightest experts have long warned that making contact with these life forms would spell the end for humanity. Famed physicist Stephen Hawking was among the loudest, warning that “advanced aliens would perhaps become nomads, looking to conquer and colonize whatever planets they could reach.”

Fortunately, however, alarmism about intelligent alien life making humans extinct has been met with caution and pragmatism from the scientific and policy communities. Nations haven’t stopped exploring space because of far-fetched, speculative claims that aliens might annihilate humanity. Policymakers are not restricting companies from developing more advanced spacecraft out of concern that doing so is a slippery slope to making unintentional contact with a hostile alien species. And no world leaders are suggesting that nations should prioritize Earth’s extraterrestrial defenses at the same level as pandemic preparedness because of an increase in sightings of unidentified aircraft. Likewise, policymakers should not pump the brakes on AI advancements because of equally offbeat claims that doing so is necessary to save human civilization.

Unfortunately, the fear around AI is only likely to continue. It is critical that policymakers remain clear-eyed amid the hyperbole; otherwise, they risk being distracted from the important policy work of ensuring robust AI innovation and deployment in all the ways that could benefit society, such as improving health care, helping children learn, and making transportation safer. And who knows, AI might even come in handy during an alien invasion.
