Elon Musk’s AI Warning Has Never Sounded More Dire

February 8, 2021 • Shannon Flynn

If there’s one thing that’s certain about Elon Musk, it’s that he’s outspoken. As bold and brave as the technology world might be, few tech leaders operate with Musk’s level of public candor, which can sometimes embarrass even those closest to him.

But when Musk isn’t mobilizing against pronouns on Twitter, he’s raising awareness of the many potential dangers of artificial intelligence.

Is Musk right? Is AI really “more dangerous than nukes,” as he claims? Let’s dive into what the Tesla-SpaceX-Boring Company mastermind has to say about this quickly advancing frontier in modern technology.

Musk on the Record

Musk has spent years on the record as one of the most vocal critics of unchecked AI development. This has, more than once, placed him on a global stage alongside the leaders of nations.

In September 2017, in an address to Russian students, President Vladimir Putin weighed both the opportunities and the threats of artificial intelligence. AI, Putin said, “…comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Musk was quick to pounce. At first he tweeted simply, “It begins…” before later elaborating:

“China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.”

Musk is also the one who famously compared building AI to “summoning the demon.” In other instances, he’s declared artificial intelligence a more urgent threat to the survival of humanity than nuclear weaponry.

In a documentary called “Do You Trust This Computer?”, Musk warned that autonomous intelligent machines could unleash “weapons of terror” and become “an immortal dictator from which we would never escape.”

“Nothing will affect the future of humanity more than digital super-intelligence,” Musk says.

Is he right? Are all these Elon Musk AI warnings right on the money, or are they more “embarrassing” hot air?

Are Other Leaders as Pessimistic as Musk?

Elon Musk is far from alone in sounding the early warning against unregulated, unlimited and unchecked research into machine super-intelligences.

Ray Kurzweil — a futurist, prolific author and pioneer in multiple technologies — agrees with Musk that careless AI development could pose an “existential threat” to humanity. Unlike Musk, however, Kurzweil is optimistic that humanity will learn to navigate any “difficult episodes” and be better off for it.

Speaking to the Council on Foreign Relations in Washington, D.C., Kurzweil reminded his audience that technology has “always been a double-edged sword.” “Fire kept us warm, cooked our food and burned down our houses,” he added wryly. But nobody would dream of turning back the clock on the discovery of fire.

“My view is not that AI is going to displace us. It’s going to enhance us. It does already,” Kurzweil offered.

But is he right?

Is AI Like ‘Summoning the Demon’?

Elon Musk’s warnings about AI stem from a simple observation — human beings are the dominant form of life on earth because of our superior brains.

So, what happens when an artificial brain vastly outstrips our own in capacity and capability? Moreover, will this artificially intelligent being see humanity as a partner or a threat? A silicon “creature” meeting the definition of super-intelligence could quickly spiral out of our control.

There’s another way to look at the potential problem, too. As far as we know, it’s far easier to program a machine with human-like logic and anticipatory faculties than it is to imbue that same machine with human values like empathy.

It’s also reasonable to assume that a digital intelligence with better-than-human capacity for reason would also have a better-than-human sense of self-preservation. What measures would such an intelligence take to ensure it couldn’t be shut down after achieving awareness?

There’s probably no more serious element of risk than the sheer pervasiveness AI could achieve in the coming years. Modern artificial intelligence isn’t possible without machine learning — that is, without training computers on huge datasets.
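
To make “training” concrete, here is a minimal sketch in Python. The library (scikit-learn) and its small bundled digits dataset are our own illustrative stand-ins, not anything Musk references; real systems differ mainly in scale, not in kind.

```python
# A toy example of machine learning: the model learns to recognize
# handwritten digits from labeled examples, not from hand-written rules.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 labeled 8x8 images of handwritten digits

# Hold out a quarter of the data to test on examples the model never saw.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)  # a simple, classic classifier
model.fit(X_train, y_train)                # "training": fitting parameters to data

print(f"Accuracy on unseen digits: {model.score(X_test, y_test):.1%}")
```

Swap the toy dataset for billions of real-world examples and the simple classifier for a far larger model, and you have the systems whose reach the next paragraphs describe.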

Big data and the Internet of Things are now global technologies. From personal computing to industry-scale activities, the world is host to billions upon billions of internet-connected devices. There are supercomputers in our homes and pockets, cameras everywhere on earth, radios and logic boards in even the simplest appliances, and nearly ubiquitous high-speed wireless internet connecting our communication networks and supply chains.

Each node in this gigantic web of connectivity is a potential “eye” or “ear” for an artificial intelligence. It’s not outside the realm of possibility that a poorly controlled digital super-intelligence could turn this infrastructure — essentially a global surveillance network for all human activity — against its creators.

Is this what Musk’s “immortal dictator” could look like? Are we building a silicon god that can reach into and control any facet of human life on earth?

Some people even argue that the “elder gods” or “leviathans” of Lovecraftian fiction describe not malevolent creatures returning to earth so much as the coming of a digital god whose tendrils would span the globe.

Lovecraft himself apparently did not believe that humankind could “live according to reason” in a “machine-driven” age. According to Lovecraft:

“Men can use machines for a while, but after a while the psychology of machine-habituation and machine-dependence becomes such that machines will be using the men — modelling them to their essentially efficient and absolutely valueless precision of action and thought.”

Don’t Forget the Counterarguments

Elon Musk believes AI will overtake humankind “in less than five years.” That places us at the precipice of fundamental changes in the technological landscape.

Philosopher and scientist Grady Booch, meanwhile, offers a perspective closer to Kurzweil’s than to Musk’s. “I don’t fear super-intelligent AI,” Booch declared during a TED Talk. Part of the reason, he says, is that “we can just turn them off.”

“‘Super-knowing’ is very different from ‘super-doing,’” Booch explains. “We’re not building AI that can control the weather, that can direct the tides, or that can command us chaotic and capricious humans.” Moreover, he argues, any artificial intelligence that did exist would have to compete with human economies for resources.

Booch’s most important takeaway is not that the danger isn’t real. It’s the firm hope that humanity will “teach” — not “program” — AI to share our values.

Is Elon Musk’s AI warning well-founded? Maybe. It’s clear that both Booch and Musk understand the opportunity AI represents. The difference is that one man takes a far more cynical view of humanity than the other. The great promise of AI is that it can mirror humanity in the ways that matter. That, quite obviously, is also the greatest worry.
