Will humans have the wisdom to manage artificial intelligence effectively?

Sep 10, 2017

Artificial intelligence of all kinds is becoming ubiquitous, but its explosive growth comes with big challenges.

Recently, for example, Elon Musk and 116 founders of robotics and AI companies signed an open letter to the UN asking the organization to find a way to restrict lethal autonomous weapons.

So, can humans design a safe future living alongside artificial intelligence?

Max Tegmark, a professor of physics at MIT and author of the new book "Life 3.0: Being Human in the Age of Artificial Intelligence," says that to make sure humans stay in charge, we first need to envision what kind of future we want and steer artificial intelligence in that direction.

“The most interesting thing [about AI] is not the quibble about whether we should worry or not, or speculate about exactly what’s going to happen, but rather to ask, 'What concrete things can we do today to make the outcome as good as possible?'” Tegmark says.

“Everything I love about civilization is the product of intelligence,” he continues. “If we can amplify our own intelligence with AI, we have the potential to solve all of the terrible problems we’re stumped by today and create a future where humanity can flourish like never before. Or we can screw up like never before because of poor planning. I would really like to see us get this done right.”

Tegmark arrived at his Life 1.0, 2.0 and 3.0 classification by asking an idiosyncratic question: "What is life itself?"

“What’s special about living things to me isn’t what they are made of, but what they do,” he says. “I’m just a blob of quarks, like all other objects in the world. ... So, I define life more broadly as simply an information-processing entity that can retain its complexity and replicate.”

Bacteria, the prime example of Life 1.0, can’t learn anything during their lifetimes; they can only adapt across generations through evolution. So, “when a bacterium replicates, it’s not replicating its atoms, it’s replicating information — the pattern into which its atoms are arranged. So, I think of all life as having hardware that’s made of atoms and software that’s made up of bits of information that encode all its skills and knowledge.”

Life 2.0 is humans. We’re stuck with our evolved hardware, but we can learn and change by essentially choosing to “install new software modules,” Tegmark says. “If you want to become a lawyer, you go to law school and install legal skills. If you choose to study Spanish, you install a software module for that. I think it’s this ability to design our own software that’s enabled cultural evolution and human domination over our planet.”

Life 3.0 is life that can design not just its software, but also its hardware. This type of life can “become the master of its own destiny by breaking free from all evolutionary shackles,” Tegmark says.

As for AI’s potential to transform human existence, Tegmark says it’s up to us to ensure this happens in a positive way, because "if you have no clue what sort of future you’re trying to create, you’re very unlikely to get it.”

“How do we take, for example, today’s buggy and hackable computers and transform them into robust AI systems that we really trust?” he asks. “Maybe it was annoying the last time your computer crashed, but [imagine] if that was the computer controlling your self-driving car or your nuclear power plant or your electric grid or your nuclear arsenal.”

Also, looking further ahead, how do we make computers understand human goals? “As we know from having kids, them understanding our goals isn’t enough for them to adopt them,” he explains. “How do we get computers to adopt our goals? And how do we make sure they keep those goals going forward?”

Tegmark says that while he rolls his eyes at a lot of AI movies these days, “‘2001’ beautifully illustrated the problem with goal alignment. Because HAL was not evil, right? The problem with HAL wasn’t malice. It was simply competence and misaligned goals. The goals of HAL didn’t agree with the goals of Dave, and too bad for Dave. ... We want to make sure that if we have machines that are more intelligent than us, they share our goals.”

Tegmark disagrees with those whose fear of AI’s potential to wreak havoc on humanity leads them to want to hit the brakes on the entire idea. “I don’t think we should try to stop technology. I think it’s impossible,” he says. “Every way in which 2017 is better than the Stone Age is because of technology. Rather, we should try to create a great future with it by winning this race between the growing power of the technology and the growing wisdom with which we manage it.”

This presents humanity with a great challenge, however.

“We’re so used to staying ahead in this wisdom race by learning from mistakes,” Tegmark says. “We invented fire, and — oops. Then we invented the fire extinguisher. We invented cars, screwed up a bunch of times, and we invented the seat belt and the airbag. But with more powerful technology, like nuclear weapons and superhuman AI, we don’t want to learn from mistakes. That’s a terrible strategy. We want to prepare, do AI safety research, get things right the first time, because that’s probably the only chance we have. I think we can do it if we really plan carefully.”

This article is based on an interview that aired on PRI’s Science Friday with Ira Flatow.
©2017 Science Friday