AI researcher Melanie Mitchell has weighed in on the superintelligence debate (Mitchell, 2019-10-31). In her New York Times article, she references two other AI researchers who have also tossed in their two cents on this issue: Stuart Russell and Nick Bostrom.
I’m in the process of reading Russell’s latest book (Russell, 2019), and I read Bostrom’s book on superintelligence (Bostrom, 2014) when it came out in 2014.
I have not yet purchased Mitchell’s latest AI book (Mitchell, 2019), but I do know from past experience that she’s an excellent writer and thinker. (I read her book Complexity: A Guided Tour (2009) about a month ago.)
Anyway, Mitchell is taking on the definition of superintelligence used by both Russell and Bostrom (Bostrom, 1998).
According to their definition, a SuperAI could be given an objective (e.g. make paperclips), and it would then slavishly follow that objective to its logical conclusion (e.g. an Earth consisting entirely of paperclips). Notice that this problem has been known for a very long time. It is very similar to the one presented in stories about finding a bottle with a magical genie inside: you get three wishes, but they always turn out very badly.
Mitchell, on the other hand, posits a definition of SuperAI that’s close to the one put forward by her long-time friend and mentor, Douglas Hofstadter. Mitchell and Hofstadter assert that a SuperAI would have to have “common sense” and that, as such, it would not slavishly pursue objectives to an absurd conclusion – such as making paperclips until the Earth becomes a giant ball of paperclips. In other words, a SuperAI (or magic genie) with common sense would not slavishly carry out wishes that result in terrible consequences.
My take on this debate is twofold:
1) To help resolve this debate, we need to hammer out a better definition (or definitions) of superintelligence. (Russell and Mitchell may be willing and able to do this; they are both very reasonable, IMO.)
2) We need to develop a better understanding of what we mean by common sense or, dare I say it, enlightenment. If we’re going to develop a SuperAI with common sense, that common sense may well translate into some form of enlightenment. So, in that sense, we’re striving for SuperEnlightenment.
Finally, we’ve recently seen an example of a game-playing AI – AlphaGoZero – that, once given the rules of the game, learns to play through self-play. This is similar to the “seed-AI” notion that scares Nick Bostrom. A seed-AI begins with a basic capacity to learn, but it then proceeds to become a SuperAI by improving itself – i.e. improving both its capacity to solve problems and its capacity to learn.
AlphaGoZero may seem to do this, but it is only in the context of the game of Go. (Beshears, 2019-09) So, AlphaGoZero is not aware of its own existence, or anything else aside from the game of Go. It’s a boxed-AI.
But, as a boxed-AI, it does slavishly follow the objective programmed in by its creators: defeat your opponent at the game of Go. This objective does not require much in the way of enlightenment.
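AlphaGoZero’s actual machinery (deep neural networks guided by Monte Carlo tree search) is far more elaborate, but the core idea of learning a game from nothing but its rules and self-play can be sketched on a toy game. The example below is my own minimal, hypothetical illustration (not DeepMind’s method): a tabular learner that teaches itself the game of Nim (players alternately take 1 to 3 counters from a pile; whoever takes the last counter wins) purely by playing against itself.

```python
import random

def train_self_play(pile=10, episodes=20000, alpha=0.5, eps=0.2):
    """Learn Nim (take 1-3 counters; taking the last counter wins)
    purely through self-play, given nothing but the rules."""
    Q = {}  # Q[(counters_left, move)] = estimated value for the player to move
    for _ in range(episodes):
        n, history = pile, []
        while n > 0:
            moves = [m for m in (1, 2, 3) if m <= n]
            if random.random() < eps:                      # explore a random move
                move = random.choice(moves)
            else:                                          # exploit current policy
                move = max(moves, key=lambda m: Q.get((n, m), 0.0))
            history.append((n, move))
            n -= move
        # The player who took the last counter wins (+1); in a zero-sum game
        # the sign of the outcome alternates with each earlier ply.
        outcome = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (outcome - old)
            outcome = -outcome
    return Q

random.seed(0)
Q = train_self_play()
# Immediately winning moves are learned reliably: taking the whole pile
# when 1-3 counters remain always ends the game with a win, so these
# values converge toward 1.0.
print(Q.get((1, 1), 0.0), Q.get((2, 2), 0.0), Q.get((3, 3), 0.0))
```

The point of the sketch is only that the program is given the rules and a win signal, nothing else – and yet its play improves. Like AlphaGoZero, though, everything it learns stays inside the box of one game.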
However, humans have designed other games that do call for a degree of enlightenment. The best-known game in this category is the Prisoner’s Dilemma. You can find more details in the reference section below. But, suffice it to say, this game was designed during the Cold War with another, much more serious “game” in mind – the nuclear weapons arms race.
The analogy and the moral of the story were simple enough for most humans to understand: the only way to win a nuclear war is not to fight one in the first place.
So, we want our world leaders, not to mention our SuperAIs, to have both the intelligence and the enlightenment needed to figure out the moral of the Prisoner’s Dilemma game. They would also need the ability to apply those lessons, by analogy, so as to avoid having to learn them by actually fighting a nuclear war.
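The dilemma’s structure is easy to make concrete. Below is a minimal sketch of the standard payoff matrix (sentences in years, so lower is better), showing why narrowly “rational” pursuit of one’s own objective leads both players somewhere worse:

```python
# Classic Prisoner's Dilemma payoffs as (my_years, your_years); lower is better.
# "C" = cooperate (stay silent), "D" = defect (betray the other prisoner).
PAYOFFS = {
    ("C", "C"): (1, 1),   # both stay silent: one year each
    ("C", "D"): (3, 0),   # lone cooperator serves the full sentence
    ("D", "C"): (0, 3),
    ("D", "D"): (2, 2),   # mutual betrayal: two years each
}

def best_response(opponent_move):
    """The move that minimizes my own sentence against a fixed opponent move."""
    return min(("C", "D"), key=lambda mine: PAYOFFS[(mine, opponent_move)][0])

# Defection is the dominant strategy whatever the opponent does...
print(best_response("C"), best_response("D"))  # prints: D D
# ...yet mutual defection (2, 2) leaves both players worse off than
# mutual cooperation (1, 1). Slavishly optimizing the narrow objective
# is exactly what lands both players in the bad outcome.
```

Seeing past the dominant strategy to the cooperative outcome is the kind of “common sense” – or enlightenment – that the debate above is about.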
References by Author
AlphaZero: Deep Learning, Tree Search, Reinforcement Learning, and Self-play
Stuart Russell’s new standard model for guiding the development of machine intelligence
Dr. Bostrom is the founding director of the Future of Humanity Institute and a professor of philosophy at the University of Oxford.
Superintelligence: Paths, Dangers, Strategies
by Nick Bostrom
How Long Before Superintelligence?
by Nick Bostrom
By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue or what have you. It also leaves open whether the superintelligence is conscious and has subjective experiences.
Entities such as companies or the scientific community are not superintelligences according to this definition. Although they can perform a number of tasks of which no individual human is capable, they are not intellects and there are many fields in which they perform much worse than a human brain – for example, you can’t have real-time conversation with “the scientific community”.
Dr. Hofstadter is a professor of cognitive science at Indiana University.
Gödel, Escher, Bach: an Eternal Golden Braid
by Douglas Hofstadter
Dr. Mitchell is a professor of computer science at Portland State University.
Complexity: A Guided Tour
by Melanie Mitchell
Artificial Intelligence: A Guide for Thinking Humans
by Melanie Mitchell
We Shouldn’t be Scared by ‘Superintelligent A.I.’
“Superintelligence” is a flawed concept and shouldn’t inform our policy decisions.
By Melanie Mitchell
Dr. Russell is a professor of computer science at the University of California, Berkeley.
Human Compatible: Artificial Intelligence and the Problem of Control
by Stuart Russell
How to Stop Superhuman A.I. Before It Stops Us
The answer is to design artificial intelligence that’s beneficial, not just smart.
By Stuart Russell
References by Topic
Technological Singularity (definition of a Seed AI)
If a superhuman intelligence were to be invented—either through the amplification of human intelligence or through artificial intelligence—it would bring to bear greater problem-solving and inventive skills than current humans are capable of. Such an AI is referred to as Seed AI because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware or design an even more capable machine. This more capable machine could then go on to design a machine of yet greater capability. These iterations of recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.