
AI Experts

Author Vernor Vinge: Proposing a singular view of technology’s future

When the Technological Singularity arrives, we won’t be able to imagine what the future holds. Just ask author Vernor Vinge.

A five-time Hugo Award-winning author (among various other awards and accolades), Vernor Vinge has been writing and speculating about AI and intelligence amplification for over half a century. One telling anecdote from his storied career concerns a rejection letter he received from legendary science fiction editor and publisher John W. Campbell, Jr.

Early in his career, Vinge had proposed a story about a human being with amplified intelligence, and (as Vinge relates in his short story collection) Campbell wrote back with the comment, “Sorry — you can’t write this story. Neither can anyone else.” Jump forward a few decades, and Vinge delivered a paper to NASA entitled The Coming Technological Singularity, in which he foresaw a moment when artificial intelligence would develop exponentially until it surpassed humanity’s ability to comprehend it: an intelligence so far superior that we can’t even imagine what it would be like. And then what?

To tap into a lifetime of study and thought on what super-intelligent technology could mean for society, we asked Vinge for his thoughts on the dangers ahead in AI development, the ethics of creating sentient beings, and whether the Singularity is truly imminent.

“The most recent event that’s comparable to the technological singularity may be the rise of humankind within the animal kingdom. If you recast the various questions that we have about the rise of super intelligence in terms of that event, the answers are striking.” – Vernor Vinge


ANSWERS: This year marks the 25th anniversary of your paper, The Coming Technological Singularity. If you were to write it today, what might you change?

VERNOR VINGE: I don’t think I would make many changes in the paper. I had first used the term “singularity” in this sense at an AAAI meeting in 1982. During the following 10 years, I had lots of opportunities to bounce the idea off some very smart people. The result was that the 1993 NASA essay is really a very nice overview that I am still happy with.

In the essay, I discussed several paths to superhuman intelligence. From year to year, I might change my emphasis on these: Besides pure machine artificial intelligence, there was intelligence amplification of humans by computers, super-intelligent teams of computer networks plus human users, and pure bioscience improvement of human intelligence.

ANSWERS: Do you think an artificial general intelligence (AGI) singularity is a foregone conclusion, and do you think it’s relatively near, given where we’re at with technology today?

VINGE: So when I say technological singularity, I mean the rise, via technology, of superhuman intelligence. Within the purview of that, the pure AI scenario is just one path. There are several other ways that superhuman intelligence via technology could happen. But one way or another, I think the singularity is the most likely non-catastrophic thing that’s in our future. In fact, probably the only thing that could prevent the singularity is one of the other existential issues that we worry about, like general nuclear war. So, I regard the singularity as very likely, and sooner than most predict: in the absence of some worldwide disaster, I’d be surprised if the singularity does not occur by 2030.

ANSWERS: What are your thoughts on the ethics of creating potentially sentient beings that we, in effect, treat like servants and tools? Do you see an AGI willingly cooperating in that subservient role, especially if it doesn’t really think that’s in its best interest?

VINGE: One good reason for the term “singularity” is that it’s a type of event that is intrinsically harder to predict beyond than other sorts of technological events. In fact, some of the strongest tools for thinking about it may simply be analogies with the past. The most recent event that’s comparable to the technological singularity may be the rise of humankind within the animal kingdom. If you recast the various questions that we have about the rise of super intelligence in terms of that event, the answers are striking.

For instance, our debating the ethical treatment of AIs is a bit like a bunch of chimpanzees trying to decide how to most ethically treat newly arrived humans. It’s not a practicable issue in that direction. As you imply, the super intelligences that show up will be the ones who face the ethical choices.

I think it was the statistician I.J. Good who came up with the notion of a Meta-Golden Rule: To the extent viable, you should treat your subordinates the way you would have your superiors treat you. We can hope that this is a characteristic of minds in the universe. In other words, that superhuman minds should recognize that they themselves are sitting somewhere along the chain of existence. There are, no doubt, disasters that could destroy our AI successors. If the ecosystem still supported humans (even pre-technological humans), we would most likely reinvent the machines. We humans and the biological ecosystem are a kind of safety net for the supers. So “green” arguments apply to the supers as much as to humans! It’s important that the machines recognize that there is a hierarchy of threats in the universe, and a hierarchy of backup solutions. (We can only hope that the supers will catch on to this faster than we humans have.)

Author Vernor Vinge

ANSWERS: What do you see as the biggest danger posed by the race to create an AGI – unchecked competitive development? Government intrusion? Or a super-powerful (and potentially uncontrollable) AI itself?

VINGE: For pure AI, I think Nick Bostrom’s book Superintelligence does a very good job of surveying the pure machine situation. In the last couple of decades there has been some very good thought put into the threats with regard to the pure machine AI singularity. Military arms races are probably a bad thing. Things that result in changes happening very, very fast are also probably bad things, although it’s not quite clear how to prevent such a hard takeoff.

But in thinking about the dangers and making things safer, we should look at the different paths to the singularity. For instance, most people nowadays talk about the pure machine (AI) approach to the singularity. Another path is Intelligence Amplification (IA), where the machine is an intimate aid to the human. If you could have a machine intelligence that extends your subconscious reasoning, or interfaces with the Internet as easily as human memory is accessed by the human mind, then you have the possibility of the singularity actually being something that humans participate in. (Of course, each of the paths to the singularity has its own special threat scenarios. See, for instance, the trailer for H+: The Digital Series.)

A number of years ago, I had a conversation with Hans Moravec, who is a great thinker on these topics. We were talking about many of these issues, and I made my usual assertion about the unknowability of the world beyond the singularity. Hans pointed out that the sweeping onset of superintelligence is only unknowable if you’re not participating. With IA we can participate. He went on to say, “I intend to ride that curve.” To the likes of you and me as we are now, the singularity remains unintelligible and unexplainable. But if we became smarter entities through our participation in intelligence amplification, then we would understand what is going on.

Of course, in the long run, the machine part of the team would probably become more and more the dominant member. That’s not surprising, and in fact there are analogies in our own lives: for example, how could you explain your present day concerns to the zygote that became you?

ANSWERS: Do you foresee a forthcoming resource competition for the energy needed to meet human needs (food, clean water, clean air) versus meeting AI needs (for instance, electricity to generate computing power)? How do you see such conflicts being negotiated?

VINGE: This is one of the major threat scenarios for the pure AI path to the singularity. Bostrom discusses the thinking on this at length, including software design defenses. Safe design is difficult when the new software is functioning beyond our mental horizon. At that point, higher-level design (such as Eliezer Yudkowsky’s notion of “friendly AI”) may be useful. A nebulous defense (not necessarily a disadvantage!) is the “Green” argument I make above.

This is again where thinking about alternative paths to the singularity could be helpful; the IA and Group Mind paths to the singularity look different with regard to the dangers you raise in this question. With IA and Group Minds, we are dealing with entities (mainly us humans) that already have a full-blown set of motives. Is this more dangerous or less dangerous than building a goal system from scratch? One way or another, the top players must learn to play nice – and on a rather short time scale.

ANSWERS: How do you see the concept of artificial intelligence capturing the public imagination in the next decade? We see it advancing already in our daily lives with certain products, movies, etc., but how do you see it evolving? Do you think it will take center stage in the public imagination?

VINGE: In the absence of major events like general nuclear war (or the singularity itself!), I think public interest in the singularity will only increase. One obvious thing is the whole set of technological employment issues. Going forward, what is currently most successful/impactful will drive the conversation. I hope people will think about different paths to the singularity and related choices.


Learn more

In our new series, AI Experts, we interview thought leaders from a variety of disciplines — including technology executives, academics, robotics experts and policymakers — on what we might expect as the days race forward towards our AI tomorrow.
