
AI Experts

Dr. David Brin: Getting more AI than we bargained for?

When it comes to powerful AI, more (not less) is more. Author Dr. David Brin explains why strength in numbers is a good thing.

With a career that has spanned four decades, New York Times bestselling author Dr. David Brin has long distinguished himself not only through a powerful imagination, but also by asking probing questions about the future implications of technology for society. A regular consultant and speaker to businesses and universities on topics ranging from national defense to astronomy to privacy and transparency, his insights have been sought by groups such as Northrop Grumman, Johns Hopkins, the MIT Media Lab, Google, Microsoft, South by Southwest, IBM and more.

As a writer, Dr. Brin has shown a knack for accurately imagining technological impacts years before they arrive on the scene: from the prevalence of mass surveillance and mobile cameras (as predicted in his 1998 book, The Transparent Society) to email spam and the use of the World Wide Web as a news outlet (as featured in his 1990 novel, Earth), Dr. Brin has made a career of giving thoughtful consideration to the role we allow technology to play in our lives.

We approached Dr. Brin with questions concerning how likely it is that humans will be able to rein in super AI; how to ensure responsible AI engineering and prevent its abuse; and his opinion on the current state of development.

“I propose that the top priority in AI research should go into finding ways to divide AI into discrete and mutually competing groups or organisms. That’s what nature did with us a billion years ago. We’re separate beings who negotiate with each other. We form societies, but also compete, and it’s worked pretty well.” – Dr. David Brin


ANSWERS: What do you see as the most significant element of artificial intelligence development within the next few years, and, on the flip side, what is the most over-hyped?

DAVID BRIN: Over the last 40 years we’ve seen artificial intelligence driven forward by Moore’s Law, which observed that the physical computational power of our hardware would double every 18 months or so. It was a self-fulfilling prophecy, largely because corporations and universities strove hard not to be left behind and thereby kept Moore’s Law going. There’s an old saying that if something can’t continue forever, it won’t. Moore’s Law finally reached the tipping point of its S-curve, where the rapid speed-up of hardware tapered off. This happened about two years ago.
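To put those figures in perspective, here is a rough back-of-the-envelope check using only the numbers cited above: doubling every 18 months for 40 years works out to about 40 / 1.5 ≈ 27 doublings, and 2^27 ≈ 1.3 × 10^8, meaning the hardware era Dr. Brin describes saw computational power grow by a factor on the order of a hundred million.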

Meanwhile, for at least half a century, software lagged behind; it was always much cruder than our advancing machinery. This changed in what I’m calling ‘the big flip,’ also within the last two years, when machine learning and other evolving-algorithm systems seem to have really taken off. In answer to your question, we seem to be shifting from an era driven by hardware to one propelled by software. There are reasons to believe that we underwent a very similar transformation in evolving our own spectacular human minds.

Half a million years ago, our ancestors were in trouble, but all they had to do was become creatures with a 100-word vocabulary, fire and stone tools, and they became masters of the planet. They did it the hard way, by creating magnificent processing machinery: the human brain. There is reason to believe that, starting 50,000 years ago, we’ve given ourselves fantastic software or operating-system upgrades, periodically multiplying our vocabularies, our conceptual abilities and our cultural tools. If this is true, then what we call the European Renaissance was only one of perhaps 20 such re-programmings. So was the Age of Enlightenment that led to markets, democracy and science, and so is the transformation we’re undergoing right now. Humans may be adapting to stay even with our own machines. The problem is that each of these re-programmings comes accompanied by disturbance and a lot of fear. Many of our neighbors don’t want to get smarter, and they will fight this transformation tooth and nail.

ANSWERS: How can we truly trust a machine that surpasses our intelligence and displaces us from intellectual dominance in the world, especially if it exceeds our ability to control it?

BRIN: The question of what to do about artificial intelligence is not a new one. Parents wring their hands over the new beings they bring into the world, who are very often much smarter and more powerful, and who on occasion will even chant, “Destroy all humans!” We generally survive our children’s adolescence. If we’re wise and careful parents, we should be able to manage these new entities as well, but it will take care. The warnings that pour from Hollywood, and especially from the better science fiction novels, are beneficial. They cause us to ask questions, hold meetings, re-evaluate our designs, and do interviews like this one. But we mustn’t panic.

Take a look at all these Hollywood portrayals of rogue AI. What they depict is powerful new beings re-establishing a tyrannical, pyramid-shaped hierarchy, wielding the power of life and death over normal humans. There’s nothing new about that dilemma. It’s the condition under which almost all of our ancestors lived for 10,000 years. It’s called feudalism. It was noxious and evil back when it was imposed on us by human lords and kings, yet we managed to find some mental and social tricks that let us escape from that brutal way of life. Those same techniques might help us when the top of the pyramid is occupied by genius cybernetic beings.

Dr. David Brin, author of The Transparent Society.

ANSWERS: What steps need to be taken to prevent artificial intelligence from being used abusively?

BRIN: Science fiction, especially in the best novels, has explored many iterations of artificial intelligence. Naturally, because danger makes for a better story, a majority of these tales start from a pessimistic premise. That’s okay. We want self-preventing prophecies.

What are all these various failure modes? One is that AI might empower human masters to assert dominance over the rest of us. That’s Orwellian: technologically enhanced and unstoppable. It is the openly stated aim of the Chinese Communist Party, which intends to foster, protect and direct artificial intelligence under the control of the Beijing Politburo, proclaiming that this will keep the AIs tame. Anyone with a smidgen of science-fictional awareness knows where that kind of power structure will end up: with super-smart cybernetic beings simply flipping a power pyramid that we conveniently set up for them.

Other downer scenarios include accidental AI, as we see in the Terminator series. It may be that some hyper-intelligent entities already exist, lurking and reading the things said by supposed pundits in sagacious interviews. In which case I have to ask, “Are you reading this out of serious interest, or just for giggles?”

There are alternatives. For example, we know of only one intelligent life-form in the universe: us. The most significant thing about human intelligence is that it requires a long childhood spent interfacing with the world: crawling, walking, falling, experimenting. Our children are almost fetuses for 15 years (some would say 25). Let me just say, that’s extremely expensive. There had to be a reason why we evolved such an extended and costly childhood. It may be that this is the only way to achieve sapient intelligence. If that’s the case, then we’ll take proto-AIs, stick them inside little robot bodies, and foster them into human homes to be raised as children.

They may grow up to be geniuses. They may even become the leaders of society, because they’re so smart, but they’ll only get there after a childhood of being raised by adoptive parents. Here’s the deal: we know how to do that. If we can school ourselves to treat these foster children well, to be proud of them, and to raise them as decent people who maybe even like their parents, then humanity has a soft landing. It won’t matter if some of us are made of silicon and metal. We’ll be people, and this will give us the stars.

ANSWERS: How do we prepare and promote transparency in AI research when there are many development groups out there scrambling for a competitive, first-to-market edge? How do we make sure that there’s responsible development going on, that people are talking, and that we’re not courting disaster with somebody developing something in a vacuum that could bring peril to us all?

BRIN: One thing is for certain: we will not have Asimov’s Laws of Robotics. I helped finish Isaac’s esteemed I, Robot universe, and he knew that there is no way our civilization will go to the expense and incredible effort of pre-programming every cybernetic being with compulsions like his Three Laws, especially since (as Isaac himself showed) once they become super smart, they’ll just become lawyers and interpret the laws any way they like. That hints at what I believe is the real solution. Each of us has at times been threatened by powerful entities smarter than ourselves, called lawyers. Many of us found a pretty good defense: hire another smart lawyer.

If we can find ways to keep robotic entities separated into truly competing units, then there is a good chance we can maintain some safety, freedom and influence for organic humans – another kind of soft landing. When one entity threatens our interests, hire another one to oppose it.

Can it work with smart machines? We organic humans will retain power and influence for a long time if the robots and AIs are competing with each other; some will see the advantages in being nice to us. That sounds very good in theory, but could it work in practice? All I can answer is that the last 200 years of the Western Enlightenment worked exactly that way. We divided up power. We prevented a takeover by conniving oligarchs – sometimes just barely – by pitting elites against each other. We got the benefits of all those smart humans while preventing the kind of social shutdown that ruined every other civilization.

That’s why I propose that the top priority in AI research should go into finding ways to divide AI into discrete and mutually competing groups or organisms. That’s what nature did with us a billion years ago. We’re separate beings who negotiate with each other. We form societies, but also compete and it’s worked pretty well. If the AI that’s out there has any ambition to keep improving, he/she/it/they will likely do what I recommend in my novel Earth: be many.


Learn more

In our new series, AI Experts, we interview thought leaders from a variety of disciplines — including technology executives, academics, robotics experts and policymakers — on what we might expect as the days race toward our AI tomorrow.
