
AI Experts

Recognizing the limitations of artificial intelligence

Future AI may be super powerful but, as Dr. Joanna Bryson of the University of Bath relates, that still won’t make it a person.

The desire to bestow human life on inanimate material has been a component of our collective imagination since at least the days of Ovid. In his work Metamorphoses he relates the tale of Pygmalion, who sculpted Galatea out of ivory and besought her animation at the hands of Aphrodite. Two thousand years later, we still see that narrative trope playing itself out in stories such as Alex Garland’s Oscar-winning film Ex Machina, where an AI developer creates an autonomous female android named Ava as the key component of a Turing Test.

From marriage to murder, the finales of these and other similar stories range from wish fulfillment to cautionary tale, but the psychological underpinnings remain the same: the aspiration to take something intrinsically non-human (such as ivory or silicon) and humanize it. Yet this raises questions such as, “Are animation and intelligence enough to confer personhood upon an entity?” and “What does it mean to be human?”

One researcher who has been thinking seriously over the past two decades about the ethics and psychology behind artificial intelligence development is Dr. Joanna Bryson, associate professor of Computer Science at the University of Bath. She recounted some of the surprising things she has learned along the way, and also framed for us what she sees as some of the chief areas of focus in the field of AI right now.

“Unbelievably smart people who really know a lot about machine learning and things like that still somehow think that if they just add one more layer they’ll get something that’s like a person. That it has motivations and the phenomenological experience that a person has. Fruit flies have phenomenological experiences more similar to those of humans than any machine we’ll ever build, because an awful lot of phenomenological experience comes out of the way that our perception works as living creatures and the millions of years collectively behind those experiences.” – Dr. Joanna Bryson


ANSWERS: What do you see as the most important work being done in the field of artificial intelligence today?

JOANNA BRYSON: There are two major areas that I would point to. The first is incorporating artificial intelligence techniques into standard systems engineering, and developing secure means to document design for accountability. The other is integrating AI into society, as well as increasing our understanding of how smartphones, connectivity and other technologies are changing the world and our capacity to perceive things we’ve never seen before.

One of the things I do when I give talks is point to architecture and say, “Look, it used to be that anybody who was rich enough could build anything anywhere. That ended a few centuries ago. People figured out that city planning mattered, that where you put a building mattered; they didn’t like it if your building fell over and killed people. Now we have lots of licensing. We have lots of laws. We have processes and civic planning.” I think software engineering has gotten to that point, too. We need to consider how we engage with this technology.

Governments have noticed, and the software industry vaguely knows something is up. The industry notices large fines and things like that, but I think we need to get everybody on board with figuring out how to integrate AI development into standard procedure.

I think governance is a better term for it than regulation: really thinking about, “How do we fit AI into society, and where are the controls going?” One of the things this technology changes is that it empowers individuals so much that it’s no longer clear governments can apply the same kinds of rules. Everything is transnational. You can’t do the standard things you used to do, where a country hosts a company and makes sure the revenue is distributed equitably, that adequate investment goes into the infrastructure, and so on. We don’t have that option, because all of this is happening transnationally.

We have to figure out solutions that are transnational, which may mean that national governments will still have a huge role, but that role is going to be a little different from what it has been historically. They need to figure out new ways to negotiate with each other, and possibly also with other entities. One of the things I’ve been suggesting is that we need to look at building treaties with big transnational companies.

The whole point of a treaty is that you have some kind of peer relationship, and then you negotiate so that everybody knows what behavior is expected and what the costs are for violating those expectations. I think that’s the situation we’re in with the big technology companies right now, and maybe with other organizations and industries I don’t know as well. We need to figure out how to have those kinds of conversations. There are a lot of different topics like this, such as how individuals deal with their own information, how they recognize whether they’re being manipulated, and whether they care. From the micro to the macro, there’s a whole new set of problems.

AI development is coming at a faster rate, but we’re also more empowered. All this technology is making us smarter. I personally can’t believe how fast progress is being made from year to year in terms of how much better informed policymakers and governments are about what’s really going on. I would say the tech industry and academics have been a little slower, in a way, because we were so sure we were right. We don’t usually have to change our self-concept, whereas that’s what governments do: they’re constantly finding out about new problems and dealing with them. That’s their job.

ANSWERS: Is human intelligence the best model for designing artificial intelligence? Or is there a different approach that would be better (perhaps a hybrid approach of different types of natural intelligence, for example)?

BRYSON: Most AI we’re building is nothing like human intelligence. It’s just that we use machines to solve different sub-problems for us. Then people say, “Nobody’s working on real AI.” No, this is real AI. There’s a separate problem when you’re trying to do robot control or whatever, but the worst thing we could do is try to make something exactly like a person. First of all, we’ve got eight billion people, so why? That would just be a more expensive person.

How does that affect human rights? Rights are the means by which we defend each other, and we don’t always do a good job of that. When you’re building a machine, you want to build something you can be accountable for. One fundamental way to reduce the ethical concern is simply to make sure you can back things up; it’s something we know how to do in computer science. Take all the code you’ve written and all the memories you’ve stored, and back them up onto another system. The first system is no longer a unique thing whose physical integrity you really have to worry about. We can reduce our obligation to any individual artifact that way.
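To make that backup argument concrete, here is a minimal sketch in Python. Everything in it (the state dictionary, file names and helper functions) is a hypothetical illustration, not anything from Bryson's own work: once a system's code and memories can be copied bit-for-bit and verified, no single instance is a unique artifact whose physical integrity we must worry about.

```python
import hashlib
import json
from pathlib import Path

def snapshot_state(state: dict, destination: Path) -> str:
    """Serialize an agent's code-plus-memories state and return a content hash."""
    payload = json.dumps(state, sort_keys=True).encode("utf-8")
    destination.write_bytes(payload)
    return hashlib.sha256(payload).hexdigest()

def replicas_match(original: Path, replica: Path) -> bool:
    """Two copies with identical hashes are interchangeable artifacts."""
    digest = lambda p: hashlib.sha256(p.read_bytes()).hexdigest()
    return digest(original) == digest(replica)

# Hypothetical agent state: learned weights plus stored "memories".
state = {"weights": [0.12, -0.4, 0.9], "memories": ["saw red ball at t=3"]}
primary = Path("agent_state.json")
backup = Path("agent_state_backup.json")

snapshot_state(state, primary)
snapshot_state(state, backup)
assert replicas_match(primary, backup)  # neither copy is uniquely precious now
```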

On the one hand that’s important, but on the other hand it exposes us to a different kind of problem: because systems are so replicable, we tend to think, “Let’s just have one best algorithm.” Using just one algorithm where we used to have diverse humans thinking in different ways is fragile. A single algorithm gives people one thing to work around or hack into. It exposes you to very different kinds of threats than old-fashioned human fallibility.
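A toy simulation (added here as an editorial illustration; the numbers and the variant scheme are invented) makes the fragility point: one working exploit against a monoculture compromises every deployment at once, while diverse implementations fail independently.

```python
import random

random.seed(0)
N_SYSTEMS = 1000

# Monoculture: every system runs the same "one best algorithm",
# so a single working exploit compromises all of them at once.
monoculture_compromised = N_SYSTEMS

# Diversity: each system runs one of five independent implementations;
# an exploit crafted against one variant only affects that variant.
variants = [random.randrange(5) for _ in range(N_SYSTEMS)]
targeted = 0
diverse_compromised = sum(1 for v in variants if v == targeted)

print(f"monoculture: {monoculture_compromised}/{N_SYSTEMS} compromised")
print(f"diverse:     {diverse_compromised}/{N_SYSTEMS} compromised")  # ~200
```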

ANSWERS: In all your work on studying intelligence, what has been a surprising thing you have learned that has influenced how you approach the development of artificial intelligence?

BRYSON: The first thing happened when I was an undergraduate. I was sitting in a class about how the brain works, and somebody explained the neural circuit that allows your eyes to track things around you. I understood every step. I could see how it would work, and I said, “Wait, but my eyes are moving, they’re a part of me, it’s my behavior. Yet there’s an automatic circuit, not under conscious control, that is affecting my behavior and altering what I perceive. What does that mean for morality? Who am I? What does that mean about consciousness?” That was the most mind-blowing thing I can remember.

A slow-burning topic for me has been people coming up to me and saying that it would be unethical to turn off a humanoid robot. If you have a robot that is shaped like a person, people tend to feel some obligation towards it. It doesn’t even have to work; it can just be a sculpture made out of motors. Even if it isn’t working at all, and has no code, no knowledge, nothing, people still feel obliged to it. Unbelievably smart people who really know a lot about machine learning and things like that still somehow think that if they just add one more layer they’ll get something that’s like a person. That it has motivations and the phenomenological experience that a person has. Fruit flies have phenomenological experiences more similar to those of humans than any machine we’ll ever build, because an awful lot of phenomenological experience comes out of the way that our perception works as living creatures and the millions of years collectively behind those experiences. That has had a big impact on me.

Another thing for me has been understanding why different parts of the brain have different architectures. There are different kinds of neurons that are connected in different kinds of ways. Why hasn’t biology found one good answer? The answer is there can’t be one good answer because the problem is too hard. You optimize differently depending on the sub-problems you’re trying to solve. That was cool when I learned that.
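A back-of-envelope calculation (an editorial illustration, not Bryson's own example) shows one way architecture follows the sub-problem: for spatially structured input like vision, a small shared convolution kernel needs orders of magnitude fewer parameters than a fully connected layer producing a map of the same size, while for unstructured input the dense layer's generality is the point.

```python
def dense_params(n_in: int, n_out: int) -> int:
    # Fully connected: every input connects to every output, plus biases.
    return n_in * n_out + n_out

def conv_params(kernel: int, channels_in: int, channels_out: int) -> int:
    # Convolution: one small kernel is reused at every spatial position.
    return kernel * kernel * channels_in * channels_out + channels_out

# Mapping a 64x64 grayscale image to 64 feature maps of the same size:
print(dense_params(64 * 64, 64 * 64 * 64))  # 1,074,003,968 weights
print(conv_params(3, 1, 64))                # 640 weights
```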

There’s this big question: “Why are humans the only ones with language?” How can we be the only ones with language when it’s so useful for cognition? I finally realized that culture and language are knowledge we give away; language is actually something called a public good. A lot of people were saying that language was extra-Darwinian, because why else would we give away this information? Darwin and Dawkins showed us that we compete all the time, so why would we cooperate? But if you read past the title of Dawkins’ The Selfish Gene, the whole point is that while genes try to reproduce themselves, sometimes the best way to do that is by cooperating with a bunch of other genes. There are no single-gene organisms. All of us are made out of unbelievable numbers of genes, and we, as organisms, are made out of unbelievable numbers of organisms too. Cooperation is ubiquitous in nature, and it’s never a zero-sum game. It’s all about creating more and more. It’s not all about dividing up the pie; it’s also about how big you can make the pie.

So this is the biggest problem I’m working on now: how and when do people make the pie bigger, and how and when do we divide it up? AI may be part of the answer, because of course we’d like to just focus on growing a bigger pie, but if you divide it up too badly then some people don’t get enough to live, and obviously they will rebel against that. Everyone needs to understand that, and AI hopefully can help us understand and perceive more. If we can help people see what’s really going on in the world – all people, with any amount of power – maybe we can all find better solutions. At least I hope so.
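As an editorial footnote, the pie dynamic Bryson describes has a standard formalization in the public-goods game. The sketch below (invented numbers, not from the interview) shows both halves of her point: pooling contributions grows the pie for everyone, yet each individual does best by free-riding, which is exactly why how the pie gets divided matters.

```python
ENDOWMENT = 10.0   # what each player starts with
MULTIPLIER = 1.6   # pooled contributions grow by this factor (the pie grows)
N_PLAYERS = 4

def payoffs(contributions):
    """Each player keeps what they didn't contribute, plus an equal
    share of the multiplied common pool."""
    pool = sum(contributions) * MULTIPLIER
    share = pool / len(contributions)
    return [ENDOWMENT - c + share for c in contributions]

print(payoffs([ENDOWMENT] * N_PLAYERS))                # [16.0, 16.0, 16.0, 16.0]
print(payoffs([0.0] + [ENDOWMENT] * (N_PLAYERS - 1)))  # [22.0, 12.0, 12.0, 12.0]
# Full cooperation leaves everyone better off than their endowment,
# but the free rider does best of all -- the division problem.
```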


Learn more

In our new series, AI Experts, we interview thought leaders from a variety of disciplines — including technology executives, academics, robotics experts and policymakers — on what we might expect as the days race forward towards our AI tomorrow.

