
AI Experts

WEF’s head of AI: Bringing leaders together around socially responsible AI

From professors to prime ministers, Kay Firth-Butterfield of the World Economic Forum is ensuring global leaders are talking about socially responsible AI development.

Given the massive potential for social, economic and even military disruption that artificial intelligence is poised to usher in, it is imperative that key influencers from governments and the private sector work together to examine the ramifications. Unchecked AI development can create significant problems; all interested parties must embrace a certain amount of transparency and collaboration on the path towards artificial general intelligence, because once that genie is out of the bottle, there is no putting it back.

Kay Firth-Butterfield of the World Economic Forum (WEF) has been actively creating a variety of projects, programs and councils that give leaders space to discuss the ethics and impact of AI development. As the Head of Artificial Intelligence and Machine Learning at the Center for the Fourth Industrial Revolution at the WEF, Kay has a keen vantage point at the center of the global discussion on this emerging technology. She shared her thoughts on the blind spots that developers need to contend with, and on what excites and concerns her about where AI is going.

“One of the projects that I’m doing here at the World Economic Forum is a project with AI professors from around the world where we are trying to ensure that every student’s AI curriculum includes some training on ethics and the social impact of AI. The intent is that those people who are being asked to develop things or who are setting up their own businesses using AI have at least some background training on the ethical implications.”

Kay Firth-Butterfield, World Economic Forum


ANSWERS: What are some of the ethical dangers that artificial intelligence presents and what steps need to be taken to avoid those?

KAY FIRTH-BUTTERFIELD: I think that everybody would agree that we need to promote trust in artificial intelligence. We really want to increase the use of artificial intelligence because there are so many things that we’re going to be able to do with AI that will benefit humanity and the planet; think about some of the work that’s being done in better understanding x-rays, for example, or some of the work that is being done in cancer treatment.

At the same time, ethical issues are coming up, and we’ve started to see more of that recently. That’s understandable: as development rolls out, you’re bound to come across more of the problems. I tend to put the problems in four buckets:

  1. Bias
  2. Transparency
  3. Accountability
  4. Privacy

To give you an example, you actually need some bias in some data sets. If you’re using AI in medicine and you want to solve difficulties for people with Korean genetics, then you’re going to need a bias towards data with Korean genetics.

Generally, though, bias that comes in either through the data or through the choices made by the person creating the AI can lead to some deep ethical challenges. You only have to look at the United States, for example the ProPublica article on the problems with criminal justice sentencing.

ANSWERS: Do you think there will be a growing demand for greater explainability, particularly why an AI decided what it decided? What are the legal and compliance questions you see that raising for organizations?

FIRTH-BUTTERFIELD: The short answer is yes. If you look at the GDPR, the massive piece of legislation around privacy and data that the EU put into effect, there’s an accountability and transparency piece to it. There’s also a need, as we start seeing what I call Embedded AI coming onto the market, for that transparency.

For instance, consider autonomous vehicles: if they cause accidents, then lawyers are certainly going to need to understand what was going on in the AI brain and how it was coded to come up with its answers, because otherwise we can’t attribute liability. The more we see AI being used, the more we’re going to need to attribute liability. If an AI reads X-rays incorrectly, how do you attribute liability? When you add in the hardware components (i.e., the car or the robot), who is liable? The hardware developer? The software developer? If it’s an autonomous system, is liability determined by the way the driver has taught it, or by the owner of the robot who has taught it?

ANSWERS: Are there any ethical blind spots in AI development that have you concerned? What are researchers and developers ignoring perhaps at their (and our) peril?

FIRTH-BUTTERFIELD: There are some things that people have done as research experiments that I would say we need to think about carefully. For example, research that was done to see whether an AI could identify people who are gay by their facial features. Now, it could be argued that a government that didn’t like gay people might create its own technology in this space. Equally, it could be argued that maybe that’s a research experiment we didn’t need to do, because of the ethical ramifications, especially in countries where you could be killed for being gay. If you’re the researcher involved, it is important to think about that.

Similarly, I was at a conference in China where one of the Chinese AI business CEOs created a welcome to the conference in the voice of President Trump. As part of it, he put words in President Trump’s mouth, in Chinese, on the video.

All of that was done using AI, and it looked and sounded exactly as if President Trump were doing it. It was totally convincing. The speaker who came immediately after asked how we would feel if that had been done to Xi Jinping. There are some serious ethical things that we need to be thinking about if we’re going to produce AI that can fool the public in that way.

Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the Center for the Fourth Industrial Revolution at the WEF

ANSWERS: What has you excited about all of the AI development that’s going on?

FIRTH-BUTTERFIELD: The things that have me excited are the way in which we are going to be able to use AI in the field of precision medicine so that we will be able to, with the help of artificial intelligence, know more about individuals and treat them individually rather than tossing a drug at them and hoping that it works in that population.

Another thing that excites me is the environmental benefits that are possible, such as DeepMind cutting Google’s data center cooling bill by 40%. That’s huge. Things like that are exciting. If you think about it in the context of the environment, AI could help with things like oil flows. We know that you can get better use out of fuel if you optimize how it is used. Well, if you combine the Internet of Things and AI, then you can optimize that use on a second-by-second, minute-by-minute basis, and that helps us use less fuel. There are some absolutely fantastic things coming down the pipeline (if you’ll excuse the pun) that AI can do, both for the human condition and for the environment.

ANSWERS: With many players across the globe exploring AI, what needs to be done to ensure that development is conducted in an orderly and responsible fashion and there are no rogue developments that could be problematic?

FIRTH-BUTTERFIELD: One of the projects that I’m doing here at the World Economic Forum is a project with AI professors from around the world where we are trying to ensure that every student’s AI curriculum includes some training on ethics and the social impact of AI. The intent is that those people who are being asked to develop things or who are setting up their own businesses using AI have at least some background training on the ethical implications.

One of the reasons why the Forum is perfectly placed to do this is that we have a global reach. For instance, we’re working with Carnegie Mellon University on coursework, and what we’re developing with CMU might look a little different in another country, but it would retain the same moral compass. These AI ethics courses need to be culturally responsive whilst also teaching the same basic components.

My work at the Forum also includes helping governments look for governance policy mechanisms around AI. It doesn’t necessarily have to be regulation, because regulation takes time; rather, the focus is on rethinking how governance could look. Another of my projects is with a European government, and together we will co-create best-practice guidelines for their own procurement of artificial intelligence. Our hope is that it will ring a bell for everybody in their jurisdiction to say, “This is the government’s tolerance level. This is what they want to see when they’re buying AI, and so, therefore, this is the level that they would be expecting us to be working to.” A project here at the Forum is not just work for one country; the idea is that once we have successfully piloted it in one country, it will be offered to other countries for them to pick up and adopt.

Another piece of work that I’m doing at the Forum is creating a global AI Council which will include prime ministers and presidents; for example, Theresa May has already said she will join it. It will include CEOs of major companies, CEOs of smaller companies and startups, leading academics, heads of civil society, and international organizations like UNICEF and the Office of the High Commissioner for Human Rights. We’ll bring those people together to think about what future projects should be, but also to think about what global governance mechanisms need to be in place.

ANSWERS: What is the Forum doing to educate the public about artificial intelligence?

FIRTH-BUTTERFIELD: We do a number of reports each year which we share. We have a whole work stream dedicated to the future of work, because imagining how work will be in the age of artificial intelligence is one of the big things that people need to be thinking about. I will be starting a program working with emerging economies to look at how AI could help them; we will look at what their strategies around AI would be. And we have a close connection with AI4ALL, the foundation started by Melinda Gates and Dr. Fei-Fei Li to help young people learn AI in an ethical way. So there are lots of different initiatives going on across the Forum.


Learn more

Explore more of our new series, AI Experts, where we interview thought leaders from a variety of disciplines — including technology executives, academics, robotics experts and policymakers — on what we might expect as the days race forward towards our AI tomorrow.
