AI Experts

Microsoft’s ethics head: Tending to the unintended with AI’s social impact

For Mira Lane of Microsoft, the key to creating ethical artificial intelligence systems starts with asking the right questions.

When it comes to the threats latent in our AI-empowered future, we may owe less to the imagination of James Cameron and more to that of Franz Kafka. Rather than hordes of deadly cyborgs wielding automatic weapons, picture instead a future where a person is arrested for a crime but never told what the crime is, why he was arrested, or by whom. The more he tries to find answers, the more hopeless his situation becomes. Entertainingly existential absurdity, until it happens to you.

Written more than one hundred years ago, Kafka’s The Trial illustrates the impact a bureaucracy run amok can have on an individual. Now, replace the word “bureaucracy” with “artificial intelligence.” Even in today’s landscape, where AI development is in its relative infancy, we still encounter stories about the negative effects of data bias in our legal system – how sentencing is sometimes handled differently according to skin color, for instance.

As we turn over more of our decision-making to machines, we have to look at the far-reaching consequences (subtle as well as obvious) before turning the switch to On. It’s a concern very much on the minds of the team at Microsoft, which has recently expanded the part of its operations focused specifically on the ethical implications raised by artificial intelligence. Mira Lane, Head of Ethics & Society in Cloud & AI for Microsoft, gave us an insider’s look at the organization’s work on this topic.

“There’s a lot of discussion that needs to be had around the purpose of what is being built. Is it ethical, and is the technology appropriate for this problem space? Who will be impacted? What are the unintended consequences? This is a big part of what we’re trying to do, to really shift the conversation away from the technology and towards understanding use and impact. Ultimately it means thinking about responsibility and accountability.” – Mira Lane, Microsoft


ANSWERS: Can you tell us a little bit about your role and what it means within the context of AI development?

MIRA LANE: This year, we’ve shuffled some pretty big things around at Microsoft. As we looked at the role Microsoft is playing in the artificial intelligence space, we felt it was important to expand this team’s charter to focus on ethics and society. So, my team is a multidisciplinary group focused on the ethical impact of emerging data technologies. We conduct research, we co-create with product teams to shape innovation through deep thinking around ethical issues, and we deliver tools, guidelines, and frameworks to help with scenario design, data collection, and bias detection. This is a new kind of effort at Microsoft. We’re tasked with doing some deep thinking in the space and operating as a mix of a think tank and an applied group: we take the ethics framework we’re actively developing and apply it to how we design products and how we do business.

ANSWERS: How do you see moral/ethical considerations related to AI being negotiated and resolved in a society where consensus is sometimes elusive?

LANE: I don’t think we have strong enough mechanisms in place, as a society, to really have these dialogues in a public setting. I see businesses grappling with this all the time. It seems like every week there’s another headline about a challenge in this space.

At Microsoft, our first approach was to establish our ethics board. It’s called Aether, which loosely stands for AI and Ethics in Engineering and Research. Its task is to recommend corporate policies and principles that address the ethical and the broader personal, social, and societal issues raised by these data-centric models and technologies. We meet regularly to review technologies and policies. We have virtual teams across the company working on issues around fairness, bias, human-machine collaboration, and much more. There are a number of these cross-company efforts.

We also meet with community leaders to understand the impact of our technologies. Those dialogues may not have happened broadly in the past, but we’re starting to have them now, to really understand how certain types of technologies affect groups that are more vulnerable and may not have had a voice in how technologies get built.

When you think about these types of AI technologies, they’re often hidden from users. We don’t always see what they’re doing to us. AI tech can be a power multiplier, and it can help people scale very quickly. These algorithms are based on lots of data, and the thing you often hear is that the biases in the data are reflected in the algorithmic models; the biases in how we collect data as a society show up in the models that are built. We need to be very thoughtful about how we design these systems.

You may have seen a post by Brad Smith (Microsoft’s president and chief legal officer) calling on government to put legislation in place around facial recognition technologies. I think a big part of this is that government must play an active role in this category of technologies. This is a new space, and while technology corporations (and the ethics board we’ve created) will have a point of view on how to approach it, I think it’s going to take a partnership with governments, and more of the kinds of dialogues we’re already having with the community, around the role of these technologies and how they play out.

ANSWERS: That leads to a question regarding the promotion of transparency. How do we ensure and promote transparency in AI research among development groups that may be looking for a competitive or first-to-market edge?

“Who are we solving for? Who’s winning? Who will be impacted? What are the unintended consequences? What are the potentials for misuse and exploitation?” – Mira Lane, Head of Ethics & Society in Cloud & AI for Microsoft

LANE: This is interesting, because I think the AI research space is surprisingly transparent compared to other technology fields. What I mean is that these technologies started in academia, with researchers who were incentivized to publish and share their work. They didn’t start in corporate America, and that’s a very important distinction: many techniques and breakthroughs are shared through academic papers. The questions I think we should be asking are:

  • Where does open collaboration help society?
  • Where can we do things like share common datasets, so that we’re building on fairer, more representative data and not perpetuating biases?
  • Where can we share ideas and methodologies to help us better work through issues of transparency, bias, and fairness?
  • How do we collaborate and operate in the competitive corporate space and improve how we collectively work?

One of the things Microsoft has done is publish a book called The Future Computed, which lays out our view of how the future might unfold. I think we have to do more of this kind of dialogue and be willing to break down some of the walls traditionally imposed in the corporate environment, so that we can share methodologies and datasets where doing so is really beneficial to everyone.

ANSWERS: What should AI developers and researchers be tackling now that they may be largely missing out on?

LANE: There must be more discussion around the purpose of what we’re building. Is it ethical, and is the technology appropriate for this problem space? Who are we solving for? Who’s winning? Who will be impacted? What are the unintended consequences? What are the potentials for misuse and exploitation? This is a big part of what we’re trying to do: to really shift the conversation away from the technology and toward understanding use and impact, and toward thinking about responsibility and accountability. These are questions you often don’t hear technologists talk about, but I think they’re the important ones.

For a group like mine, a big part of our work is around culture change and education. It’s about working to dial up the sensitivity of the people building these technologies, so that ethical considerations are factored in from the very beginning rather than surfacing at the end, when someone uses the product and there are unintended consequences.

ANSWERS: What would you say excites you the most about what artificial intelligence means for business and for society in the coming years?

LANE: It’s exciting to me that we’re finally having some real conversations across the industry about the societal impact of technology. We should’ve started these conversations a long time ago; that’s a big reason why we’ve been shifting my group to look at impact on society as well. This is the commercialization of profound new technologies, and it’s the right moment to look at how we might want to alter course and define the next generation of experiences.

These are the kinds of questions I wish technologists had asked 10 years ago, when we first started looking at the impact of the smartphone on how people work and operate. It’s a long time coming, and that really excites me, because we have to be more responsible about what we’re building and what its impact is.

ANSWERS: Besides the work you do in computer science, you’re also active as a video artist. How do you think the creative process informs the work you do in AI development?

LANE: I think the connection is maybe a loose one. Creatives like myself tend to like looking at where the edges of a technology are. What I mean by that is asking, “How can one technology be repurposed for a different use?” We like to create things where we’ve hacked technology in a certain way, or taken a tool intended for one purpose and done something completely different with it. With that comes a certain mindset and orientation. When I look at a technology, it helps me think through its various aspects and angles. I bring a level of lateral thinking that is just part of the creative brain, and being an artist strengthens that sort of mindset.


Learn more

In our new series, AI Experts, we interview thought leaders from a variety of disciplines — including technology executives, academics, robotics experts and policymakers — on what we might expect as the days race forward towards our AI tomorrow.
