

AI’s bias problem is human

AI is only as good as – and a reflection of – the data we train it on.

A 2016 investigation by ProPublica determined that COMPAS – AI-driven software that models the risk of recidivism in offenders – was biased against people of color. A study on Google’s advertising platform, which is powered by deep-learning algorithms, found that men were shown ads for high-paying jobs more often than women were, while another study found similar issues with LinkedIn’s job ads.

Tech stories abound these days about AI bias in areas as disparate as facial recognition software, loan approval, credit rating, and health care. The problem?

AI is trained on data sets that may contain inherent bias, and teams developing AI may be insufficiently diverse to recognize bias in their models. In short, the problem is human(s). Fortunately, the solution is also human.

Tonya Custis, PhD, is a Research Director in our Center for AI and Cognitive Computing. She recently sat down to talk about the challenges and opportunities of greater awareness and insight into AI bias and diversity – and how AI can make better lawyers.

Can AI be biased?

Tonya Custis: Artificial intelligence can absolutely be biased. AI is only as good as – and a reflection of – the data we train it on. Facial recognition software that is trained only on white males performs really poorly on black females. With this example you can see how an unintended choice can produce a really bad consequence. It might have societal implications … you are not even aware of.
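One concrete way to surface the kind of disparity Custis describes is to disaggregate a model’s accuracy by demographic group instead of reporting a single overall number. Below is a minimal Python sketch of that check; the data, group labels, and function name are hypothetical, and in practice you would run it on real predictions with real group annotations.

```python
# Disaggregated evaluation: report accuracy per demographic group.
# A large gap between groups is a red flag for training-data bias.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return the model's accuracy separately for each group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical toy data: true labels, model predictions, group tags.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(accuracy_by_group(y_true, y_pred, groups))
# {'a': 0.75, 'b': 0.25} -- the model is far less accurate for group
# "b", a disparity the overall accuracy (0.5) would have hidden.
```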

How can AI tools be trained to minimize unwanted bias?

Custis: The first factor in training AI tools to minimize unwanted bias is just awareness. Scientists who are training the models need to know that the model they have chosen for the task is appropriate for that task. They need to see all sides of the problem and make sure they are actually solving the problem they want to solve.

The next factor is diversity. The more diverse your team is, the better you will be able to anticipate different things that might come up after the model is trained.
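On the training side itself, one standard mitigation – offered here purely as an illustration, not something Custis names – is to reweight examples so that under-represented groups aren’t drowned out by the majority during training. A sketch:

```python
# Reweight training examples so each group contributes equal total
# weight, instead of letting the majority group dominate the loss.
# Illustrative sketch only; group labels are hypothetical.
from collections import Counter

def inverse_frequency_weights(groups):
    """Weight each example inversely to its group's frequency."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

groups = ["a"] * 8 + ["b"] * 2           # an 80/20 imbalance
weights = inverse_frequency_weights(groups)
print(weights[0], weights[-1])           # 0.625 for "a", 2.5 for "b"
# Both groups now carry equal total weight (5.0 each); most
# scikit-learn estimators accept these via the sample_weight argument.
```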

What is “explainable AI”?

Custis: Explainable AI is a current trend in AI research to make algorithms more interpretable or transparent to users. Currently, machine learning methods are often incorporated into systems as black boxes: people can’t really understand what’s going on inside them. This is both good and bad.

Sometimes it’s delightful and people are happy with the right answer. Other times the algorithm isn’t correct … so you get a wrong answer and people want to know why. Why is it suggesting this?

The goal of explainable AI, then, is to give an audit trail of the factors and features that weighed into the algorithm’s decision to give that output.
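Custis doesn’t name a specific technique, but one widely used way to approximate that audit trail for a black-box model is permutation importance: shuffle one feature at a time and measure how much the model’s accuracy drops. A sketch with scikit-learn, using a public dataset and a random forest purely as stand-ins:

```python
# Permutation importance: a rough "audit trail" showing which input
# features weighed most heavily in a black-box model's predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn on held-out data; the resulting drop
# in accuracy measures how much the model relied on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five features the model leaned on most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```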

Will “robot lawyers” replace human lawyers?

Custis: Robot lawyers will not replace human lawyers anytime soon. Lawyers do a lot of specialized intellectual tasks. They craft complex arguments. These are tasks that we are a very long way from mastering with AI. We can’t really train computers right now to do tasks that people aren’t good at, or that we can’t get good training data for. Since the very nature of being a lawyer is to argue over things people don’t agree on, it is difficult to get clear training signals for some of those tasks.

Our aim really is to help lawyers do their legal research faster and better. We can automate the easy parts and routine tasks that people have to do. We can augment that with the things that humans aren’t so good at, like sifting through a lot of information fast. Helping with those first stages of legal research will allow more time for attorneys to do the things that they actually went to law school for, the reason they probably became a lawyer, the things they enjoy. They’ll have the time to do those things better because we helped them do the easier tasks faster.
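That “sifting through a lot of information fast” step is, at its core, document retrieval. The sketch below shows the idea with simple TF-IDF ranking in scikit-learn; it is a deliberate simplification with invented case summaries, not a description of any Thomson Reuters system.

```python
# Toy sketch of the retrieval step in legal research: rank documents
# by TF-IDF cosine similarity to a query. The case summaries are
# invented; real research systems are far more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Employer held liable for negligence in a workplace injury claim.",
    "Contract dispute over the enforceability of a non-compete clause.",
    "Recidivism risk assessment challenged as a due process violation.",
]
query = "risk assessment due process"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

# Score and print documents from most to least relevant.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {documents[idx]}")
```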


Learn more

Watch Tonya’s complete interview, and explore more insights on how AI will impact the legal profession.
