
Is it possible for AI to be biased?

Artificial intelligence (AI) is starting to generate real change in the delivery of legal services across the industry. Legal technology, including AI tools, is widely considered to bring largely positive change to organisations. But are there any negatives and, if so, how can organisations tackle them?

Dr Paola Cecchi-Dimeglio, a behavioural scientist and senior research fellow for Harvard Law School’s Center on the Legal Profession and the Harvard Kennedy School, says that AI, machine learning and deep learning can be biased and that the industry is only now becoming aware of it.

Dr Paola adds that it is evident from computer science and technology research that data and algorithms can carry biases and those biases can disproportionately affect some people within a group, much like traditional biases. For example, researchers have identified that software which seeks to predict the likelihood of certain crimes being committed in the future, based on gender and race, can be biased towards certain groups, especially minorities.

The reason for this, Dr Paola claims, is that the system itself ‘is already tainted with biases, so any other data or trends extrapolated from that system contain these biases; the new data would be based on the pre-existing biases within the old data.’

Therefore, if an algorithm is built on that data, and the data is taken at face value without recognising how easily biases can disadvantage certain groups, the machine learning and deep learning models that come out of it will keep confirming an incorrect image of society.

How organisations can combat bias within AI

In order to prevent the continuation of such biases, there are two points to consider:

  • First, determine how accurate your data is: check whether it treats members of the group you are examining differently, and how accurate it is for the larger group overall. In a sense, the question you should be asking yourself is: how inaccurate is my data? (A sketch of such an audit follows this list.)
  • Second, control for these biases. When examining your data, it is vital to have professionals who understand this issue and can design algorithms that account for the diversity of the group and control well for the inaccuracies and mislabelling of people that might occur.
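
To make the first point concrete, here is a minimal sketch of such an audit in Python. It assumes a hypothetical DataFrame of model outputs with `prediction`, `actual` and `group` columns; none of these names, nor the figures, come from the article.

```python
# A minimal sketch of the audit described above: compare a model's
# accuracy for each subgroup against its accuracy overall. The column
# names (prediction, actual, group) and the records are hypothetical.
import pandas as pd

def audit_accuracy_by_group(df: pd.DataFrame, group_col: str = "group") -> pd.DataFrame:
    """Return per-group accuracy alongside its gap from overall accuracy."""
    df = df.assign(correct=df["prediction"] == df["actual"])
    overall = df["correct"].mean()
    per_group = df.groupby(group_col)["correct"].mean().rename("accuracy").to_frame()
    # A large gap between a group's accuracy and the overall figure is a
    # signal that the data, or the model trained on it, treats that group
    # differently and needs a closer look.
    per_group["gap_vs_overall"] = per_group["accuracy"] - overall
    return per_group

# Made-up example records: the model is noticeably less accurate for group B.
records = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "actual":     [1,   0,   1,   0,   1,   0,   1,   0],
    "prediction": [1,   0,   1,   0,   0,   1,   1,   0],
})
print(audit_accuracy_by_group(records))
```

The point is not any specific threshold but the comparison itself: a subgroup whose accuracy falls well below the overall figure is exactly the kind of inaccuracy described above.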

Dr Paola said that experts in criminology, sociology and statistics acknowledge that some AI software in use is inherently tainted by racial and gender bias, and warned that these biases ‘have to be controlled’.

“So, even as you set out to determine something from your existing data, you have to ask, for whatever variable you use to define your data: is it a neutral one? That is not an easy task, because data are never neutral and there is never a neutral way of looking at things,” Dr Paola added.

“If you think about a simple question, such as how many convictions a person has received for crime, and you plan to take that as a neutral variable, you should think twice. You need to more closely examine what you’re asking the algorithms to do. In this case, you’d have to understand — and embed that understanding in your algorithms — that the number of arrests among minority group members may be disproportionate compared to the number for non-minority group members living in the same area”.
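
To illustrate the point in that quote, here is a minimal hypothetical sketch in Python. The groups and enforcement-rate multipliers are invented for illustration and do not come from the article; the idea is only that a raw count taken at face value embeds the disparity, while a rate-adjusted figure makes like cases comparable.

```python
# A minimal sketch of why a raw conviction count is not a neutral variable
# if enforcement intensity differs by group. The groups ("X", "Y") and the
# rate figures below are entirely hypothetical, for illustration only.
# Suppose group X is policed twice as heavily as group Y for the same
# underlying behaviour; dividing the raw count by that relative rate
# gives a rough comparable figure instead of a face-value one.
relative_enforcement_rate = {"X": 2.0, "Y": 1.0}  # assumed multipliers

def adjusted_convictions(raw_count: int, group: str) -> float:
    """Scale a raw conviction count by the group's relative enforcement rate."""
    return raw_count / relative_enforcement_rate[group]

# Two people with the same underlying behaviour: at face value the person
# from group X looks twice as 'risky'; the adjusted figures are equal.
print(adjusted_convictions(4, "X"))  # 2.0
print(adjusted_convictions(2, "Y"))  # 2.0
```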

Without closely examining and controlling for this bias, Dr Paola concluded, ‘you are continuing to perpetuate the biases’ because the algorithms ‘have lost the context and take at face value what the data says’.

To read the full article, go to Thomson Reuters’ Legal Executive Institute.
