
Artificial intelligence

Tech must apply roboethics to keep AI from learning our mistakes

Alex Paladino  Global Managing Director, Head of Technology Practice Group, Thomson Reuters


How can we safeguard against artificial intelligence learning our prejudices, biases and unsavory views?

Is artificial intelligence (AI) learning our bad behaviors?

As members of the news media swarm around the epidemic of sexual misconduct in the workplace, many of us have been shocked to see familiar faces and bold-faced names exposed as predators. Hopefully, that jolt is enough for those of us working in the technology industry to start taking a hard look at the inherent biases and insensitivities we may be building into the algorithms that increasingly power our world.

In 2016 alone, companies invested upwards of US$39 billion in AI-powered technologies, and analysts anticipate AI will contribute up to US$15.7 trillion to the global economy by 2030, making it one of the brightest spots in tech for the foreseeable future. It’s also one that requires a new way of thinking about the role of corporate social responsibility and ethics in the programming process.

Ozgur Akin, Chairman and founder of Akinsoft, and engineers check an “ADA GH5” humanoid hybrid robot at Akinrobotics, the country’s first-ever factory to mass-produce humanoid robots, in Konya, Turkey. REUTERS/Murad Sezer

What makes AI so exciting, of course, is that the technology understands nuance and learns as more inputs are collected. An AI program designed to help doctors treat cancer, for example, can analyze thousands of medical journal publications, scores of electronic health records, and individual patient histories to recommend a course of treatment. As it accumulates more and more information, the software learns which protocols work best and which do not. That unique capability makes the technology “smart.” But it also allows it to be influenced by the humans who build it.

Take the recent experience of University of Virginia computer science professor Vicente Ordóñez-Roman, who, while building an AI-powered image recognition program, noticed that it was absorbing the unconscious biases of the researchers who built it. Studying the phenomenon further, he found that the machine-learning software was more likely to associate women with images of kitchens, shopping, and washing, while men were associated with images of coaching and shooting.
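
To see how such a skew surfaces, it helps to look at the training labels themselves. Below is a minimal Python sketch of a label audit, counting how often each activity co-occurs with each gender annotation; the data format and toy examples are assumptions for illustration, not artifacts of Ordóñez-Roman’s study.

    # Hypothetical audit of gender/activity co-occurrence in training labels.
    from collections import Counter

    # Each annotation pairs a gender label with an activity label (toy data).
    annotations = [
        ("woman", "cooking"), ("woman", "shopping"), ("man", "coaching"),
        ("woman", "cooking"), ("man", "shooting"), ("man", "cooking"),
    ]

    counts = Counter(annotations)
    activities = {activity for _, activity in annotations}

    for activity in sorted(activities):
        women = counts[("woman", activity)]
        men = counts[("man", activity)]
        print(f"{activity}: {women / (women + men):.0%} of examples depict women")

A model trained on labels skewed this way inherits the imbalance and, as the studies described here found, can even amplify it.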

A similar pattern was found in a separate study conducted by researchers at Boston University, which used a natural language processing program to analyze text collected from Google News and found that the program amplified male/female gender stereotypes. In one example, when the researchers asked the program to complete the statement “Man is to computer programmer as woman is to X,” it replied, “homemaker.”
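
That completion test is ordinary word-embedding arithmetic: the model represents each word as a vector, so “man is to computer programmer as woman is to X” becomes programmer − man + woman, followed by a nearest-neighbor lookup. Here is a minimal sketch using the gensim library; the pretrained Google News vector file is an assumption about the setup, and “computer_programmer” is how that multiword term appears as a token in those vectors.

    from gensim.models import KeyedVectors

    # Load pretrained word2vec vectors (the file name is an assumption; any
    # word2vec-format embedding trained on news text would behave similarly).
    vectors = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin", binary=True
    )

    # "man : computer programmer :: woman : X" as vector arithmetic.
    result = vectors.most_similar(
        positive=["woman", "computer_programmer"], negative=["man"], topn=1
    )
    print(result)  # the researchers reported "homemaker" as the top answer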

The fact is, AI learns from humans, absorbing any number of unconscious biases rooted in the gender, racial, ethnic, or socioeconomic backgrounds of its programmers. In the case of the technology sector in the United States, that means AI is learning primarily from a workforce that is 68.5 percent white and 64.3 percent male, according to the U.S. Equal Employment Opportunity Commission.

These basic facts, paired with the revelations of widespread workplace harassment, should catapult the topic of “roboethics” to the forefront of tech companies’ AI agendas. As technology becomes more human, it also needs to be subjected to the type of ethical scrutiny that ensures we are not building algorithms that expose our companies to new legal, operational, and social risks. That means building internal ethics committees into the technology function and committing to rigorously testing the technology not just for its utility, but also for its neutrality.
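
What might such a neutrality test look like in practice? One simple starting point, offered here as an illustrative assumption rather than a prescription, is a demographic-parity check: compare the rate of favorable outcomes a model produces for different groups and flag large gaps for human review.

    # A minimal demographic-parity check on hypothetical model decisions.
    def parity_ratio(outcomes_a, outcomes_b):
        """Ratio of positive-outcome rates between two groups of 0/1 labels."""
        rate_a = sum(outcomes_a) / len(outcomes_a)
        rate_b = sum(outcomes_b) / len(outcomes_b)
        return min(rate_a, rate_b) / max(rate_a, rate_b)

    # Hypothetical approval decisions (1 = favorable) for two groups.
    group_a = [1, 1, 0, 1, 1, 0, 1, 1]
    group_b = [1, 0, 0, 1, 0, 0, 1, 0]

    ratio = parity_ratio(group_a, group_b)
    print(f"parity ratio: {ratio:.2f}")
    if ratio < 0.8:  # the "four-fifths" rule of thumb; an assumed threshold
        print("flag for ethics review: outcome rates diverge across groups")

No single metric settles the question, since parity, calibration, and error rates can pull in different directions, but automating even one such check gives an internal ethics committee something concrete to act on.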

San Marcos student Amaris Gonzalez takes a selfie with “Pepper,” an AI project utilizing a humanoid robot from French company Aldebaran, reprogrammed as an assistant for students attending Palomar College in San Marcos, California, United States. REUTERS/Mike Blake

As we come to rely on AI for all manner of data processing and screening tasks, the stakes of getting the ethics part of the formula right are high.

For example, a recent investigation by ProPublica tested for machine bias in COMPAS, risk assessment software used by the U.S. Department of Justice and National Institute of Corrections to conduct criminal risk assessments that help inform sentencing decisions for felony offenders. The analysis found the system disproportionately assigned lower risk scores to white defendants than to black defendants. That kind of example demonstrates just how serious an impact machine bias can have if left unchecked.
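
The mechanics behind that finding can be sketched in a few lines: among defendants who did not go on to reoffend, how often did the tool nonetheless score them high risk? The records below are invented for illustration; only the shape of the comparison mirrors ProPublica’s published methodology.

    # Per-group false-positive-rate comparison on hypothetical records.
    def false_positive_rate(records):
        """records: (labeled_high_risk, reoffended) boolean pairs."""
        did_not_reoffend = [r for r in records if not r[1]]
        wrongly_flagged = [r for r in did_not_reoffend if r[0]]
        return len(wrongly_flagged) / len(did_not_reoffend)

    scores_by_group = {
        "group A": [(True, False), (False, False), (True, False), (True, True)],
        "group B": [(False, False), (False, False), (True, False), (False, True)],
    }

    for group, records in scores_by_group.items():
        print(f"{group}: false positive rate {false_positive_rate(records):.0%}")

Two groups can see the same overall accuracy yet bear very different error burdens, which is exactly the kind of gap an evaluation focused only on utility will miss.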

There’s an old adage in the world of computer science: garbage in, garbage out. It means the quality of any output is determined by the quality of the inputs. As an industry, the tech community needs to heed that warning when it comes to building the algorithms that are central to so many exciting new breakthroughs. The alternative is a future in which our digital counterparts are no less flawed than their human creators.


Learn more

In our 2018 AI Predictions report, our industry experts share their forecasts for how developing technology will shape our future.

 
