

How can we avoid human error becoming algorithmic error?

Nayeem Syed, Assistant General Counsel at Thomson Reuters

If we don't learn to strain human error from algorithms, machine learning can't live up to its full potential.

In a recent post, I discussed cognitive bias in finance. As we train algorithms to help us improve decision quality, we must differentiate between errors in failing to follow decision rules and errors embedded within the decision rules themselves. The former stems from the human susceptibility to depart from a set of decision rules because of compelling but irrational beliefs. The latter refers to the risk that predictive models already contain the biases and logical fallacies of their human creators, and that machines are simply repeating, and possibly amplifying, them.

As the market for machine learning services matures, buyers and sellers will increasingly need to focus on addressing that risk effectively, to ensure that as business logic is applied rapidly and at lower cost, it truly improves decision-making. Sellers will need to persuade buyers that their algorithms are constructed to minimize unhelpful bias, and buyers will need to see this demonstrated. All parties will incorporate those outputs, directly or indirectly, only if they are confident that the designers have addressed the risks of algorithmic bias. Every participant in the value chain can point to how it addresses this issue to demonstrate its products’ superiority.

Algorithms are designed to apply decision rules at scale, not to challenge them.

With greater investment in artificial intelligence (AI), there is greater pressure to productize machine learning capabilities at both scale and speed. There are reasons for pause, but machine learning also presents an opportunity. If we recognize the dangers of unchallenged bias, we can, when reducing a business process to an algorithm, identify and confront incorrect underlying assumptions and logic, and redesign the process so we do not repeat the human mistakes of addressing the wrong questions or the wrong data.

Anyone who can uncover the biases present in training data and suggest a method to “debias” the cognitive embedding will be well rewarded. If we understand the pitfalls better, we can take the appropriate steps to prevent algorithmic bias.
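To make “debias” concrete, here is a minimal sketch of one published idea, the hard-debias projection of Bolukbasi et al. (2016): estimate a bias direction from a pair of words, then project that direction out of other word vectors. The four-dimensional vectors below are toy values invented for illustration, not output from a real embedding model.

```python
import numpy as np

def bias_direction(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Unit vector pointing from word b toward word a."""
    d = a - b
    return d / np.linalg.norm(d)

def remove_component(v: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Subtract the vector's projection onto the bias direction."""
    return v - np.dot(v, direction) * direction

# Toy 4-dimensional embeddings (hypothetical values, not a trained model).
he = np.array([0.9, 0.1, 0.3, 0.2])
she = np.array([-0.9, 0.1, 0.3, 0.2])
engineer = np.array([0.4, 0.7, 0.1, 0.5])

g = bias_direction(he, she)
engineer_debiased = remove_component(engineer, g)

# The gendered component of "engineer" drops to ~0; the rest is preserved.
print(np.dot(engineer, g), "->", np.dot(engineer_debiased, g))
```

The same projection applies to every vector in the vocabulary; the hard part in practice is choosing the word pairs that define the bias direction in the first place.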

Algorithms work well on a track but not so well off-road. Ensure we are asking what is reasonable of an algorithm.

Khalid Al-Kofahi, Vice President of Research & Development at Thomson Reuters and head of its Center for Cognitive Computing, explains: “Humans excel at making decisions on subjects they have experience in and are able to be creative (e.g., by inference and by analogy) to totally new situations, while machines excel at making decisions on repeatable tasks (and scale much better than humans on these tasks), but are ineffective when applied to new situations, at least for now (new techniques, e.g., word embedding in deep learning, promise to enable machines to ‘think’ by analogy).”
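To illustrate what ‘thinking’ by analogy with word embeddings looks like, here is a minimal sketch of the classic king - man + woman ≈ queen vector arithmetic. The three-dimensional vectors are hypothetical toy values chosen so the analogy works, not learned embeddings.

```python
import numpy as np

# Toy 3-dimensional "embeddings" (hypothetical values, not a trained model).
vocab = {
    "king":  np.array([0.9, 0.8, 0.1]),   # royal, male
    "queen": np.array([0.9, -0.8, 0.1]),  # royal, female
    "man":   np.array([0.1, 0.8, 0.2]),
    "woman": np.array([0.1, -0.8, 0.2]),
    "apple": np.array([-0.7, 0.0, 0.9]),
}

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Solve the analogy "man is to king as woman is to ?" by vector arithmetic.
target = vocab["king"] - vocab["man"] + vocab["woman"]
best = max((w for w in vocab if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(vocab[w], target))
print(best)  # queen
```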

Nursing care and communication robot ‘Robohon’ sits on a table next to the bed of a resident at Shin-tomi nursing home in Tokyo, Japan, February 5, 2018. REUTERS/Kim Kyung-Hoon

Training results inherit the limitations of their trainers. Ensure we have the necessary domain insights.

We must honestly self-assess whether we have the specific domain knowledge needed to create a useful predictive model. Not doing so may lead us to falsely believe we can predict events that we, in fact, cannot. We know that how a question is framed determines the answers received. That is true with people as well as machines. Machine learning is potentially more pernicious, as it permanently hard-codes the biases of the humans it is modeled on and then applies them at speed.

When great insights meet bad data, the results don’t do the former justice.  Ensure we are using trusted and complete data.

Different training data produce different conclusions. This is true with both large and small data. Algorithms can only perform their operations within the boundaries in which they are set. For example, when validating a fund strategy, we may fail to include the funds that failed and address only those that survived to the end of the sample period. Our conclusions then over-represent the surviving funds, and we may draw inaccurate conclusions about their strategies. With machine learning, if the training data fails to fully reflect the entire population (convenient, inconvenient and incomplete), we risk producing misleading results, as the sketch below illustrates.
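A minimal sketch of that survivorship trap, using randomly generated fund returns (hypothetical data): averaging only the funds that survive a drawdown cut-off makes the strategy look better than the full population actually performed.

```python
import random

random.seed(42)

# 200 hypothetical funds, each with 24 months of random returns.
all_funds = [[random.gauss(0.0, 0.05) for _ in range(24)] for _ in range(200)]

def cumulative_return(monthly: list[float]) -> float:
    total = 1.0
    for r in monthly:
        total *= 1.0 + r
    return total - 1.0

def mean_return(funds: list[list[float]]) -> float:
    return sum(cumulative_return(f) for f in funds) / len(funds)

# Funds losing more than 10% "fail" and drop out of the sample we see.
survivors = [f for f in all_funds if cumulative_return(f) > -0.10]

print(f"full population: {mean_return(all_funds):+.2%}")
print(f"survivors only:  {mean_return(survivors):+.2%}")  # flattering, and wrong
```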

We must guard against over-focusing on horsepower and under-focusing on steering. Ensure we are not asking the computer to validate our human model.

Machines are better than humans at following instructions, but the concern with algorithmic bias is that undetected errors in the instructions can amplify our misunderstanding and send us very fast in the wrong direction. Remediating bias is what data scientists and technologists must strive for, as it will be increasingly important to demonstrate that the underlying model has been created without reference to immaterial factors; one simple form that demonstration can take is sketched below.
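What “without reference to immaterial factors” might look like as a test: the sketch below compares outcome rates across two groups split by a factor that should not matter, a demographic-parity style check. The decisions and the tolerance threshold are hypothetical values for illustration.

```python
def approval_rate(decisions: list[int]) -> float:
    return sum(decisions) / len(decisions)

# Model decisions (1 = approved, 0 = declined), split by a factor
# that should be immaterial to the outcome. Hypothetical data.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]
group_b = [1, 0, 0, 1, 0, 0, 1, 0]

gap = abs(approval_rate(group_a) - approval_rate(group_b))
print(f"approval-rate gap: {gap:.2f}")

if gap > 0.05:  # tolerance chosen purely for illustration
    print("warning: outcomes vary with an immaterial factor; investigate the model")
```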

The key insight in all of this is that the expertise of the human trainers and their superior human business processes remain critical to improving results. Machines are as prone to bias as the human thinking they try to replicate, but if we consciously choose to recognize and address this, we can truly improve our speed, cost and accuracy.

German Chancellor Angela Merkel shakes hands with a humanoid robot at the booth of IBG at Hannover Messe, the trade fair in Hanover, Germany, April 23, 2018. REUTERS/Fabian Bimmer

The good news is that we have an ideal opportunity to review, challenge and redesign business processes, to avoid misunderstanding the issues and then misdirecting machine learning toward the wrong questions. Armed with this awareness, we can actually improve on the human process and help keep human bias out of machine learning by implementing measures that improve the inputs and generate more accurate outputs. We need to build a robust development process that seeks to uncover and manage the potential for embedded human bias.


Learn more

Our report Are you ready for blockchain? is available for complimentary download.

Explore our full suite of technology solutions.
