
Can machine learning unlearn cognitive bias in finance?

Nayeem Syed  Assistant General Counsel at Thomson Reuters

The human mind makes mistakes when it comes to finance, but machine learning does, too. What obstacles must both overcome?

We have heard much recently of the significant investment (and success) in building machine learning capabilities. Financial institutions are now striving to explore the full potential of artificial intelligence agents to replicate (and improve upon) human decision-making. With fraud detection, for example, the hope is to reduce the number of false claims and instances of money laundering. With trading, the hope is to reduce the number of underestimated risks and overestimated returns. With robo-advisors, the hope is to reduce research costs and portfolio misallocations.

Ideally, machine learning will produce superior outcomes at lower cost.  However, to what extent will machine learning avoid repeating human mistakes?  As we delegate more of our decisions to machines, can we eliminate more of our own cognitive biases?  We first need to understand those biases much better.

Bias is natural but, out of nature, often unhelpful

The human brain is very powerful but, of course, has limitations.  For example, it struggles to process fast-changing or very large amounts of data.  It therefore developed many decision-simplifying filters and, as a result, evolved to rely on complex forms of bias.  These biases help it assess and rationalize risks, determining which it can accept and which it should avoid.  Human societies are now far more complex, and individuals face different threats and many more decisions, but our cognitive abilities need much more time to evolve correspondingly.

When faced with too much information and too little experience, we too often simply guess.  This is true even of experts, which is particularly troubling when it comes to financial professionals.  The trouble for the rest of us comes when experts are blind to their own blindness.  Their cognitive biases can create errors which can then compound and potentially contribute to systemic risk if they feed into, say, economic policy, financial supervision strategies or regulatory enforcement.

Our decisions rest on unsupported assumptions as well as validated facts.  The trouble is that we rarely notice this happening, or grasp its implications.

Unconscious bias

This is often called the mother of all biases because it is a component of many others.  We may – often without realizing it – accept or dismiss an investment proposal based on preconceptions about the people proposing it.  To save time and effort, we pigeonhole proposers based on criteria such as age, gender and even perceived attractiveness.  We also subconsciously assign them presumed traits, such as ability and even honesty.

We may be less rigorous with a proposal from a proposer with whom we identify closely, perhaps imagining a bond because we attended the same school or university.  Alternatively, we may be overly rigorous with a proposer who didn’t attend university at all.  While unintended, this favoritism can nonetheless prevent critical self-challenge and disciplined appraisal.

We both under- and overestimate some people, opportunities and risks based on entirely irrelevant factors. The result is that we unknowingly mis-price risk.

Confirmation bias

The key to its perniciousness is that we seek out evidence that agrees with us and are much harder on evidence that doesn’t support what we wish to believe.  When developing an investment thesis, we may overweight helpful anecdotal data and underweight unhelpful hard data.  Similarly, we may surround ourselves only with those who agree with us, which can lead to groupthink, prevent constructive challenge and create fatal blind spots.
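A toy model (a hypothetical sketch, not from the article) makes the mechanics visible: suppose two analysts update their belief in an investment thesis from an evenly mixed stream of evidence, but one discounts the contrary pieces.  The even-handed analyst stays near 50 percent confidence, while the biased one drifts toward certainty on no real information.

```python
# Illustrative sketch only: asymmetric weighting of evidence inflates
# confidence in a thesis even when the evidence is evenly mixed.

def update(belief, supports, discount=1.0):
    """Odds-style belief update.  Each supporting item doubles the odds;
    each contrary item halves them, damped by `discount` (1.0 = no bias)."""
    likelihood_ratio = 2.0 if supports else 0.5 ** discount
    odds = (belief / (1.0 - belief)) * likelihood_ratio
    return odds / (1.0 + odds)

evidence = [True, False] * 3  # three supporting, three contrary items

fair = biased = 0.5
for item in evidence:
    fair = update(fair, item, discount=1.0)    # weighs both sides equally
    biased = update(biased, item, discount=0.3)  # discounts contrary evidence

print(f"even-handed belief: {fair:.2f}")   # stays near 0.50
print(f"biased belief:      {biased:.2f}")  # drifts well above 0.50
```

The numbers here (a 2:1 likelihood ratio, a 0.3 discount) are arbitrary illustrations; the shape of the result is not.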

We overestimate our objectivity, as well as our efforts to be rigorous. The result is a false sense of security.

Visitors take photos of the Robot Assistant Pepper by SoftBank Robotics during the Mobile World Congress in Barcelona, Spain February 27, 2018. REUTERS/Yves Herman

Loss aversion

We may be swayed to make decisions based on how proposals are framed in terms of potential gains rather than potential losses.  When selecting between choices with the same expected payoff, we may (be manipulated to) choose the one packaged and presented with the emphasis on potential gains.  For example, investment A, with a 100 percent chance of receiving US$10, is preferred over investment B, which has an equal chance of returning US$20 or US$0.
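The arithmetic behind that example can be checked in a few lines of Python (an illustrative sketch, not part of the article): both investments carry the same expected value, so preferring A is a matter of framing, not payoff.

```python
# Expected-value check for the two investments described above.
# The figures come from the example in the text; the code is illustrative.

p_a = {10.0: 1.0}            # Investment A: US$10 with certainty
p_b = {20.0: 0.5, 0.0: 0.5}  # Investment B: 50/50 chance of US$20 or US$0

def expected_value(dist):
    """Expected payoff of a {payoff: probability} distribution."""
    return sum(payoff * prob for payoff, prob in dist.items())

print(f"EV(A) = {expected_value(p_a):.2f}")  # 10.00
print(f"EV(B) = {expected_value(p_b):.2f}")  # 10.00
```

A purely rational agent would be indifferent between the two; the systematic preference for A is what loss aversion describes.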

We fear losses disproportionately and require much higher potential gains before accepting a given level of risk. The result is that we forego reasonable opportunities and achieve lower average returns.

Hot hand or gambler’s fallacy

In sports, a player on a scoring hot streak is passed the ball much more in the belief that he or she is more likely to score than others.  Similarly, with a high-profile fixed-income investment manager, we may set aside what we know about diversification and over-invest in their fund, or their exciting new specialty fund, without doing our full homework.  We simply assume their success will continue, or will transfer across entirely different investment strategies.

The opposite can also be true.  A coin has no memory, and the chances of a given result are the same on every flip.  However, after, say, nine consecutive heads, many of us are tempted to think that a tail is much more likely on the next flip.  Equally, we assume an asset-price correction is simply “overdue” without evidence.
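A short simulation (illustrative only, not from the article) makes the point concrete: even conditioning on a run of nine heads, the tenth flip still comes up heads about half the time.

```python
import random

# Illustrative simulation: the probability of heads immediately after nine
# consecutive heads is still one half, because each flip is independent.
# The coin has no memory.

random.seed(42)  # arbitrary seed, for a reproducible run

def flip():
    """One fair coin flip: True = heads."""
    return random.random() < 0.5

outcomes = []
while len(outcomes) < 2000:
    # all() short-circuits, so failed streaks are abandoned at the first tail
    if all(flip() for _ in range(9)):   # found nine heads in a row
        outcomes.append(flip())         # record the tenth flip

rate = sum(outcomes) / len(outcomes)
print(f"P(heads | nine heads in a row) ~ {rate:.2f}")  # close to 0.50
```

The gambler’s fallacy is precisely the expectation that this number should be well below 0.50.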

We ignore that, through reversion to the mean, superior performers are on average eventually caught by determined competitors. The result is that we make blind predictions about when streaks (if they exist at all) will continue or end.

Blind to our blindness?

Heuristics (mental rules of thumb) are probably unavoidable, and perhaps essential, in modern life.  They can, however, prevent us from correctly executing decision rules.  As we program machines to make more of our decisions, we have the opportunity to consciously design them to control for our biases.

The home-use social robot piBo is displayed at the Mobile World Congress in Barcelona, Spain, February 26, 2018. REUTERS/Yves Herman

However, we must distinguish between cognitive biases and logic errors.  The former is a failure to follow the underlying reasoning, born of predetermined beliefs, mental short-cuts and the tendency to ignore contrary evidence.  The latter is, broadly, an error within the underlying reasoning itself.  In our next post, we will discuss algorithmic bias and the limitations of machine learning where logic errors are embedded in the decision rules.


Learn more

Our report Are you ready for blockchain? is now ready for complimentary download.
