
Learning from data: Building brains … and using them

Dr. Andrew Fletcher, Director, Thomson Reuters Labs™ – London

We like our machines to feel human, even if they don’t look it. The pulsing on and off of the power light on an Apple computer when it is “sleeping” is reassuring. Even the red light of HAL in 2001: A Space Odyssey gave an assurance that the machine was alive, rather than a faceless menace. One of the pioneers of computing, Alan Turing, was amongst the first to address the challenge of artificial intelligence and gives his name to the Turing test for a “machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.”

Learning from our mistakes makes us human. Learning is a fundamentally important part of how we make decisions, how we build things and how we create value. But learning is not limited to humans, or indeed just to animals. Since the middle of the twentieth century, machine learning has advanced to levels that would once have been thought science fiction, and is transforming whole industries. The latest advances include self-driving cars, a computer that can beat humans at Jeopardy! and real-time machine translation that doesn’t seem too far from the universal translator in Star Trek.

Pretending to be clever

That definition, of behavior indistinguishable from a human’s, is an ever-moving target. Those examples, which until recently were the stuff of science fiction, are now counted as soft Artificial Intelligence (AI). Pretending to be clever, these machines are “inspired by, but don’t aim to mimic the human brain,” instead using vast amounts of data and computing power to come up with results that “we tend to associate with human intelligence.”

So, a self-driving car can be really accomplished at that task, but wouldn’t know how to begin if it were suddenly put in charge of an airplane. A computer could triumph at Jeopardy! but then have no idea what to do if you proposed a game of chess as a rematch. This doesn’t diminish the accomplishments, but as the boundaries are pushed, the benchmark for true AI becomes harder to reach.

AI as a “Mixed Martial Art”

A defining question, as machines become smarter, is what this means for us. Highly respected figures, including Professor Stephen Hawking and Elon Musk, have warned of the dangers of machines that can think for themselves, surpass our intelligence, and develop without our intervention. Addressing a more immediate challenge, MIT’s Erik Brynjolfsson and Andrew McAfee wrote their book The Second Machine Age to ask what this means for our jobs as more and more advanced work becomes capable of being automated.

Garry Kasparov, the chess Grandmaster famously defeated by IBM’s Deep Blue in the 1990s, told how, rather than concluding that humans would never beat computers at chess again, he realized he would have performed much better with access to the same chess programs that Deep Blue had. He pioneered freestyle chess matches as a ‘mixed martial art’ in which players can compete solely as humans, as machines, or as ‘centaurs,’ teams that combine the best of both and are more effective as a result. Brynjolfsson and McAfee also champion this concept of working with machines to enhance human capabilities.

AI is growing rapidly as a tool that forms a vital part of knowledge work. You wouldn’t want to replace your doctor or your lawyer with a machine, but it won’t be long before you question the capabilities of a professional who, unlike the centaur chess teams, is not using every tool at their disposal to do a better job than ever before. As part of its Watson University Programs, IBM is challenging students to put Watson to work. In 2015, students from the University of Toronto built Ross, a super-intelligent attorney designed to help lawyers with their legal research. A lawyer can ask Ross a question; Ross taps into the vast legal resources at its disposal to surface results the lawyer couldn’t possibly have had time to find on their own, and the lawyer’s feedback helps Ross continually learn and improve the results it gives.
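That ask-and-feedback loop (a lawyer queries, the system retrieves and ranks results, and the lawyer’s feedback nudges future rankings) can be sketched in a few lines of code. The Python below is purely illustrative: the LegalAssistant class, its keyword-overlap scoring and its relevance weights are hypothetical stand-ins under simplified assumptions, not Ross’s or Watson’s actual implementation.

```python
# Hypothetical sketch of a research-assistant feedback loop in the spirit of Ross.
# None of these names come from the real product; they only illustrate the
# ask -> retrieve -> feedback cycle described in the text.
from dataclasses import dataclass, field


@dataclass
class LegalAssistant:
    """Toy question-answering assistant that learns from user feedback."""
    corpus: dict[str, str]                                      # document id -> text
    relevance: dict[str, float] = field(default_factory=dict)   # learned per-document boosts

    def ask(self, question: str, top_k: int = 3) -> list[str]:
        """Rank documents by naive keyword overlap, boosted by past feedback."""
        terms = set(question.lower().split())
        scores = {
            doc_id: len(terms & set(text.lower().split())) + self.relevance.get(doc_id, 0.0)
            for doc_id, text in self.corpus.items()
        }
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

    def feedback(self, doc_id: str, helpful: bool) -> None:
        """Nudge a document's boost up or down so future answers improve."""
        self.relevance[doc_id] = self.relevance.get(doc_id, 0.0) + (1.0 if helpful else -1.0)


# Usage: ask a question, review the results, then tell the assistant what helped.
assistant = LegalAssistant(corpus={
    "case_001": "breach of contract damages and remedies",
    "case_002": "patent infringement injunction standard",
})
results = assistant.ask("What remedies are available for a breach of contract claim?")
assistant.feedback(results[0], helpful=True)   # the top result was useful
```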

Paul Lippe and Dan Katz make 10 predictions about how Watson will impact the law, including catalyzing ‘better organization of legal information and legal data.’ This trend holds across the spectrum of knowledge work: machines identify inconsistencies in data, which in turn drives better data that enables further insights.


Infographic: how artificial intelligence (AI) can solve business problems. Source: Narrative Science.


Building brains

So to the big question: Will artificial intelligence ever be so advanced that machines don’t need us anymore? It’s easy to focus on the superiority of machines and the dangers of human error. Yet the safety record of self-driving cars obscures the many times the human backup has had to take over. There have been tragic examples where the human pilots of planes have been taken by surprise when the autopilot suddenly hands back control. Humans need the opportunity to practice, ready for the moment they need to intervene. Otherwise, the consequence is a catastrophic loss of that knowledge and ability, as vividly portrayed in E.M. Forster’s short story The Machine Stops.

Professor Steve Furber is creating SpiNNaker, a new kind of computer architecture that directly mimics the human brain, for the EU Human Brain Project (which has objectives similar to the US BRAIN Initiative). To pass the Turing test, with “intelligent behavior equivalent to, or indistinguishable from, that of a human,” these projects will need to do more than reproduce how our brains work; they will need to understand how to learn, adapt and change their behavior. To have full AI, a machine needs a brain, but it also needs an understanding of how to use it. Otherwise, machines will be fallible, albeit in a different way, and “machine error” will be at least as dangerous as “human error.” For the foreseeable future in our working lives, AI will be best performed as a “mixed martial art,” taking the best of human and machine to push the boundaries of what we can accomplish.

Will artificial intelligence remain dumb? Or can we build brains that truly learn from data?

The Data Science Insights talk, chaired by Axel Threlfall, Reuters Editor at Large, took place at the Royal Institution of Great Britain, London, and featured Steve Furber CBE.


Learn more

Visit Innovation @ ThomsonReuters.com to learn more about how we are pairing smart data with human expertise and how you can get involved.

About the series

Data Science Insights is a series from Imperial College’s Data Science Institute, in partnership with Thomson Reuters. Through the series, guest presenters will share approaches and insights from data science in their organizations and how it makes an impact across the different markets that Thomson Reuters operates in. The events vary depending on topic, ranging from guest presentations, to interviews and panel discussions.
