Cognitive computing: Under the hood

Bob Arens  Research Scientist, Thomson Reuters

When we talk about “cognitive computing,” we usually refer to it as a single, discrete concept. It’s a new way for computers to interact with humans. It’s the next step in artificial intelligence. In actuality, it’s these things and more.

But what we rarely consider is just how recently cognitive computing was developed; from a technology standpoint, it looks more like a collection of capabilities than a single, streamlined technology.

Much as with an early automobile, one could be forgiven for mistaking a cognitive computing system for a set of high-end engine parts assembled haphazardly in a garage. The parts that make up a cognitive computing system are individually quite complex, and together they contribute to a technology so advanced it looks, indeed, like magic.

Taking cognitive computing out for a spin

Let’s start by looking at a hypothetical interaction between a legal professional and a cognitive computing system. Our legal professional, named Jan, is conducting background research for a product liability suit. Specifically, the engine of her client’s expensive luxury car exploded on the highway. The manufacturer claims the client did not properly maintain the vehicle, and thus it bears no responsibility. Jan’s assignment is to research similar cases to see how litigation might proceed.

Jan turns to her high-tech cognitive-boosted search agent, nicknamed Jamie. Activating voice input, she says: “Show me case law related to vehicle maintenance and liability.”

Jamie thinks for a second, and responds: “I’ve got more than 10,000 cases related to vehicle maintenance and liability. I suggest replacing ‘liability’ with ‘product liability.’” The conversation continues:

Jan: “Narrow it down to product liability.”

Search Agent Jamie: “I’ve got 591 results.”

Jan: “I’d like to filter that further. Do you have a suggestion?”

Search Agent Jamie: “Previous searches for ‘product liability’ include the term ‘negligence.’ Should I add that to the search?”

Jan: “Yes, do that; how many results now?”

Search Agent Jamie: “I’ve got 265 results.”

Jan: “That’s better. What if I add ‘engine’?”

Search Agent Jamie: “48 of the 265 results include the term ‘engine.’”

Jan: “Great, summarize the results by outcome, and we’ll start from there.”

There’s a lot going on in that back-and-forth. The search agent is listening to Jan’s requests, but is also filtering out irrelevant content (like “Great”) while recognizing implicit commands. When Jan says “Yes, do that,” she’s instructing the agent not only that “negligence” should be added to the search, but that the search should be re-run. The agent can look into past searches from other users to make suggestions, it can perform a hypothetical “what if I add…” search, and it can make summaries of the cases without specific directions about what it’s summarizing.

As humans, we see this as a very natural process, but to a computer, it’s anything but. Let’s see how this works from the system’s point of view, starting with a concept called “knowledge engineering.”

What is knowledge engineering?

Knowledge engineering (KE) is the cornerstone of cognitive computing. Without it, nothing gets done. This is because computers, unlike humans, have no inherent capability of associating pieces of information. You can give information to a computer about apples, bananas, and fruit in general, but on its own, it will never come up with the realization that apples and bananas are both fruit. It’s up to humans to create the resources (like taxonomies and ontologies) the system will use in order to “understand” concepts.
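To make this concrete, here’s a minimal sketch (in Python) of the kind of “is-a” taxonomy a KE team might build, along with the ancestor lookup a system could use to associate two concepts. The concept names and structure are invented for illustration, not taken from any real KE asset:

```python
# Minimal sketch of a KE taxonomy: each concept maps to its broader
# ("is-a") parent. All concept names here are illustrative.
TAXONOMY = {
    "apple": "fruit",
    "banana": "fruit",
    "fruit": "food",
    "product liability": "liability",
    "negligence": "tort",
    "liability": "tort",
}

def broader_concepts(concept: str) -> list[str]:
    """Walk up the is-a chain to collect every ancestor of a concept."""
    ancestors = []
    while concept in TAXONOMY:
        concept = TAXONOMY[concept]
        ancestors.append(concept)
    return ancestors

def share_ancestor(a: str, b: str) -> bool:
    """True if two concepts sit under a common broader concept."""
    return bool(set(broader_concepts(a) + [a]) & set(broader_concepts(b) + [b]))

# With this resource, the system can "realize" apples and bananas are both fruit:
assert share_ancestor("apple", "banana")
```

Without a resource like this, the system has no path from “product liability” up to “liability,” and no way to connect the two.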

Knowledge engineering has the greatest impact of any component on the system at large; it defines the universe of data the system can operate over. In Jan and Search Agent Jamie’s exchange above, for example, if the system doesn’t know that “product liability” is a narrower form of the larger concept “liability,” it can’t reformulate Jan’s query.

Later components (especially natural language processing and machine learning) leverage KE resources, and are thus impacted by them. For this reason, it’s important to ensure early on that either KE assets exist to cover all requirements of the system, or that adequate time and money are set aside to develop those assets.

Finding: Human-computer interaction, natural language processing, and search

The first place KE assets will be pressed into service is in finding, and by “finding,” we mean more than just “search.” This part of the cognitive computing flow also involves how the user searches, and how the system aids the user in searching.

Cognitive computing shows great promise in evolving how a human interacts with a computer. The field of human-computer interaction (HCI) exists specifically to study these kinds of interactions. HCI professionals analyze, develop, and craft a user’s complete experience with a system. It’s their job to know what a customer wants in an experience, and to deliver it in a way that’s barely noticeable. Good HCI permeates a user’s entire experience, and bad HCI does the same thing.

One of the most intuitive interfaces of all is language. It’s up to natural language processing (NLP) components to interpret language input into a form the system can use for its tasks. In our example, the voice interface Jan uses is definitely an NLP component. It both understands Jan’s input and creates output for her, but it’s far from the only NLP component, and far from the most complex. Another NLP component needs to take her commands (“Show me case law related to…”) and act on them properly.
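As a rough illustration, an intent-recognition component might map utterances to commands along these lines. The intents and regular-expression patterns below are invented for this sketch; a real system would use far more sophisticated language understanding:

```python
import re

# Toy intent parser: maps a recognized utterance to a (command, argument)
# pair. Intents and patterns are illustrative, not a real system's grammar.
INTENT_PATTERNS = [
    ("NEW_SEARCH",    re.compile(r"show me (?:case law related to )?(.+)", re.I)),
    ("NARROW_SEARCH", re.compile(r"narrow it down to (.+)", re.I)),
    ("SUMMARIZE",     re.compile(r"summarize the results by (.+)", re.I)),
]

def parse_command(utterance: str):
    for intent, pattern in INTENT_PATTERNS:
        match = pattern.search(utterance)
        if match:
            return intent, match.group(1).strip(" ?.!")
    return "UNKNOWN", utterance

print(parse_command("Show me case law related to vehicle maintenance and liability"))
# ('NEW_SEARCH', 'vehicle maintenance and liability')
print(parse_command("Narrow it down to product liability"))
# ('NARROW_SEARCH', 'product liability')
```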

Search is, perhaps, the most obvious technology presented here. While it’s both extremely important and ubiquitous on the Internet, it’s also poorly understood precisely because it’s so familiar.

Search is a balancing act between completeness and over-completeness: finding all the relevant information without, at the same time, burying the user in irrelevancies. Search itself is often aided by other components (especially NLP and machine learning, which we’ll see later) to separate out relevant data and to process the search results for later use.
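In information retrieval terms, this balance is usually measured as precision (how much of what you found is relevant) and recall (how much of what’s relevant you found). A small sketch, using made-up document IDs:

```python
def precision_recall(retrieved: set, relevant: set):
    """Precision: fraction of retrieved results that are relevant.
    Recall: fraction of relevant documents that were retrieved."""
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Illustrative document IDs only.
relevant = {1, 2, 3, 4, 5}
broad_query = set(range(1, 101))   # finds everything, buries the user
narrow_query = {1, 2}              # precise, but misses relevant cases
print(precision_recall(broad_query, relevant))   # (0.05, 1.0)
print(precision_recall(narrow_query, relevant))  # (1.0, 0.4)
```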

Analyzing: Machine learning

Once information is found, the components come together to process and synthesize that information to generate responses. The major component at work here is machine learning (ML), which allows the system to make decisions without explicit programming.

To recap, using our example above: Jan’s requests, having been received by the system, are translated into actions. Machine learning was present in many of those actions, but we’ll focus on the system recommending she change “liability” to “product liability.” There is an effectively infinite number of ways the system could suggest she change her query, but it wants to give her a suggestion that’s both relevant to what she’s doing and likely to reduce her search results.

First, the system generates more specific queries based on the original. It uses KE and NLP resources to recognize that “liability” is a general term with more concrete concepts related to it. It then produces candidates to replace the term. One of these (“product liability”) is more likely to be associated with “vehicle maintenance” than others (such as, say, “attractive nuisance”). The system knows this thanks to machine learning: somewhere, there’s an ML model which, given two arbitrary concepts like “vehicle maintenance” and “product liability,” produces a score indicating how related they are. While the system may never have seen that exact pair of concepts before, the model leverages its resources and training to produce a score nonetheless.
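One common way to build such a model, shown here purely as a sketch, is to represent each concept as a learned vector and score relatedness as cosine similarity. The embeddings below are made up for illustration; a real model would learn them from legal text during training:

```python
import math

# Illustrative concept embeddings; real vectors would be learned, and
# much higher-dimensional.
EMBEDDINGS = {
    "vehicle maintenance": [0.9, 0.2, 0.1],
    "product liability":   [0.8, 0.4, 0.0],
    "attractive nuisance": [0.1, 0.1, 0.9],
}

def relatedness(a: str, b: str) -> float:
    """Cosine similarity: scores any pair of concepts, seen together or not."""
    va, vb = EMBEDDINGS[a], EMBEDDINGS[b]
    dot = sum(x * y for x, y in zip(va, vb))
    norm = math.sqrt(sum(x * x for x in va)) * math.sqrt(sum(x * x for x in vb))
    return dot / norm

print(relatedness("vehicle maintenance", "product liability"))    # high (~0.97)
print(relatedness("vehicle maintenance", "attractive nuisance"))  # low  (~0.24)
```

Because the score comes from the vectors rather than from a lookup table, the model can rate a pair of concepts it has never encountered together.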

Next, the system simulates what would happen if the query were transformed with each suggestion. Running these search simulations lets the system know what effect the suggestions will have on Jan’s results.
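A simulation loop might look roughly like this; run_search() and the three-document corpus are stand-ins for a real search backend:

```python
# Sketch of a "what if" simulation loop. run_search() stands in for the
# real search engine; here it just filters a toy corpus of case descriptions.
CORPUS = [
    "product liability and vehicle maintenance dispute",
    "negligence in vehicle maintenance, product liability claim",
    "attractive nuisance on private property",
]

def run_search(query_terms: list[str]) -> int:
    """Count documents containing every query term."""
    return sum(all(t in doc for t in query_terms) for doc in CORPUS)

candidates = ["product liability", "attractive nuisance"]
for term in candidates:
    hits = run_search(["vehicle maintenance", term])
    print(f"'{term}': {hits} results")
# 'product liability': 2 results
# 'attractive nuisance': 0 results
```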

Deciding

There are many decisions to be made in an interaction with a cognitive computing system; indeed, the idea is that more decisions will be made before getting to hard results than with a traditional search system. A cognitive computing system works with the user to refine their information needs proactively, not waiting for the user to realize help is required.

Continuing from our example, the system has many potential replacements for the term “liability” in Jan’s query. The search simulations tell us how many results each replacement generates. An ML model tells us how similar each replacement is to the original term (“liability”) and to the other core concept in the query (“vehicle maintenance”). Then a ranking algorithm (probably aided by another ML model) takes these pieces of information and tries to find the replacement that combines the highest similarities with the smallest result set. The best replacement, “product liability,” is then presented to Jan as a suggestion, with NLP and HCI components crafting the response.
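Putting the pieces together, a ranking step might weight relatedness against result-set reduction along these lines. The candidate scores, counts, and weights below are all invented for the sketch:

```python
# Rank candidate replacements by combining relatedness scores with how
# much each one shrinks the result set. Scores and weights are illustrative.
candidates = [
    # (term, similarity to the query concepts, simulated result count)
    ("product liability",   0.95, 591),
    ("strict liability",    0.70, 2300),
    ("attractive nuisance", 0.25, 48),
]

def rank_score(similarity: float, n_results: int, total: int = 10_000) -> float:
    """Higher is better: reward relatedness and a smaller result set."""
    reduction = 1.0 - (n_results / total)
    return 0.6 * similarity + 0.4 * reduction

best = max(candidates, key=lambda c: rank_score(c[1], c[2]))
print(best[0])  # product liability
```

Note that “attractive nuisance” loses despite its tiny result set: shrinking the results is worthless if the replacement isn’t related to what Jan is actually researching.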

The secret to streamlining: Evolving the design

Cognitive computing will emerge as a cohesive, game-changing technology, much like the modern car.

(Photo caption: A major evolution occurred between the first Porsche model and the Porsche 718 Boxster S, seen during the 2016 New York International Auto Show media preview in Manhattan, New York, March 23, 2016. REUTERS/Eduardo Munoz)

With time, work, innovation, and a few mistakes, the car gradually emerged as a single machine with several components, as opposed to several components working as a machine.

A similar evolution will take place in cognitive computing. Currently, each component technology is optimized for its own individual use. Going forward, we expect to see these components engineered for interaction with other components, strengthening connections and improving results as the design matures. The Thomson Reuters Center for Cognitive Computing houses experts in every technology mentioned here, empowering Thomson Reuters to direct this evolution together with our customers and partners. It’s going to be an exciting journey.


Learn more

The Center for Cognitive Computing’s mission is to accelerate and drive the development of cognitive capabilities, and impact how knowledge work gets done. The globe-spanning collaboration is headquartered in Toronto, Canada, at The Thomson Reuters Toronto Technology Center.
